Start up an Oracle Database in Kubernetes with Oracle REST Data Services and Database Actions in no time at all!

Hi everyone! Today I want to show you how easy it is to get an instance of Oracle Database up and running in Kubernetes, with Oracle REST Data Services and Database Actions, using the Oracle Database Operator for Kubernetes.

Let’s assume you have a Kubernetes cluster running and you have configured kubectl access to the cluster.

The first step is to install cert-manager, which is a prerequisite for the Oracle Database Operator:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml

It will probably take less than a minute to start up. You can check on it with this command:

kubectl -n cert-manager get pods
NAME                                      READY   STATUS    RESTARTS   AGE
cert-manager-8f49b54c8-xxd5v              1/1     Running   0          7d5h
cert-manager-cainjector-678548868-x5ljp   1/1     Running   0          7d5h
cert-manager-webhook-898d9d956-57m76      1/1     Running   0          7d5h

Next, install the Oracle Database Operator itself:

kubectl apply -f https://raw.githubusercontent.com/oracle/oracle-database-operator/main/oracle-database-operator.yaml

That will start up pretty quickly too, and you can check with this command:

kubectl -n oracle-database-operator-system get pods 

Let’s create a Single Instance Database. The Oracle Database Operator lets you create other types of databases too, including sharded and multitenant databases, and it can also manage cloud database instances like Autonomous Database and Database Cloud Service. But today, I’m going to stick with a simple single instance.

Here’s the Kubernetes YAML file that describes the database we want; I called it sidb.yaml:

apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: sidb-sample
  namespace: default
spec:
  sid: ORCL1
  edition: enterprise
  adminPassword:
    secretName: db-admin-secret
    secretKey: oracle_pwd
    keepSecret: true
  charset: AL32UTF8
  pdbName: orclpdb1
  flashBack: false
  archiveLog: false
  forceLog: false
  enableTCPS: false
  tcpsCertRenewInterval: 8760h
  image:
    pullFrom: container-registry.oracle.com/database/enterprise:latest
    pullSecrets: oracle-container-registry-secret
  persistence:
    size: 100Gi
    storageClass: "oci-bv"
    accessMode: "ReadWriteOnce"
  loadBalancer: false
  serviceAccountName: default
  replicas: 1

If you have not already done so, head over to the Oracle Container Registry, go to the Database group, and accept the license agreement for the Enterprise option. You’ll also want to create a Kubernetes secret with your credentials so the cluster can pull the image:

kubectl create secret docker-registry oracle-container-registry-secret \
  --docker-server=container-registry.oracle.com \
  --docker-username='me@example.com' \
  --docker-password='whatever' \
  --docker-email='me@example.com'

You will want to change the storageClass to match your cluster. I am using Oracle Container Engine for Kubernetes in this example, so I used the “oci-bv” storage class. If you are using a different flavor of Kubernetes, you should check which storage classes are available and use one of them.

This YAML describes a database with the SID ORCL1 and a PDB called orclpdb1. It will get the password for sys, pdbadmin, etc., from a Kubernetes secret – so let’s create that:

kubectl create secret generic db-admin-secret --from-literal=oracle_pwd=Welcome12345

Now we can create the database by applying that YAML file to our cluster:

kubectl apply -f sidb.yaml

It will take a few minutes to start up fully – it has to pull the image (which took 3m30s on my cluster for the “enterprise” image, which is the biggest one), create the database instance the first time (mine took 8m), and apply any patches that are required (just over 1m for me). Subsequent startups will be much faster, of course (I stopped it by scaling to zero replicas, then started it again by scaling back to one replica, and it reached ready/healthy status in about 90s). For reference, my cluster had two nodes, each with one OCPU and 16 GB of RAM. You can check on the progress with this command:

kubectl get singleinstancedatabases -o wide -w

As the database starts up, you will see the connection string and other fields populate in the output.

Now, let’s add Oracle REST Data Services. Here’s a Kubernetes YAML file that describes what we want; I called it ords.yaml:

apiVersion: database.oracle.com/v1alpha1
kind: OracleRestDataService
metadata:
  name: ords-sample
  namespace: default
spec:
  databaseRef: "sidb-sample"
  adminPassword:
    secretName: db-admin-secret
  ordsPassword:
    secretName: ords-secret
  image:
    pullFrom: container-registry.oracle.com/database/ords:21.4.2-gh
  restEnableSchemas:
  - schemaName: mark
    enable: true
    urlMapping: mark

You’ll need to create a secret to hold the password, for example:

kubectl create secret generic ords-secret --from-literal=oracle_pwd=Welcome12345

You can apply that to your cluster with this command:

kubectl apply -f ords.yaml

And we can check on progress with this command:

kubectl get oraclerestdataservice -w

As it becomes ready, you will see the URLs for the Database API REST endpoint and for Database Actions. Mine took about 2m to reach ready/healthy status.

If your nodes are on a private network, the quickest way to access the REST APIs and Database Actions is to use a port forward. You can get the name of the ORDS pod and start a port forwarding session with commands like this:

kubectl get pods
kubectl port-forward pod/ords-sample-g4wc7 8443

Now you can hit the Database API REST endpoint with curl:

curl -k  https://localhost:8443/ords/orclpdb1/_/db-api/stable/
{"links":[{"rel":"self","href":"https://localhost:8443/ords/orclpdb1/_/db-api/stable/"},{"rel":"describedby","href":"https://localhost:8443/ords/orclpdb1/_/db-api/stable/metadata-catalog/"}]}

And you can access Database Actions at this address: https://localhost:8443/ords/sql-developer

On the login page, enter ORCLPDB1 for the PDB Name and mark as the user. Then on the password page enter Welcome12345, and you are good to go!

While we are at it, let’s also get SQLcl access to the database.

Again, we can use port forwarding to access the database from outside the cluster:

kubectl port-forward svc/sidb-sample 1521 &

And then connect from SQLcl (if you have not checked out SQLcl yet, you should – it has cool features like command-line completion and history):

sql mark/Welcome12345@//localhost:1521/orclpdb1


SQLcl: Release 22.2 Production on Mon May 01 14:32:57 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Last Successful login time: Mon May 01 2023 14:32:56 -04:00

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> select * from dual;

DUMMY
________
X

SQL>

There you go! That was super quick and easy! Enjoy!


New 23c version of Kafka-compatible Java APIs for Transactional Event Queues published

We just published the new 23c version of the Kafka-compatible Java APIs for Transactional Event Queues to Maven Central, and I wanted to show you how to use them! If you are not familiar with these APIs – they basically allow you to use the standard Kafka Java API with Transactional Event Queues acting as the Kafka broker. The only things you need to change are the broker address and the use of the Oracle versions of KafkaProducer and KafkaConsumer – other than that, your existing Kafka Java code should just work!

We also published updated source and sink Kafka connectors for Transactional Event Queues – but I’ll cover those in a separate post.

Let’s build a Kafka producer and consumer using the updated Kafka-compatible APIs.

Prepare the database

The first thing we want to do is start up the Oracle 23c Free Database. This is very easy to do in a container using a command like this:

docker run --name free23c -d -p 1521:1521 -e ORACLE_PWD=Welcome12345 container-registry.oracle.com/database/free:latest

This will pull the image and start up the database with a listener on port 1521. It will also create a pluggable database (PDB) called “FREEPDB1” and will set the admin passwords to the password you specified in this command.

You can tail the logs to see when the database is ready to use:

docker logs -f free23c

(look for this message...)
#########################
DATABASE IS READY TO USE!
#########################

Also, grab the IP address of the container; we’ll need that to connect to the database:

docker inspect free23c | grep IPA
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAMConfig": null,
                    "IPAddress": "172.17.0.2",

To set up the necessary permissions, you’ll need to connect to the database with a client. If you don’t have one already, I’d recommend trying the new SQLcl CLI which you can download here. Start it up and connect to the database like this (note that your IP address and password may be different):

sql sys/Welcome12345@//172.17.0.2:1521/freepdb1 as sysdba


SQLcl: Release 22.2 Production on Tue Apr 11 12:36:24 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL>

Now, run these commands to create a user called “mark” and give it the necessary privileges:


SQL> create user mark identified by Welcome12345;

User MARK created.

SQL> grant resource, connect, unlimited tablespace to mark;

Grant succeeded.

SQL> grant execute on dbms_aq to mark;

Grant succeeded.

SQL> grant execute on dbms_aqadm to mark;

Grant succeeded.

SQL> grant execute on dbms_aqin to mark;

Grant succeeded.

SQL> grant execute on dbms_aqjms_internal to mark;

Grant succeeded.

SQL> grant execute on dbms_teqk to mark;

Grant succeeded.

SQL> grant execute on DBMS_RESOURCE_MANAGER to mark;

Grant succeeded.

SQL> grant select_catalog_role to mark;

Grant succeeded.

SQL> grant select on sys.aq$_queue_shards to mark;

Grant succeeded.

SQL> grant select on user_queue_partition_assignment_table to mark;

Grant succeeded.

SQL> exec  dbms_teqk.AQ$_GRANT_PRIV_FOR_REPL('MARK');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> quit;

Create a Kafka topic and consumer group using these statements. Note that you could also do this from the Java code, or using the Kafka-compatible Transactional Event Queues REST API (which I wrote about in this post):

begin
  -- Creates a topic named TEQ with 5 partitions and 7 days of retention time
  dbms_teqk.aq$_create_kafka_topic('TEQ', 5); 
  -- Creates a Consumer Group CG1 for Topic TEQ
  dbms_aqadm.add_subscriber('TEQ', subscriber => sys.aq$_agent('CG1', null, null));
end;
/

You should note that the dbms_teqk package is likely to be renamed in the GA release of Oracle Database 23c, but for the Oracle Database 23c Free – Developer Release you can use it.

Ok, we are ready to start on our Java code!

Create a Java project

Let’s create a Maven POM file (pom.xml) and add the dependencies we need for this application. I’ve also included some profiles to make it easy to run the two main entry points we will create – the producer and the consumer. Here’s the content for the pom.xml. Note that I have excluded the osdt_core and osdt_cert transitive dependencies; since we are not using a wallet or SSL in this example, we do not need those libraries:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>okafka-demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>okafka-demo</name>

    <description>OKafka demo</description>

    <properties>
        <java.version>17</java.version>
        <maven.compiler.target>17</maven.compiler.target>
        <maven.compiler.source>17</maven.compiler.source>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.oracle.database.messaging</groupId>
            <artifactId>okafka</artifactId>
            <version>23.2.0.0</version>
            <exclusions>
                <exclusion>
                    <artifactId>osdt_core</artifactId>
                    <groupId>com.oracle.database.security</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>osdt_cert</artifactId>
                    <groupId>com.oracle.database.security</groupId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>

    <profiles>
        <profile>
            <id>consumer</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.SimpleConsumerOKafka</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
        <profile>
            <id>producer</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.SimpleProducerOKafka</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>

</project>

This is a pretty straightforward POM. I just set the project’s coordinates, declared my one dependency, and then created the two profiles so I can run the code easily.

Next, we are going to need a file called ojdbc.properties in the same directory as the POM with this content:

user=mark
password=Welcome12345

The KafkaProducer and KafkaConsumer will use this to connect to the database.
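If you want to sanity-check that file, note that it is read as a standard Java properties file, so the usual java.util.Properties rules apply (one key=value pair per line, # starts a comment). Here’s a quick standalone check – the class name is made up for this example, and the file content is inlined rather than read from disk:

```java
import java.io.StringReader;
import java.util.Properties;

public class OjdbcPropsCheck {
    public static void main(String[] args) throws Exception {
        // Same content as the ojdbc.properties file created above
        String content = "user=mark\npassword=Welcome12345\n";

        // ojdbc.properties is parsed as a standard Java properties file
        Properties props = new Properties();
        props.load(new StringReader(content));

        System.out.println("user = " + props.getProperty("user"));
        System.out.println("password is set: " + (props.getProperty("password") != null));
    }
}
```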

Create the consumer

Ok, now let’s create our consumer. In a directory called src/main/java/com/example, create a new Java file called SimpleConsumerOKafka.java with the following content:

package com.example;

import java.util.Properties;
import java.time.Duration;
import java.util.Arrays;

import org.oracle.okafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class SimpleConsumerOKafka {
  public static void main(String[] args) {
    // set the required properties
    Properties props = new Properties();
    props.put("bootstrap.servers", "172.17.0.2:1521");
    props.put("group.id" , "CG1");
    props.put("enable.auto.commit","false");
    props.put("max.poll.records", 100);

    props.put("key.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");

    props.put("oracle.service.name", "freepdb1");
    props.put("oracle.net.tns_admin", "."); 
    props.put("security.protocol","PLAINTEXT");

    // create the consumer
    Consumer<String , String> consumer = new KafkaConsumer<String, String>(props);		
    consumer.subscribe(Arrays.asList("TEQ"));
 
    int expectedMsgCnt = 4000;
    int msgCnt = 0;
    long startTime = 0;

    // consume messages
    try {
      startTime = System.currentTimeMillis();
      while(true) {
        try {
          ConsumerRecords <String, String> records = 
            consumer.poll(Duration.ofMillis(10_000));

          for (ConsumerRecord<String, String> record : records) {
            System.out.printf("partition = %d, offset = %d, key = %s, value = %s\n ", 
              record.partition(), record.offset(), record.key(), record.value());
            for(Header h: record.headers()) {
              System.out.println("Header: " + h.toString());
            }
          }

          // commit the records we received
          if (records != null && records.count() > 0) {
            msgCnt += records.count();
            System.out.println("Committing records " + records.count());
            try {
              consumer.commitSync();
            } catch(Exception e) {
              System.out.println("Exception in commit " + e.getMessage());
              continue;
            }

            // if we got all the messages we expected, then exit
            if (msgCnt >= expectedMsgCnt ) {
              System.out.println("Received " + msgCnt + ". Expected " +
               expectedMsgCnt +". Exiting Now.");
              break;
            }
          } else {
            System.out.println("No records fetched. Retrying...");
            Thread.sleep(1000);
          }
        } catch(Exception e) {
          System.out.println("Inner Exception " + e.getMessage());
          throw e;
        }			
      }
    } catch(Exception e) {
      System.out.println("Exception from consumer " + e);
      e.printStackTrace();
    } finally {
      long runDuration = System.currentTimeMillis() - startTime;
      System.out.println("Application closing Consumer. Run duration " + 
        runDuration + " ms");
      consumer.close();
    }
  }
}

Let’s walk through this code together.

The first thing we do is prepare the properties for the KafkaConsumer. This is fairly standard, though notice that the bootstrap.servers property contains the address of your database listener:

    Properties props = new Properties();
    props.put("bootstrap.servers", "172.17.0.2:1521");
    props.put("group.id" , "CG1");
    props.put("enable.auto.commit","false");
    props.put("max.poll.records", 100);

    props.put("key.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");

Then, we add some Oracle-specific properties: oracle.service.name is the name of the database service we are connecting to, in our case freepdb1; oracle.net.tns_admin points to the directory containing our ojdbc.properties file; and security.protocol controls whether we use SSL – in this case we do not, so it is set to PLAINTEXT:

    props.put("oracle.service.name", "freepdb1");
    props.put("oracle.net.tns_admin", "."); 
    props.put("security.protocol","PLAINTEXT");

With that done, we can create the KafkaConsumer and subscribe to a topic. Note that we use the Oracle version of KafkaConsumer, which is basically just a wrapper that understands those extra Oracle-specific properties:

import org.oracle.okafka.clients.consumer.KafkaConsumer;

// ...

    Consumer<String , String> consumer = new KafkaConsumer<String, String>(props);		
    consumer.subscribe(Arrays.asList("TEQ"));

The rest of the code is standard Kafka code that polls for records, prints out any it finds, commits them, and then loops until it has received the number of records it expected and then exits.

Run the consumer

We can build and run the consumer with this command:

mvn exec:exec -P consumer

It will connect to the database and start polling for records; of course, there won’t be any yet, because we have not written the producer. It should output a message like this about every ten seconds:

No records fetched. Retrying...

Let’s write that producer!

Create the producer

In a directory called src/main/java/com/example, create a new Java file called SimpleProducerOKafka.java with the following content:

package com.example;

import org.oracle.okafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.header.internals.RecordHeader;

import java.util.Properties;
import java.util.concurrent.Future;

public class SimpleProducerOKafka {
  public static void main(String[] args) {
    long startTime = 0;
    try {
      // set the required properties
      Properties props = new Properties();
      props.put("bootstrap.servers", "172.17.0.2:1521");
      props.put("key.serializer", 
        "org.apache.kafka.common.serialization.StringSerializer");
      props.put("value.serializer", 
        "org.apache.kafka.common.serialization.StringSerializer");
      props.put("batch.size", "5000");
      props.put("linger.ms","500");

      props.put("oracle.service.name", "freepdb1");
      props.put("oracle.net.tns_admin", ".");
      props.put("security.protocol","PLAINTEXT");

      // create the producer
      Producer<String, String> producer = new KafkaProducer<String, String>(props);

      Future<RecordMetadata> lastFuture = null;
      int msgCnt = 4000;
      startTime = System.currentTimeMillis();

      // send the messages
      for (int i = 0; i < msgCnt; i++) {
        RecordHeader rH1 = new RecordHeader("CLIENT_ID", "FIRST_CLIENT".getBytes());
        RecordHeader rH2 = new RecordHeader("REPLY_TO", "TOPIC_M5".getBytes());
				
        ProducerRecord<String, String> producerRecord = 
          new ProducerRecord<String, String>(
            "TEQ", String.valueOf(i), "Test message "+ i
          );
        producerRecord.headers().add(rH1).add(rH2);
				
        lastFuture = producer.send(producerRecord);
      }
			
      // wait for the last one to finish
      lastFuture.get();

      // print summary
      long runTime = System.currentTimeMillis() - startTime;
      System.out.println("Produced "+ msgCnt +" messages in " + runTime + "ms.");
      producer.close();
    }		
    catch(Exception e) {
      System.out.println("Caught exception: " + e );
      e.printStackTrace();
    }
  }
}

This code is quite similar to the consumer. We first set up the Kafka properties, including the Oracle-specific ones. Then we create a KafkaProducer, again using the Oracle version which understands those extra properties. After that we just loop and produce the desired number of records.

Make sure your consumer is still running (or restart it) and then build and run the producer with this command:

mvn exec:exec -P producer

When you do this, it will run for a short time and then print a message like this to let you know it is done:

Produced 4000 messages in 1955ms.

Now take a look at the output in the consumer window. You should see quite a lot of output there. Here’s a short snippet from the end:

partition = 0, offset = 23047, key = 3998, value = Test message 3998
 Header: RecordHeader(key = CLIENT_ID, value = [70, 73, 82, 83, 84, 95, 67, 76, 73, 69, 78, 84])
 Header: RecordHeader(key = REPLY_TO, value = [84, 79, 80, 73, 67, 95, 77, 53])
Committing records 27
Received 4000. Expected 4000. Exiting Now.
Application closing Consumer. Run duration 510201 ms

It prints out a message for each record it finds, including the partition ID, the offset, and the key and value. It then prints out the headers. You will also see commit messages, and at the end it prints out how many records it found and how long it was running for. I left mine running while I got the producer ready to go, so it shows a fairly long duration 🙂 But you can run it again and start the producer immediately after it, and you will see a much shorter run duration.
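By the way, the byte arrays shown for each header are just the UTF-8 bytes of the string values the producer set. A quick standalone snippet (the class name is made up for this example) to confirm:

```java
import java.nio.charset.StandardCharsets;

public class DecodeHeaders {
    public static void main(String[] args) {
        // The byte values printed in the consumer's RecordHeader output
        byte[] clientId = {70, 73, 82, 83, 84, 95, 67, 76, 73, 69, 78, 84};
        byte[] replyTo  = {84, 79, 80, 73, 67, 95, 77, 53};

        // Decoding them recovers the original header strings
        System.out.println(new String(clientId, StandardCharsets.UTF_8)); // FIRST_CLIENT
        System.out.println(new String(replyTo, StandardCharsets.UTF_8));  // TOPIC_M5
    }
}
```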

Well, there you go! That’s a Kafka producer and consumer using the new updated 23c version of the Kafka-compatible Java API for Transactional Event Queues. Stay tuned for more!


Spring Boot Starters for Oracle updated

Hi everyone. We have just published some updates to the Spring Boot Starters for Oracle Database – we added a starter for UCP (Universal Connection Pool) for Spring Boot 3.0.2. This makes it easy to access the Oracle Database from a Spring Boot application – just two steps!

Add a dependency to your Maven POM file (or equivalent)

Here’s the dependency to add:

<dependency>
   <groupId>com.oracle.database.spring</groupId>
   <artifactId>oracle-spring-boot-starter-ucp</artifactId>
   <version>3.0.2</version>   <!-- or 2.7.7 for Spring Boot 2.x --> 
   <type>pom</type>
</dependency>

Add the datasource properties to your Spring Boot application.yaml

Here’s an example, assuming you are also using Spring Data JPA:

spring:
  application:
    name: aqjms
  jpa:
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        dialect: org.hibernate.dialect.Oracle12cDialect
        format_sql: true
      show-sql: true
  datasource:
    url: jdbc:oracle:thin:@//1.2.3.4:1521/pdb1
    username: someuser
    password: somepassword
    driver-class-name: oracle.jdbc.OracleDriver
    type: oracle.ucp.jdbc.PoolDataSource
    oracleucp:
      connection-factory-class-name: oracle.jdbc.pool.OracleDataSource
      connection-pool-name: AccountConnectionPool
      initial-pool-size: 15
      min-pool-size: 10
      max-pool-size: 30

That’s super easy, right?

We are working to add more Spring Boot Starters for Oracle Database to make it even easier to use, and to make sure we cover all the versions you need! Stay tuned for more updates!

p.s. If you use Spring Boot and Oracle Database, be sure to check out Oracle Backend for Spring Boot!


Oracle Backend for Spring Boot (and Parse Platform) introductory video published!

We just published a short YouTube video that introduces the Oracle Backend for Spring Boot (and Parse Platform) which makes it super easy to develop, run and manage Spring Boot microservices and mobile applications leveraging all the power of Oracle’s converged database.

I hope you can check it out!


Implementing the Transactional Outbox pattern using Transactional Event Queues and JMS

Hi, in this post I want to provide an example of how to implement the Transactional Outbox pattern using Transactional Event Queues and JMS with the new Oracle Database 23c Free – Developer Release I mentioned in my last post.

In the Transactional Outbox pattern, we have a microservice that needs to perform a database operation (like an insert) and send a message, and either both or neither of these need to happen.

Unlike other messaging providers, Transactional Event Queues is built into the Oracle Database and has the unique advantage of being able to expose the underlying database transaction to your application. This allows us to perform database and messaging operations in the same transaction – which is exactly what we need to implement this pattern.

Prepare the database

The first thing we want to do is start up the Oracle 23c Free Database. This is very easy to do in a container using a command like this:

docker run --name free23c -d -p 1521:1521 -e ORACLE_PWD=Welcome12345 container-registry.oracle.com/database/free:latest

This will pull the image and start up the database with a listener on port 1521. It will also create a pluggable database (PDB) called “FREEPDB1” and will set the admin passwords to the password you specified in this command.

You can tail the logs to see when the database is ready to use:

docker logs -f free23c

(look for this message...)
#########################
DATABASE IS READY TO USE!
#########################

Also, grab the IP address of the container; we’ll need that to connect to the database:

docker inspect free23c | grep IPA
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAMConfig": null,
                    "IPAddress": "172.17.0.2",

To set up the necessary permissions, you’ll need to connect to the database with a client. If you don’t have one already, I’d recommend trying the new SQLcl CLI which you can download here. Start it up and connect to the database like this (note that your IP address and password may be different):

sql sys/Welcome12345@//172.17.0.2:1521/freepdb1 as sysdba


SQLcl: Release 22.2 Production on Tue Apr 11 12:36:24 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL>

Now, run these commands to create a user called “mark” and give it the necessary privileges:


SQL> create user mark identified by Welcome12345;

User MARK created.

SQL> grant resource , connect, unlimited tablespace to mark;

Grant succeeded.

SQL> grant execute on dbms_aq to mark;

Grant succeeded.

SQL> grant execute on dbms_aqadm to mark;

Grant succeeded.

SQL> grant execute on dbms_aqin to mark;

Grant succeeded.

SQL> grant execute on dbms_aqjms_internal to mark;

Grant succeeded.

SQL> grant execute on dbms_teqk to mark;

Grant succeeded.

SQL> grant execute on DBMS_RESOURCE_MANAGER to mark;

Grant succeeded.

SQL> grant select_catalog_role to mark;

Grant succeeded.

SQL> grant select on sys.aq$_queue_shards to mark;

Grant succeeded.

SQL> grant select on user_queue_partition_assignment_table to mark;

Grant succeeded.

SQL> exec  dbms_teqk.AQ$_GRANT_PRIV_FOR_REPL('MARK');

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> quit;

Ok, we are ready to start on our Java code!

Create the Java project

If you have read my posts before, you’ll know I like to use Maven for my Java projects. Let’s create a Maven POM file (pom.xml) and add the dependencies we need for this application. I’ve also included some profiles to make it easy to run the three main entry points we will create – one to create a queue, one to consume messages, and finally the transactional outbox implementation. Here’s the content for the pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" 
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
         https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId>
	<artifactId>txoutbox</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>txoutbox</name>

	<properties>
		<java.version>17</java.version>
		<maven.compiler.source>17</maven.compiler.source>
		<maven.compiler.target>17</maven.compiler.target>
        </properties>

    <dependencies>
        <dependency>
            <groupId>com.oracle.database.messaging</groupId>
            <artifactId>aqapi</artifactId>
            <version>21.3.0.0</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <scope>compile</scope>
            <version>2.14.2</version>
        </dependency>
        <dependency>
            <groupId>javax.transaction</groupId>
            <artifactId>jta</artifactId>
            <version>1.1</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>jakarta.jms</groupId>
            <artifactId>jakarta.jms-api</artifactId>
            <scope>compile</scope>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>jakarta.management.j2ee</groupId>
            <artifactId>jakarta.management.j2ee-api</artifactId>
            <scope>compile</scope>
            <version>1.1.4</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.jdbc</groupId>
            <artifactId>ojdbc11</artifactId>
            <scope>compile</scope>
            <version>21.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.jdbc</groupId>
            <artifactId>ucp</artifactId>
            <scope>compile</scope>
            <version>21.3.0.0</version>
        </dependency>
    </dependencies>

    <profiles>
        <profile>
            <id>publish</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.Publish</argument>
                                <argument>jack</argument>
                                <argument>jack@jack.com</argument>
                                <argument>0</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>

        <profile>
            <id>consume</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.Consume</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>

        <profile>
            <id>createq</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.example.CreateTxEventQ</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>

</project>

I won’t go into a heap of detail on this or the first two Java classes, since they are fairly standard and I have talked about very similar things before in older posts, including this one for example. I will go into detail on the transactional outbox implementation though, don’t worry!

Create a Java class to create the queue

We are going to need a queue to put messages on, so let me show you how to do that in Java. Transactional Event Queues support various types of queues and payloads. This example shows how to create a queue that uses the JMS format. Create a file called src/main/java/com/example/CreateTxEventQ.java with this content:

package com.example;

import java.sql.SQLException;

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.AQ.AQQueueTableProperty;
import oracle.jms.AQjmsDestination;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class CreateTxEventQ {

    private static String username = "mark";
    private static String password = "Welcome12345";
    private static String url = "jdbc:oracle:thin:@//172.17.0.2:1521/freepdb1";

    public static void main(String[] args) throws AQException, SQLException, JMSException {
        
        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(password);

        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = (AQjmsSession) conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // create properties
        AQQueueTableProperty props = new AQQueueTableProperty("SYS.AQ$_JMS_TEXT_MESSAGE");
        props.setMultiConsumer(true);
        props.setPayloadType("SYS.AQ$_JMS_TEXT_MESSAGE");

        // create queue table, topic and start it
        Destination myTeq = ((AQjmsSession) session).createJMSTransactionalEventQueue("my_txeventq", true);
        ((AQjmsDestination) myTeq).start(session, true, true);

    }

}

As you read through this, you’ll see I’ve hardcoded the username, password and URL for convenience in this file (and the others in this post) – of course we’d never do that in real life, would we 🙂 You should also notice that we get a connection, set the queue table properties – the consumer type (multiple, i.e. pub/sub, so a JMS Topic) and the payload format (JMS) – then create the queue itself and start it up. Easy, right?

You can run this and create the queue with this command:

mvn exec:exec -Pcreateq

If you want to see the queue in the database, you can log in using that mark user you created and run a query:

$ sql mark/Welcome12345@//172.17.0.2:1521/freepdb1

SQLcl: Release 22.2 Production on Tue Apr 11 15:18:58 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.


Last Successful login time: Tue Apr 11 2023 15:18:59 -04:00

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL> select * from USER_QUEUES ;

NAME           QUEUE_TABLE         QID QUEUE_TYPE         MAX_RETRIES    RETRY_DELAY ENQUEUE_ENABLED    DEQUEUE_ENABLED    RETENTION    USER_COMMENT    NETWORK_NAME    SHARDED    QUEUE_CATEGORY               RECIPIENTS
______________ ______________ ________ _______________ ______________ ______________ __________________ __________________ ____________ _______________ _______________ __________ ____________________________ _____________
MY_TXEVENTQ    MY_TXEVENTQ       78567 NORMAL_QUEUE                 5              0   YES                YES              0                                            TRUE       Transactional Event Queue    MULTIPLE

While you’re there, let’s also create a table so we have somewhere to perform the database insert operation:

create table customer ( name varchar2(256), email varchar2(256) ); 

Create the consumer

Let’s create the consumer next. This will be a new Java file in the same directory called Consume.java. Here’s the content:

package com.example;

import java.sql.SQLException;

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicSubscriber;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class Consume {

    private static String username = "mark";
    private static String password = "Welcome12345";
    private static String url = "jdbc:oracle:thin:@//172.17.0.2:1521/freepdb1";
    private static String topicName = "my_txeventq";

    public static void main(String[] args) throws AQException, SQLException, JMSException {

        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(password);

        // create a JMS topic connection and session
        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = 
           (AQjmsSession) conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // create a subscriber on the topic
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicSubscriber subscriber = 
           (AQjmsTopicSubscriber) session.createDurableSubscriber(topic, "my_subscriber");

        System.out.println("Waiting for messages...");

        // wait forever for messages to arrive and print them out
        while (true) {

            // the 1_000 is a one second timeout
            AQjmsTextMessage message = (AQjmsTextMessage) subscriber.receive(1_000); 
            if (message != null) {
                if (message.getText() != null) {
                    System.out.println(message.getText());
                } else {
                    System.out.println();
                }
            }
            session.commit();
        }
    }

}

This one is a fairly standard JMS consumer. It is going to create a subscription to that topic we just created, and wait for messages to arrive, and then just print the content on the screen. Nice and simple. You can run this with this command:

mvn exec:exec -Pconsume

Leave that running so that you see messages as they are produced. Later, when you run the transactional outbox producer, run it in a different window so that you can see what happens in the consumer.

Implement the Transactional Outbox pattern

Yay! The fun part! Here’s the code for this class, which will go into a new Java file in the same directory called Publish.java. I’ll walk through this code step by step.

package com.example;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.jms.AQjmsAgent;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicPublisher;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class Publish {

    private static String username = "mark";
    private static String password = "Welcome12345";
    private static String url = "jdbc:oracle:thin:@//172.17.0.2:1521/freepdb1";
    private static String topicName = "my_txeventq";

    public static void main(String[] args) throws JMSException, SQLException {

        AQjmsTopicPublisher publisher = null;
        TopicSession session = null;
        TopicConnection tconn = null;
        Connection conn = null;

        if (args.length != 3) {
            System.err.println("""
                You must provide 3 arguments - name, email and failure mode
                failure mode:
                  0    do not fail
                  1    fail before insert and publish
                  2    fail after insert, before publish
                  3    fail after insert and publish
            """);
            System.exit(1);
        }
        String name = args[0];
        String email = args[1];
        int failMode = Integer.parseInt(args[2]);

        try {
            // create a topic session
            PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
            ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
            ds.setURL(url);
            ds.setUser(username);
            ds.setPassword(password);

            // create a JMS topic connection and session
            TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
            tconn = tcf.createTopicConnection();
            tconn.start();

            // open a Transactional session
            session = (AQjmsSession) tconn.createSession(true, Session.AUTO_ACKNOWLEDGE);

            // also get the JDBC connection
            conn = ((AQjmsSession) session).getDBConnection();
            conn.setAutoCommit(false);

            // if failMode = 1, fail here
            if (failMode == 1) throw new Exception();

            // first, perform the database operation
            PreparedStatement stmt = conn.prepareStatement("insert into customer (name, email) values (?, ?)");
            stmt.setString(1,  name);
            stmt.setString(2, email);
            stmt.executeUpdate();
            System.out.println("row inserted");

            // if failMode = 2, fail here
            if (failMode == 2) throw new Exception();

            // second, publish the message
            Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
            publisher = (AQjmsTopicPublisher) session.createPublisher(topic);

            AQjmsTextMessage message = (AQjmsTextMessage) session.createTextMessage("new customer with name=" + name + " and email=" + email);
            publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("my_subscriber", null) });
            System.out.println("message sent");

            // if failMode = 3, fail here
            if (failMode == 3) throw new Exception();        

            // we didn't fail - so commit the transaction
            if (failMode == 0) session.commit();

        } catch (Exception e) {
            System.err.println("rolling back");
            if (conn != null) conn.rollback();
        } finally {
            // clean up
            if (publisher != null) publisher.close();
            if (session != null) session.close();
            if (tconn != null) tconn.close();
        }
    }

}

Ok, so the overall structure of the code is as follows:

First, we are going to start a transaction. Then we will perform two operations – insert a record into the customer table, and send a message on a topic. If everything works as expected, we will commit the transaction. Of course, if there is a failure at any point, we will roll back instead. In the code I have included numbered failure points, one for each of these steps.

At the start of the main method, we are going to check we have the expected arguments – the name and email, and the point at which to fail, i.e., which of those failure points to simulate. A “0” indicates that no failure should be simulated. So if we run the code with “mark mark@example.com 2” as the input, we expect it to fail at failure point “2” – after it inserted the row in the table and before it sent the message on the topic.

Next we get both a JMS Connection and a JDBC Connection. This is important because it allows us to have a single transaction. Note the following lines:

// open a Transactional session
session = (AQjmsSession) tconn.createSession(true, Session.AUTO_ACKNOWLEDGE);

// also get the JDBC connection
conn = ((AQjmsSession) session).getDBConnection();
conn.setAutoCommit(false);

We explicitly set the “auto commit” to false on the JDBC connection – we want to control exactly if and when work is committed, we do not want any automatic commits to occur. And on the JMS session we set the “transacted” parameter to true. That’s the first parameter in the createSession() call. This tells it to use the same database transaction.

Next, you will notice that we simulate a failure if the failure point was “1”:

if (failMode == 1) throw new Exception();

If an exception is thrown at this point (or any point), we’d expect to see no new rows in the database and no messages received by the consumer. We can check the table with this query:

select * from customer;

Every time a message is delivered, you will see output like this in the consumer window – so if you do not see that output, no message was produced:

new customer with name=jack and email=jack@jack.com

You can also check directly in the database with this query:

select * from my_txeventq;

The next thing you will see is a standard JDBC Prepared Statement to insert a row into the customer table. Notice that I don’t commit yet.

PreparedStatement stmt = conn.prepareStatement("insert into customer (name, email) values (?, ?)");
stmt.setString(1,  name);
stmt.setString(2, email);
stmt.executeUpdate();
System.out.println("row inserted");

Then you will see failure point “2”.

And next, we have the code to publish a message on the topic:

AQjmsTextMessage message = (AQjmsTextMessage) session.createTextMessage("new customer with name=" + name + " and email=" + email);
publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("my_subscriber", null) });
System.out.println("message sent");

Then you’ll see failure point “3” and then finally the commit!

Next, notice that the catch block contains a rollback on the database connection. You don’t have to roll back the JMS session as well – since they are in the same transaction, this one rollback call is enough to roll back all of the operations.

Run the Transactional Outbox code

Now we’re ready to run the code! First, notice in the POM file we created a profile called “publish” which contains the following configuration:

<configuration>
  <executable>java</executable>
  <arguments>
    <argument>-Doracle.jdbc.fanEnabled=false</argument>
    <argument>-classpath</argument>
    <classpath/>
    <argument>com.example.Publish</argument>
    <argument>jack</argument>
    <argument>jack@jack.com</argument>
    <argument>0</argument>
  </arguments>
</configuration>

The last three arguments are the name, email and the failure point. If you go ahead and run it as is (with failure point 0, meaning no failure) then it should actually get all the way through to the commit. You should see output in the consumer window to let you know the message was produced, and you can check the table in the database to see the new record in there. Run the code like this:

mvn exec:exec -Ppublish

Of course, you’ll see a record in the table and the message.

If you now edit the POM file and change that last argument from 0 to any of the other options and run it again, you’ll notice that it rolls back and you do not get a new record in the table or a message produced on the topic.

How do I know it really worked?

If you’d like to experiment and convince yourself it really is working, try something like commenting out failure point 2 like this:

// if (failMode == 2) throw new Exception();

When you run the code again, you will now see that there is a row in the database that was not rolled back (because the failure never occurred and the exception was never thrown) but the message was never delivered (because the commit was never run, due to failMode being 2, not 0).

If you tweak the failure points you can easily convince yourself that it is in fact working just as expected 🙂
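The key invariant the experiments demonstrate is that the insert and the publish always succeed or fail together. Here is a tiny self-contained sketch (not part of the project – the class and method names are mine) that encodes the expected end state for each failure mode:

```java
// Expected end state for each failure mode in the outbox demo:
// the row insert and the message publish are committed together or not at all.
public class OutboxOutcome {

    // returns {rowCommitted, messageDelivered} for a given failure mode
    static boolean[] expectedOutcome(int failMode) {
        // only failMode 0 reaches the commit; any simulated failure rolls back both
        boolean committed = (failMode == 0);
        return new boolean[] { committed, committed };
    }

    public static void main(String[] args) {
        for (int mode = 0; mode <= 3; mode++) {
            boolean[] o = expectedOutcome(mode);
            System.out.printf("failMode=%d row=%b message=%b%n", mode, o[0], o[1]);
        }
    }
}
```

If you ever observed the row without the message (or vice versa), the pattern would be broken – that is exactly what the single shared transaction prevents.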

So there you go, that’s the Transactional Outbox pattern implemented using Transactional Event Queues with Oracle Database 23c Free – that was pretty easy, right? Hope you enjoyed it, and see you soon!


Big news today – Oracle Database 23c Free—Developer Release just released!

Hi everyone! Big news today, just announced at Oracle CloudWorld in Singapore!

The new Oracle Database 23c Free – Developer Release is now available.

Oracle Database 23c Free – Developer Release is the first release of the next-generation Oracle Database, giving developers a head start on building applications with innovative 23c features that simplify development of modern data-driven apps. The entire feature set of Oracle Database 23c is planned to be generally available within the next 12 months.

It has heaps of new developer-focused features and it’s completely free! And easy to download and use!

My two favorite features are:

  • the new JSON Relational Duality Views which allow you to create a JSON document representation from a number of existing tables, and they are read/write! So you can use JSON in your applications and have the underlying data stored in relational tables. Of course you can store it in JSON too if you want to!
  • JavaScript Stored Procedures, or as I like to think of them – in-database microservices which can scale to zero, with fast startup and scaling, and resource management to prevent noisy neighbors!

I look forward to writing posts about those, and some other exciting new features, really soon.

You can find it here: https://www.oracle.com/database/free/


Session catalog for DevLive Level Up 2023 released!

Hi again! In this earlier post, I mentioned that I am speaking at Level Up 2023. The session catalog has just been released on the event website. You can find my sessions in this stream:

Data strategies for developers – Sessions at a glance

I hope to see some of you there!


I’m speaking at Level Up 2023

Hi! I am going to be speaking at the Level Up 2023 event at Oracle Redwood Shores in March. I will be talking about our new Developer Previews for both Oracle Backend for Spring Boot and Oracle Backend for Parse Platform, and running a hands-on lab where we will use them to build a “Cloud Banking” application in Spring Boot, complete with web and mobile front-end user interfaces. In the lab we’ll explore topics like service discovery, external configuration, workflow, API management, fault tolerance and observability.

If you’re in the Bay Area and you’d like to attend in person – or if you’d like to attend from anywhere digitally – you can find more information and register here:

https://developer.oracle.com/community/events/devlive-level-up-march-2023.html


A first Spring Boot microservice with Oracle

In this post, I want to walk through creating a first simple Spring Boot microservice using Oracle. If you want to follow along, see this earlier post about setting up a development environment.

I want to create a “customer” microservice that I can use to create/register customers, and to get customer details. I want the customer information to be stored in my Oracle database. I am going to create a dedicated schema for this microservice, where it will keep its data. I could create a separate pluggable database, but that seems a little excessive given the simplicity of this service.

So my “customer” data will have the following attributes:

  • Customer ID
  • First name
  • Surname
  • Email address

My service will have endpoints to:

  • Create a customer
  • List all customers
  • Get a customer by ID

I am going to use Spring Boot 3.0.0 with Java 17 and Maven. Spring Boot 3.0.0 was just released (when I started writing this post) and has support for GraalVM native images and better observability and tracing.

Create the project

Let’s start by creating a project. If you set up your development environment like mine, with Visual Studio Code and the Spring Extension Pack, you can type Ctrl+Shift+P to bring up the actions and type in “Spring Init” to find the “Spring Initializr: Create a Maven project” action, then hit enter.

It will now ask you a series of questions. Here’s how I set up my project:

  • Spring Boot Version = 3.0.0
  • Language = Java
  • Group ID = com.redstack
  • Artifact ID = customer
  • Packaging = JAR
  • Java version = 17
  • Dependencies:
    • Spring Web
    • Spring Data JPA
    • Oracle Driver

After that, it will ask you which directory to create the project in. Once you answer all the questions, it will create the project for you and then give you the option to open it (in a new Visual Studio Code window).

Note: If you prefer, you can go to the Spring Initializr website and answer the same questions there instead. It will then generate the project and give you a zip file to download. If you choose this option, just unzip the file and open it in Visual Studio Code.

Whichever approach you take, you should end up with a project open in Code with the standard Maven layout.

I like to trim out a few things that we don’t really need. I tend to delete the “.mvn” directory, the “mvnw” and “mvnw.cmd” files and the “HELP.md” file. Now is also a great time to create a git repository for this code. I like to add/commit all of these remaining files and keep that as my starting point.

Explore the generated code

Here’s the Maven POM (pom.xml) that was generated:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>3.0.0</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.redstack</groupId>
	<artifactId>customer</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>customer</name>
	<description>Demo project for Spring Boot</description>
	<properties>
		<java.version>17</java.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-jpa</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>

		<dependency>
			<groupId>com.oracle.database.jdbc</groupId>
			<artifactId>ojdbc8</artifactId>
			<scope>runtime</scope>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>

There are a few things to note here. The parent is the standard spring-boot-starter-parent and this will bring in a bunch of useful defaults for us. The dependencies list contains the items we chose in the Spring Initializr (as expected) and finally, note the build section has the spring-boot-maven-plugin included. This will let us build and run the Spring Boot application easily from Maven (with “mvn spring-boot:run“).

Let’s add one more dependency:

<dependency>
	<groupId>org.projectlombok</groupId>
	<artifactId>lombok</artifactId>
	<version>1.18.26</version>
</dependency>

Lombok offers various annotations aimed at replacing Java code that is well known for being boilerplate, repetitive, or tedious to write. We’ll use it to avoid writing getters, setters, constructors and builders.

And here is the main CustomerApplication Java class file:

package com.redstack.customer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CustomerApplication {

	public static void main(String[] args) {
		SpringApplication.run(CustomerApplication.class, args);
	}

}

Nothing much to see here. Notice it has the @SpringBootApplication annotation.

Define the Customer Entity

Let’s go ahead and define our data model now. Since we are using JPA, we define our data model using a POJO. Create a Customer.java file in src/main/java/com/redstack/customer with this content:

package com.redstack.customer;

import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.SequenceGenerator;

@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
@Entity
public class Customer {

    @Id
    @SequenceGenerator(
            name = "customer_id_sequence",
            sequenceName = "customer_id_sequence"
    )
    @GeneratedValue(
            strategy = GenerationType.SEQUENCE,
            generator = "customer_id_sequence"
    )
    private Integer id;
    private String firstName;
    private String lastName;
    private String email;
}

Starting from the bottom, we see the definition of the four fields that we wanted for our Customer entity – ID, first and last names, and email address.

The id field has some annotations on it. First it has @Id which identifies it as the key. Then we have a @SequenceGenerator annotation, which tells JPA that we want to create a “sequence” in the database and gives it a name. A sequence is a database object from which multiple users may generate unique integers. The last annotation, @GeneratedValue tells JPA that this field should be populated from that sequence.
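To make the sequence idea concrete, the DDL Hibernate generates for this mapping looks roughly like the following (a sketch only – the exact statement depends on your Hibernate version, and note that @SequenceGenerator’s allocationSize defaults to 50, so the sequence typically increments by 50):

```sql
-- approximately what Hibernate creates and uses for the id field
create sequence customer_id_sequence start with 1 increment by 50;

-- each new Customer draws its key from the sequence, e.g.:
select customer_id_sequence.nextval from dual;
```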

The class also has some annotations on it. It has the JPA @Entity annotation which tells JPA that this is an entity that we want to store in the database. The other annotations are Lombok annotations to save us writing a bunch of boilerplate code. @Data generates getters for all fields, a useful toString method, and hashCode and equals implementations that check all non-transient fields. It will also generate setters for all non-final fields, as well as a constructor. @Builder generates some nice APIs to create instances of our object – we’ll see how we use it later on. And @AllArgsConstructor and @NoArgsConstructor generate pretty much what their names suggest they do.
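To see why @Builder is worth having, here is a hand-rolled sketch of roughly the kind of code it saves us from writing (illustrative only – Lombok’s actual generated code differs in detail, and this standalone class is not part of the project):

```java
// A hand-written approximation of what Lombok's @Builder (plus getters
// from @Data) would generate for the Customer entity.
public class CustomerSketch {
    private final Integer id;
    private final String firstName;
    private final String lastName;
    private final String email;

    private CustomerSketch(Integer id, String firstName, String lastName, String email) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
    }

    public static Builder builder() { return new Builder(); }

    public String getFirstName() { return firstName; }
    public String getEmail() { return email; }

    public static class Builder {
        private Integer id;
        private String firstName;
        private String lastName;
        private String email;

        public Builder id(Integer id) { this.id = id; return this; }
        public Builder firstName(String firstName) { this.firstName = firstName; return this; }
        public Builder lastName(String lastName) { this.lastName = lastName; return this; }
        public Builder email(String email) { this.email = email; return this; }

        public CustomerSketch build() {
            return new CustomerSketch(id, firstName, lastName, email);
        }
    }

    public static void main(String[] args) {
        // fluent construction - this is the API @Builder gives us for free
        CustomerSketch c = CustomerSketch.builder()
                .firstName("Jane").lastName("Doe").email("jane@example.com")
                .build();
        System.out.println(c.getEmail()); // prints jane@example.com
    }
}
```

With Lombok, all of that boilerplate collapses into the single @Builder annotation on the entity.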

Set up the Spring Boot Application Properties

Ok, next let’s set up the JPA configuration in the Spring Boot Application Properties. You will find a file called application.properties in src/main/resources. This file can be in either the “properties” format, or in YAML. I personally prefer to use YAML, so I renamed that file to application.yaml and here is the content:

server:
  port: 8080

spring:
  application:
    name: customer
  datasource:
    username: 'customer'
    url: jdbc:oracle:thin:@//172.17.0.2:1521/pdb1
    password: 'Welcome123'
    driver-class-name: oracle.jdbc.driver.OracleDriver
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.OracleDialect
        format-sql: 'true'
    hibernate:
      ddl-auto: update
    show-sql: 'true'

Let’s look at what we have here. First we set the port to 8080, and the application’s name to “customer”. If you prefer the properties format, these first two settings would look like this:

server.port=8080
spring.application.name=customer

After that we set up the data source. You provide the JDBC URL for your Oracle Database, the username and password, and the JDBC driver class, as shown. Note that the user will need to actually exist in the database. You can create the user by running these statements as an admin user:

create user customer identified by Welcome123;
grant connect, resource to customer;
alter user customer quota unlimited on users;
commit;

The final section of config we see here is the JPA configuration, where we need to declare which “dialect” we are using – this identifies what kind of SQL should be generated, in our case Oracle’s. The format-sql and show-sql settings are just there to make the SQL statements we see in the logs easier to read.

The ddl-auto setting is interesting. Here’s a good article that explains the possible values and what they do. We’ve used update in this example, which “instructs Hibernate to update the database schema by comparing the existing schema with the entity mappings and generate the appropriate schema migration scripts.” That’s a reasonable choice for this scenario, but you should be aware that there are better choices in some cases. For example, if you are actively developing the entity and making changes to it, create-drop might be better for you. And if the database objects already exist and you just want to use them, then none might be the best choice – we’ll talk more about this in a future post!
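As a quick reference, here are the common ddl-auto values in the same YAML layout we used above (the comments summarize each option; only one value can be active at a time):

```yaml
spring:
  jpa:
    hibernate:
      # update      - compare the entities to the existing schema and alter it (used in this post)
      # create-drop - create the schema at startup and drop it at shutdown (handy while iterating on entities)
      # validate    - only check that the schema matches the entities; fail fast if it does not
      # none        - do nothing; use this when the database objects are managed outside the application
      ddl-auto: update
```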

Create the JPA Repository Class

Next, let’s create the JPA Repository class which we can use to save, retrieve and delete entities in/from the database. Create a file called CustomerRepository.java in src/main/java/com/redstack/customer with this content:

package com.redstack.customer;

import org.springframework.data.jpa.repository.JpaRepository;

public interface CustomerRepository extends JpaRepository<Customer, Integer> {
}

Ok, that takes care of our JPA work. Now, let’s get started on our services.

Create the Customer Service

Let’s start with a service to register (create) a new customer. We can start by defining the input data that we expect. Let’s create a CustomerRegistrationRequest.java in the same directory with this content:

package com.redstack.customer;

public record CustomerRegistrationRequest(
    String firstName,
    String lastName,
    String email) {
}

Notice that we did not include the ID, because we are going to get that from the database sequence. So we just need the client/caller to give us the remaining three fields.

Next, we can create our controller. Create a new file called CustomerController.java in the same directory with this content:

package com.redstack.customer;

import org.springframework.http.ResponseEntity;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("api/v1/customers")
public record CustomerController(CustomerService service) {

    @PostMapping
    @ResponseBody
    public ResponseEntity<String> registerCustomer(@RequestBody CustomerRegistrationRequest req) {
        service.registerCustomer(req);
        return ResponseEntity.status(HttpStatus.CREATED).body("Customer registered successfully.\n");
    }
}

So here we used a Java record to define the controller, and we ask Spring to inject the CustomerService for us. Obviously, we have not created that yet – we’ll get to that in a minute! The record has two annotations – @RestController tells Spring to expose a REST API for this record, and @RequestMapping lets us set up the URL path for this controller. Since we set the port to 8080 earlier, and assuming we just run this on our development machine for now, this REST API will have a URL of http://localhost:8080/api/v1/customers.

Next we can define the handlers. Here we have just the first one, to handle HTTP POST requests. We will add others later. Our registerCustomer method will be exposed as the handler for POST requests, because we gave it the @PostMapping annotation, and it will be able to return an HTTP response with a status code and body because we gave it the @ResponseBody annotation. This method accepts the CustomerRegistrationRequest that we defined earlier. Notice that we add the @RequestBody annotation to that method argument. This tells Spring that the data will be provided by the caller as JSON in the HTTP request body (as opposed to being in a query parameter or header, etc.). And this handler simply calls the registerCustomer method in the service and passes through the data.

So, it’s time to write the service! Create a new file called CustomerService.java in the same directory with this content:

package com.redstack.customer;

import org.springframework.stereotype.Service;

@Service
public record CustomerService(CustomerRepository repository) {

    public void registerCustomer(CustomerRegistrationRequest req) {
        Customer customer = Customer.builder()
                .firstName(req.firstName())
                .lastName(req.lastName())
                .email(req.email())
                .build();
        repository.saveAndFlush(customer);
    }
}

Again, we are using a Java record for the service. Records are immutable data classes that require only the type and name of fields. The equals, hashCode, and toString methods, as well as the private, final fields and public constructor, are generated by the Java compiler. You can also include static variables and methods in records. I’m using them here to save a bunch of boilerplate code that I do not want to write.
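Here’s a quick, self-contained demonstration of what the compiler gives us for free with a record – the same shape as our CustomerRegistrationRequest, redeclared locally just for this example:

```java
// Demonstrates the accessors, equals/hashCode, and toString the compiler
// generates for a record, using the same shape as CustomerRegistrationRequest.
public class RecordDemo {
    record CustomerRegistrationRequest(String firstName, String lastName, String email) {}

    public static void main(String[] args) {
        var a = new CustomerRegistrationRequest("Mark", "Nelson", "mark@some.com");
        var b = new CustomerRegistrationRequest("Mark", "Nelson", "mark@some.com");

        // accessor methods are generated with the same names as the components
        System.out.println(a.firstName());

        // equals/hashCode compare by value, not by reference
        System.out.println(a.equals(b));    // two records with equal components are equal

        // toString includes the type name and all the components
        System.out.println(a);
    }
}
```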

We put the @Service annotation on the record to tell Spring that this is a service. In the record arguments, we have Spring inject an instance of our CustomerRepository which we will need to talk to the database.

For now, we just need one method in our service, registerCustomer(). We’ll add more later. This method also accepts the CustomerRegistrationRequest and the first thing we do with it is create a new Customer entity object. Notice that we are using the builder that we auto-generated with Lombok – we never wrote any code to create this builder! Yay! Then, all we need to do is use our JPA repository’s saveAndFlush() method to save that customer in the database. saveAndFlush() will save the entity and immediately flush the pending INSERT to the database, rather than waiting for the flush that normally happens when the transaction completes.

Time to test the application!

Let’s start up our service and test it! Before we start, you might want to connect to your database and satisfy yourself that there is no CUSTOMER table there:

sql customer/Welcome123@//172.17.0.2:1521/pdb1
SQL> select table_name from user_tables;

no rows selected

To run the service, run this Maven command:

mvn spring-boot:run

This will compile the code and then run the service. You will see a bunch of log messages appear. In around the middle you should see something like this:


2023-02-03T11:15:37.827-05:00  INFO 8488 --- [           main] SQL dialect                              : HHH000400: Using dialect: org.hibernate.dialect.OracleDialect
Hibernate: create global temporary table HTE_customer(id number(10,0), email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), rn_ number(10,0) not null, primary key (rn_)) on commit delete rows
Hibernate: create table customer (id number(10,0) not null, email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), primary key (id))

There’s the SQL that it ran to create the CUSTOMER table for us! If you’d like to, you can check in the database with this statement:

SQL> describe customer;

Name          Null?       Type
_____________ ___________ _____________________
ID            NOT NULL    NUMBER(10)
EMAIL                     VARCHAR2(255 CHAR)
FIRST_NAME                VARCHAR2(255 CHAR)
LAST_NAME                 VARCHAR2(255 CHAR)

You can also take a look at the sequence if you would like to:

SQL> select sequence_name, min_value, increment_by, last_number from user_sequences;

SEQUENCE_NAME              MIN_VALUE   INCREMENT_BY    LAST_NUMBER
_______________________ ____________ _______________ ______________
CUSTOMER_ID_SEQUENCE               1              50           1001
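You might notice the INCREMENT_BY of 50. That comes from Hibernate’s default allocationSize of 50 on @SequenceGenerator: rather than making a database round trip for every insert, Hibernate calls the sequence once and hands out a block of 50 ids locally. Here’s a toy plain-Java sketch of the idea (this is just an illustration of block allocation, not Hibernate’s actual optimizer code):

```java
// Toy sketch of block-based id allocation: one "sequence" round trip
// yields a block of allocationSize ids that are handed out locally.
public class BlockedIdAllocator {
    private final int allocationSize;
    private long sequenceValue = 0;   // simulates the database sequence
    private int sequenceCalls = 0;    // counts simulated round trips
    private long next = 0;            // next id to hand out from the current block
    private int remaining = 0;        // ids left in the current block

    public BlockedIdAllocator(int allocationSize) {
        this.allocationSize = allocationSize;
    }

    // Simulated round trip to the database sequence, which increments by allocationSize.
    private long nextFromSequence() {
        sequenceCalls++;
        sequenceValue += allocationSize;
        return sequenceValue;
    }

    public long nextId() {
        if (remaining == 0) {                  // local block exhausted: hit the sequence once
            long hi = nextFromSequence();
            next = hi - allocationSize + 1;    // block covers (hi - allocationSize, hi]
            remaining = allocationSize;
        }
        remaining--;
        return next++;
    }

    public int getSequenceCalls() { return sequenceCalls; }
}
```

With an allocation size of 50, the first 50 ids cost a single sequence call, and the 51st triggers the second.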

Now, let’s invoke the service to test it! We can invoke the service using cURL; we need to do a POST, set the Content-Type header, and provide the data in JSON format:

$ curl -i \
   -X POST \
   -H 'Content-Type: application/json' \
   -d '{"firstName": "Mark", "lastName": "Nelson", "email": "mark@some.com"}' \
    http://localhost:8080/api/v1/customers
HTTP/1.1 201
Content-Type: text/plain;charset=UTF-8
Content-Length: 34
Date: Fri, 03 Feb 2023 17:41:39 GMT

Customer registered successfully.

The “-i” tells cURL to print out the response headers. You can see that we got an HTTP 201 (Created), i.e., success!

Now we see the new record in the database, as expected:

SQL> select * from customer ;

   ID EMAIL            FIRST_NAME    LAST_NAME
_____ ________________ _____________ ____________
    1 mark@some.com    Mark          Nelson

Great, that is working the way we wanted, so we can create customers and have them stored in the database. Now let’s add some endpoints to query customers from the database.

Add a “get all customers” endpoint

The first endpoint we want to add will allow us to get a list of all customers. To do this, let’s add this new method to our controller:

// add these imports
import java.util.List;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseStatus;

// ...

    @GetMapping(produces = {MediaType.APPLICATION_JSON_VALUE})
    @ResponseBody
    @ResponseStatus(HttpStatus.OK)
    public List<Customer> getAllCustomers() {
        return service.getAllCustomers();
    }

Here we have a getAllCustomers() method that simply calls the corresponding method in the service (we’ll write that in a moment) and returns the results. Of course, we have some annotations too. The @GetMapping tells Spring Boot that this method will be exposed as an HTTP GET method handler. The produces defines the output body’s Content-Type, in this case it will be “application/json“. The @ResponseStatus sets the HTTP status code.

Here’s the method we need to add to our CustomerService – notice it just uses a built-in method on the repository to get the data, it’s very simple:

// add this import
import java.util.List;

// ...

    public List<Customer> getAllCustomers() {
        return repository.findAll();
    }

With those changes in place, we can restart the service and call this new GET endpoint like this:

$ curl -i http://localhost:8080/api/v1/customers
HTTP/1.1 200
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 03 Feb 2023 17:55:17 GMT

[{"id":1,"firstName":"Mark","lastName":"Nelson","email":"mark@some.com"}]

You might like to do a few more POSTs and another GET to observe what happens.

Add a “get customer by ID” endpoint

Let’s add the final endpoint that we wanted in our service. We want to be able to get a specific customer using the ID. Here’s the code to add to the controller:

// add these imports
import java.util.Optional;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// ...

    @GetMapping(path="/{id}", produces = {MediaType.APPLICATION_JSON_VALUE})
    @ResponseBody
    public ResponseEntity<Customer> getCustomer(@PathVariable Integer id) {
        Optional<Customer> c = service.getCustomer(id);
        if (c.isPresent()) {
            return ResponseEntity.status(HttpStatus.OK).body(c.get());
        } else {
            return ResponseEntity.status(HttpStatus.NOT_FOUND).body(null);
        }
    }

Here we see some differences from the previous endpoint implementation. This one is a little more sophisticated. First, we have added a path to the @GetMapping annotation to add a positional parameter to the end of the path, so this endpoint will be /api/v1/customers/{id}. In the method arguments we have a @PathVariable annotation to grab that {id} from the path and use it as an argument to our method.

Also, notice that the method returns ResponseEntity<Customer>. This gives us some more control over the response, and allows us to set different HTTP status codes (and if we wanted to we could also control the headers, body, etc.) based on our own business logic.

Inside this method we call our service’s (soon to be written) getCustomer(id) method which returns an Optional<Customer>. Then we check if the Optional actually contains a Customer, indicating that a customer entity/record was found for the specified id, and if so we return it along with an HTTP 200 (OK). If the Optional is empty, then return an HTTP 404 (not found).
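As an aside, the isPresent()/get() pattern works fine, but Optional also supports a more functional style with map and orElseGet. Here’s the same found/not-found decision sketched with plain types, so it can run standalone (in the real controller you would map to ResponseEntity values instead; the names and sample data here are illustrative only):

```java
import java.util.Map;
import java.util.Optional;

public class OptionalLookupDemo {
    // Stand-in for the database: one known customer with id 1.
    private static final Map<Integer, String> CUSTOMERS = Map.of(1, "Mark Nelson");

    // Mirrors repository.findById(id): an empty Optional when there is no match.
    static Optional<String> findById(Integer id) {
        return Optional.ofNullable(CUSTOMERS.get(id));
    }

    // Mirrors the controller's branch: "200" with a body when found, "404" otherwise.
    static String respond(Integer id) {
        return findById(id)
                .map(customer -> "200 " + customer)
                .orElseGet(() -> "404");
    }

    public static void main(String[] args) {
        System.out.println(respond(1));   // found: 200 Mark Nelson
        System.out.println(respond(5));   // not found: 404
    }
}
```

Both styles behave the same way; which one you prefer is largely a matter of taste.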

Here’s the new method to add to the service:

// add this import
import java.util.Optional;

// ...

    public Optional<Customer> getCustomer(Integer id) {
        return repository.findById(id);
    }

This one is fairly simple – we are just calling a standard built-in method on the JPA repository to get the data.

Now we can restart the application, and test the new endpoint by asking for customers that we know exist, and do not exist to observe the different outcomes:

$ curl -i http://localhost:8080/api/v1/customers/1
HTTP/1.1 200
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 03 Feb 2023 18:15:30 GMT

{"id":1,"firstName":"Mark","lastName":"Nelson","email":"mark@some.com"}

$ curl -i http://localhost:8080/api/v1/customers/5
HTTP/1.1 404
Content-Length: 0
Date: Fri, 03 Feb 2023 18:15:37 GMT

Notice the HTTP status codes are different in each case. Also, notice that the JSON returned when a customer is found is just one JSON object {…} not a list [{…}, … ,{…}] as in the get all customers endpoint.

Conclusion

Well there you have it – we have completed our simple customer microservice built using Spring Boot and Oracle Database. I hope you followed along and built it too, and enjoyed learning a bit about Spring Boot and Oracle! Stay tuned for more posts on this topic, each covering a slightly more advanced topic than the last. See you soon!


Two new Backend as a Service offerings live now!

Hi everyone!

For the last few months I have been working on two projects which have just gone live with their first “Developer Preview” releases.

If you’d like to check them out and see what I have been up to, have a look at:

It’s been a lot of fun working on these, and I am really happy to be able to tell you about them at last!

The Oracle Mobile Backend as a Service offering is built on top of Parse and lets you easily build mobile and web apps using any of the Parse SDKs and have all your data stored in an Oracle Autonomous Database in JSON collections. It also includes the Parse Dashboard for managing your application data. It’s easy to install from OCI Marketplace, and once the install is done, you can start hitting those APIs and building your apps right away!

The Oracle Backend as a Service for Spring Cloud lets you easily install a comprehensive runtime environment for Spring Boot applications including a Kubernetes (OKE) cluster, Spring Config Server (with the config data in Oracle Autonomous Database), Spring Eureka Service Registry, APISIX API Gateway and Dashboard, Netflix Conductor, Spring Admin Dashboard, Prometheus, Grafana, Jaeger and Open Telemetry. You can build apps using Spring Data with JPA or JDBC access to the Oracle Autonomous Database. We have included a sample custom Spring component for using Oracle Transactional Event Queueing. There’s a CLI to manage deploying apps into the environment, and managing configuration and database schemas for services. We also included a set of sample applications that demonstrate how to use the platform – these include service discovery, fault tolerance, distributed tracing and so on.

As “Developer Preview” implies – there’s much more to come in this space!

I am planning to write more blog posts really soon to demonstrate how to use both of these offerings. I hope you’ll check them out!
