Playing with Kafka Java Client for TEQ – creating the simplest of producers and consumers

Today I was playing with Kafka Java Client for TEQ, that allows you to use Oracle Transactional Event Queues (formerly known as Sharded Queues) in the Oracle Database just like Kafka.

Kafka Java Client for TEQ is available as a preview on GitHub: https://github.com/oracle/okafka

This preview version has some limitations, documented in the repository, but the main one to be aware of is that you need to use the okafka library, not the regular kafka one, so you would need to change existing Kafka client code to try out the preview.

Preparing the database

To get started, I grabbed a new Oracle Autonomous Database instance on Oracle Cloud, and I opened up the SQL Worksheet in Database Actions and created myself a user. As the ADMIN user, I ran the following commands:

create user mark identified by SomePassword;  -- that's not the real password!
grant connect, resource to mark;
grant create session to mark;
grant unlimited tablespace to mark;
grant execute on dbms_aqadm to mark;
grant execute on dbms_aqin to mark;
grant execute on dbms_aqjms to mark;
grant select_catalog_role to mark;
grant select on gv$instance to mark;
grant select on gv$listener_network to mark;
commit;

And of course, I needed a topic to work with, so I logged on to SQL Worksheet as my new MARK user and created a topic called topic1 with these commands:

begin
    sys.dbms_aqadm.create_sharded_queue(queue_name => 'topic1', multiple_consumers => TRUE); 
    sys.dbms_aqadm.set_queue_parameter('topic1', 'SHARD_NUM', 1);
    sys.dbms_aqadm.set_queue_parameter('topic1', 'STICKY_DEQUEUE', 1);
    sys.dbms_aqadm.set_queue_parameter('topic1', 'KEY_BASED_ENQUEUE', 1);
    sys.dbms_aqadm.start_queue('topic1');
end;

Note that this is for Oracle Database 19c. If you are using 21c, create_sharded_queue is renamed to create_transactional_event_queue, so you will have to update that line.

The topic is empty right now, since we just created it, but here are a couple of queries that will be useful later. We can see the messages in the topic, with details including the enqueue time, status, etc., using this query:

select * from topic1;

This is a useful query to see a count of messages in each status:

select msg_state, count(*)
from aq$topic1
group by msg_state;

Building the OKafka library

Since the preview is not currently available in Maven Central, we need to build the OKafka library and install it in our local Maven repository so that it is available to use as a dependency.

First, clone the repository:

git clone https://github.com/oracle/okafka

Now we can build the uberjar with the included Gradle wrapper:

cd okafka
./gradlew fullJar

This will put the JAR file in clients/build/libs and we can install it into our local Maven repository using this command:

mvn install:install-file \
    -DgroupId=org.oracle.okafka \
    -DartifactId=okafka \
    -Dversion=0.8 \
    -Dfile=clients/build/libs/okafka-0.8-full.jar \
    -DpomFile=clients/okafka-0.8.pom 

Now we are ready to start writing our code!

Creating the Producer

Let’s start by creating our Maven POM file. In a new directory called okafka, I created a file called pom.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" 
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
         https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId>
	<artifactId>okafka</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>okafka</name>

	<properties>
		<java.version>17</java.version>
		<maven.compiler.source>17</maven.compiler.source>
		<maven.compiler.target>17</maven.compiler.target>
		<okafka.version>0.8</okafka.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.oracle.okafka</groupId>
			<artifactId>okafka</artifactId>
			<version>${okafka.version}</version>
		</dependency>
	</dependencies>
</project>

I am using Java 17 for this example, but you could use anything from Java 8 onwards; just update the versions in the properties if you are using an earlier release.

Now let’s create our producer class:

mkdir -p src/main/java/com/example/okafka
touch src/main/java/com/example/okafka/Producer.java

Here’s the content for Producer.java:

package com.example.okafka;

import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.Properties;

import org.oracle.okafka.clients.producer.KafkaProducer;
import org.oracle.okafka.clients.producer.ProducerRecord;

public class Producer {

    private static final String propertiesFilename = "producer.properties";

    public static void main(String[] args) {
        // configure logging level
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "INFO");

        // load props
        Properties props = getProperties();

        String topicName = props.getProperty("topic.name", "TOPIC1");

        try(KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                producer.send(new ProducerRecord<String, String>(
                    topicName, 0, "key", "value " + i));
            }
            System.out.println("sent 100 messages");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static Properties getProperties() {
        Properties props = new Properties();

        try (
                InputStream inputStream = Producer.class
                    .getClassLoader()
                    .getResourceAsStream(propertiesFilename);
        ) {
            if (inputStream != null) {
                props.load(inputStream);
            } else {
                throw new FileNotFoundException(
                     "could not find properties file: " + propertiesFilename);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return props;
    }
}

Let’s walk through this code and talk about what it does.

First, let’s notice the imports. We are importing the OKafka versions of the familiar Kafka classes. These have the same interfaces as the standard Kafka ones, but they work with Oracle TEQ instead:

import org.oracle.okafka.clients.producer.KafkaProducer;
import org.oracle.okafka.clients.producer.ProducerRecord;

In the main() method we first set the log level and then load some properties from our producer.properties config file. The getProperties() method at the end of the file is fairly standard; it just reads the file and returns the contents as a new Properties object.

Let’s see what’s in that producer.properties file, which is located in the src/main/resources directory:

oracle.service.name=xxxxx_prod_high.adb.oraclecloud.com
oracle.instance.name=prod_high
oracle.net.tns_admin=/home/mark/src/okafka/wallet
security.protocol=SSL
tns.alias=prod_high

bootstrap.servers=adb.us-ashburn-1.oraclecloud.com:1522
batch.size=200
linger.ms=100
buffer.memory=326760
key.serializer=org.oracle.okafka.common.serialization.StringSerializer
value.serializer=org.oracle.okafka.common.serialization.StringSerializer
topic.name=TOPIC1

There are two groups of properties in there. The first group provides details about my Oracle Autonomous Database instance, including the location of the wallet file – we’ll get that and set it up in a moment.

The second group contains the normal Kafka properties that you might expect to see if you are familiar with Kafka. Notice that bootstrap.servers lists the address of my Oracle Autonomous Database, not a Kafka broker! Also notice that we are using the serializers (and later, deserializers) provided in the OKafka library, not the standard Kafka ones.

Next, we set the topic name by reading it from the properties file. If it is not there, the second argument provides a default/fallback value:

String topicName = props.getProperty("topic.name", "TOPIC1");

And now we are ready to create the producer and send some messages:

try(KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    for (int i = 0; i < 100; i++) {
        producer.send(new ProducerRecord<String, String>(
            topicName, 0, "key", "value " + i));
    }
    System.out.println("sent 100 messages");
} catch (Exception e) {
    e.printStackTrace();
}

We created the KafkaProducer, using String for both the key and the value in this example.

We have a loop that sends 100 messages, each created with the ProducerRecord class and filled with some placeholder data.
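Note that send() is asynchronous and returns a Future. If you wanted to confirm each message was delivered before moving on, you could block on that Future, as in this sketch; it assumes the preview mirrors the standard Kafka producer API and returns a Future&lt;RecordMetadata&gt; whose get() blocks until the send completes:

// inside the loop above; assumes the preview provides this class:
// import org.oracle.okafka.clients.producer.RecordMetadata;
RecordMetadata metadata = producer.send(new ProducerRecord<String, String>(
    topicName, 0, "key", "value " + i)).get();  // blocks until the send is acknowledged
System.out.println("sent to partition " + metadata.partition()
    + " at offset " + metadata.offset());

The checked exceptions that get() can throw are already covered by the existing catch (Exception e) block.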

Ok, that’s all we need in the code. But we will also need to get the wallet and set it up so Java programs can use it to authenticate. Have a look at this post for details on how to do that! In short: download the wallet from the OCI console, unzip it into a directory called wallet in the same directory as the pom.xml, edit the sqlnet.ora to set the DIRECTORY to the right location (e.g. /home/mark/src/okafka/wallet in my case), and then add your credentials using the setup_wallet.sh script I showed in that post.

Finally, you need to add these lines to the ojdbc.properties file in the wallet directory to tell OKafka the user to connect to the database with:

user=mark
password=SomePassword
oracle.net.ssl_server_dn_match=true
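Before running the producer, it can be worth sanity-checking the wallet setup with a plain JDBC connection. Here is a minimal, hypothetical sketch; it assumes the Oracle JDBC driver and PKI jars are on your classpath, and it reuses the alias and wallet path from the properties above:

import java.sql.Connection;

import oracle.jdbc.pool.OracleDataSource;

public class WalletCheck {
    public static void main(String[] args) throws Exception {
        OracleDataSource ds = new OracleDataSource();
        // same alias and wallet directory as in producer.properties;
        // the user and password are read from ojdbc.properties in the wallet
        ds.setURL("jdbc:oracle:thin:@prod_high?TNS_ADMIN=/home/mark/src/okafka/wallet");
        try (Connection conn = ds.getConnection()) {
            System.out.println("connected: " + conn.isValid(10));
        }
    }
}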

With the wallet set up, we are ready to build and run our code!

mvn clean package
CLASSPATH=target/okafka-0.0.1-SNAPSHOT.jar
CLASSPATH=$CLASSPATH:$HOME/.m2/repository/org/oracle/okafka/okafka/0.8/okafka-0.8.jar
java -classpath $CLASSPATH com.example.okafka.Producer

The output should look like this:

[main] INFO org.oracle.okafka.clients.producer.ProducerConfig - ProducerConfig values: 
        acks = 1
        batch.size = 200
        bootstrap.servers = [adb.us-ashburn-1.oraclecloud.com:1522]
        buffer.memory = 326760
        client.id = 
        compression.type = none
        connections.max.idle.ms = 540000
        enable.idempotence = false
        interceptor.classes = []
        key.serializer = class org.oracle.okafka.common.serialization.StringSerializer
        linger.ms = 100
        max.block.ms = 60000
        max.in.flight.requests.per.connection = 5
        max.request.size = 1048576
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        oracle.instance.name = prod_high
        oracle.net.tns_admin = /home/mark/src/okafka/wallet
        oracle.service.name = xxxxx_prod_high.adb.oraclecloud.com
        partitioner.class = class org.oracle.okafka.clients.producer.internals.DefaultPartitioner
        receive.buffer.bytes = 32768
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retries = 0
        retry.backoff.ms = 100
        security.protocol = SSL
        send.buffer.bytes = 131072
        tns.alias = prod_high
        transaction.timeout.ms = 60000
        transactional.id = null
        value.serializer = class org.oracle.okafka.common.serialization.StringSerializer

[main] WARN org.oracle.okafka.common.utils.AppInfoParser - Error while loading kafka-version.properties :inStream parameter is null
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka version : unknown
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka commitId : unknown
[kafka-producer-network-thread | producer-1] INFO org.oracle.okafka.clients.Metadata - Cluster ID: 
sent 100 messages
[main] INFO org.oracle.okafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.

You can see it dumps out the properties, and then after some informational messages you see the “sent 100 messages” output. Now you might want to go and run that query to look at the messages in the database!

Now, let’s move on to creating a consumer, so we can read those messages back.

Creating the Consumer

The consumer is going to look very similar to the producer, and it will also have its own properties file. Here’s the contents of the properties file first – put this in src/main/resources/consumer.properties:

oracle.service.name=xxxxx_prod_high.adb.oraclecloud.com
oracle.instance.name=prod_high
oracle.net.tns_admin=/home/mark/src/okafka/wallet
security.protocol=SSL
tns.alias=prod_high

bootstrap.servers=adb.us-ashburn-1.oraclecloud.com:1522
group.id=bob
enable.auto.commit=true
auto.commit.interval.ms=10000
key.deserializer=org.oracle.okafka.common.serialization.StringDeserializer
value.deserializer=org.oracle.okafka.common.serialization.StringDeserializer
max.poll.records=100

And here is the content for Consumer.java which you create in src/main/java/com/example/okafka:

package com.example.okafka;

import java.io.FileNotFoundException;
import java.io.InputStream;
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.oracle.okafka.clients.consumer.ConsumerRecord;
import org.oracle.okafka.clients.consumer.ConsumerRecords;
import org.oracle.okafka.clients.consumer.KafkaConsumer;

public class Consumer {
    private static final String propertiesFilename = "consumer.properties";

    public static void main(String[] args) {
        // logging level
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "INFO");


        // load props
        Properties props = getProperties();

        String topicName = props.getProperty("topic.name", "TOPIC1");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topicName));

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(30_000));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(
                record.topic() + " " + 
                record.partition() + " " + 
                record.key() + " " + 
                record.value());
        }
        consumer.close();
    }

    private static Properties getProperties() {
        Properties props = new Properties();

        try (
                InputStream inputStream = Consumer.class
                    .getClassLoader()
                    .getResourceAsStream(propertiesFilename);
        ) {
            if (inputStream != null) {
                props.load(inputStream);
            } else {
                throw new FileNotFoundException(
                    "could not find properties file: " + propertiesFilename);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return props;
    }
}

A lot of this is the same as the producer, so let’s walk through the parts that are different.

First, we load a different properties file, the consumer one, which has a few properties that are relevant for consumers. In particular, we are setting max.poll.records to 100, so we will read at most 100 messages off the topic in each poll. Note that this file does not set topic.name, so the code falls back to the default value of TOPIC1.

Here’s how we create the consumer:

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topicName));

Again, you may notice that this is very similar to Kafka. We are using String as the type for both the key and the value. Notice we provided the appropriate deserializers in the properties file, the ones from the OKafka library, not the standard Kafka ones.

Here’s the actual consumer code:

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(30_000));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(
                record.topic() + " " + 
                record.partition() + " " + 
                record.key() + " " + 
                record.value());
        }
        consumer.close();

We poll for messages (waiting up to 30 seconds), print out some information about each message, and then close our consumer! Again, this is very simple, but it’s enough to test consuming messages.
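In a real application you would normally keep polling in a loop rather than polling just once. Here is a minimal sketch of that shape, again assuming the preview mirrors the standard Kafka consumer API; since auto-commit is enabled in our properties there is no explicit offset commit:

try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1_000));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(record.key() + " " + record.value());
        }
        // offsets are committed automatically every auto.commit.interval.ms (10 seconds here)
    }
} finally {
    consumer.close();  // always release the connection to the database
}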

We can run our Consumer class and should see all of the message data in the output. Here’s how to run it, along with an excerpt of the output:

mvn clean package
CLASSPATH=target/okafka-0.0.1-SNAPSHOT.jar
CLASSPATH=$CLASSPATH:$HOME/.m2/repository/org/oracle/okafka/okafka/0.8/okafka-0.8.jar
java -classpath $CLASSPATH com.example.okafka.Consumer

[main] INFO org.oracle.okafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
        auto.commit.interval.ms = 10000
        auto.offset.reset = latest
        bootstrap.servers = [adb.us-ashburn-1.oraclecloud.com:1522]
        check.crcs = true
        client.id = 
        connections.max.idle.ms = 540000
        default.api.timeout.ms = 60000
        enable.auto.commit = true
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 500
        fetch.min.bytes = 1
        group.id = bob
        heartbeat.interval.ms = 3000
        interceptor.classes = []
        internal.leave.group.on.close = true
        isolation.level = read_uncommitted
        key.deserializer = class org.oracle.okafka.common.serialization.StringDeserializer
        max.partition.fetch.bytes = 1048576
        max.poll.interval.ms = 300000
        max.poll.records = 100
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        oracle.instance.name = prod_high
        oracle.net.tns_admin = /home/mark/src/okafka/wallet
        oracle.service.name = xxxxx_prod_high.adb.oraclecloud.com
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retry.backoff.ms = 100
        security.protocol = SSL
        send.buffer.bytes = 131072
        session.timeout.ms = 10000
        tns.alias = prod_high
        value.deserializer = class org.oracle.okafka.common.serialization.StringDeserializer

[main] WARN org.oracle.okafka.common.utils.AppInfoParser - Error while loading kafka-version.properties :inStream parameter is null
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka version : unknown
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka commitId : unknown
TOPIC1 0 key value 0
TOPIC1 0 key value 1
TOPIC1 0 key value 2
...

So there you go! We successfully created a very simple producer and consumer and we sent and received messages from a topic using the OKafka library and Oracle Transactional Event Queues!


Loading data into Autonomous Data Warehouse using Datapump

Today I needed to load some data in my Oracle Autonomous Database running on Oracle Cloud (OCI). I found this great article that explained just what I needed!

Thanks to Ankur Saini for sharing!


Configuring a Java application to connect to Autonomous Database using Mutual TLS

In this post, I am going to explain how to configure a standalone Java (SE) application to connect to an Oracle Autonomous Database instance running in Oracle Cloud using Mutual TLS.

The first thing you are going to need is an Oracle Autonomous Database instance. If you are reading this post, you probably already know how to get one. But just in case you don’t – here’s a good reference to get you started – and remember, this is available in the “always free” tier, so you can try this out for free!

When you look at your instance in the Oracle Cloud (OCI) console, you will see there is a button labelled DB Connection – go ahead and click on that:

Viewing the Autonomous Database instance in the Oracle Cloud Console.

In the slide out details page, there is a button labelled Download wallet – click on that and save the file somewhere convenient.

Downloading the wallet.

When you unzip the wallet file, you will see it contains a number of files, as shown below, including a tnsnames.ora and sqlnet.ora to tell your client how to access the database server, as well as some wallet files that contain certificates to authenticate to the database:

$ ls
Wallet_MYQUICKSTART.zip

$ unzip Wallet_MYQUICKSTART.zip
Archive:  Wallet_MYQUICKSTART.zip
  inflating: README
  inflating: cwallet.sso
  inflating: tnsnames.ora
  inflating: truststore.jks
  inflating: ojdbc.properties
  inflating: sqlnet.ora
  inflating: ewallet.p12
  inflating: keystore.jks

The first thing you need to do is edit the sqlnet.ora file and make sure the DIRECTORY entry matches the location where you unzipped the wallet, and then add the SSL_SERVER_DN_MATCH=yes option to the file. It should look something like this:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/mark/blog")))
SSL_SERVER_DN_MATCH=yes

Before we set up Mutual TLS, let’s review how we can use this wallet as-is to connect to the database using a username and password. Let’s take a look at a simple Java application that we can use to validate connectivity – you can grab the source code from GitHub:

$ git clone https://github.com/markxnelson/adb-mtls-sample

This repository contains a very simple, single class Java application that just connects to the database, checks that the connection was successful and then exits. It includes a Maven POM file to get the dependencies and to run the application.
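In essence, the test class does something like this (a simplified sketch, not the exact source; see the repository for the real thing):

import java.sql.Connection;

import oracle.jdbc.pool.OracleDataSource;

public class SimpleJDBCTest {
    private static String url = "jdbc:oracle:thin:@myquickstart_high?TNS_ADMIN=/home/mark/blog";
    private static String username = "admin";

    public static void main(String[] args) throws Exception {
        OracleDataSource ds = new OracleDataSource();
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(System.getenv("DB_PASSWORD"));  // the password comes from the environment
        System.out.println("Trying to connect...");
        try (Connection conn = ds.getConnection()) {
            System.out.println("Connected!");
        }
    }
}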

Make sure you can compile the application successfully:

$ cd adb-mtls-sample
$ mvn clean compile

Before you run the sample, you will need to edit the Java class file to set the database JDBC URL and user to match your own environment. Notice these lines in the file src/main/java/com/github/markxnelson/SimpleJDBCTest.java:

// set the database JDBC URL - note that the alias ("myquickstart_high" in this example) and 
// the location of the wallet must be changed to match your own environment
private static String url = "jdbc:oracle:thin:@myquickstart_high?TNS_ADMIN=/home/mark/blog";
    
// the username to connect to the database with
private static String username = "admin";

You need to update these with the correct alias name for your database (it is defined in the tnsnames.ora file in the wallet you downloaded) and the location of the wallet, i.e. the directory where you unzipped the wallet, the same directory where the tnsnames.ora is located.

You also need to set the correct username that the sample should use to connect to your database. Note that the user must exist and have at least the connect privilege in the database.

Once you have made these updates, you can compile and run the sample. Note that this code expects you to provide the password for that user in an environment variable called DB_PASSWORD:

$ export DB_PASSWORD=whatever_it_is
$ mvn clean compile exec:exec

You will see the output from Maven, and toward the end, something like this:

[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ adb-mtls-sample ---
Trying to connect...
Connected!
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

Great! We can connect to the database normally, using a username and password. If you want to be sure, try commenting out the two lines that set the user and password on the data source and run this again – the connection will fail and you will get an error!

Now let’s configure it to use mutual TLS instead.

I included a script called setup_wallet.sh in the sample repository. If you prefer, you can just run that script and provide the username and passwords when asked. If you want to do it manually, then read on!

First, we need to configure the Java class path to include the Oracle Wallet JAR files. Maven will have downloaded these from Maven Central for you when you compiled the application above, so you can find them in your local Maven repository:

  • $HOME/.m2/repository/com/oracle/database/security/oraclepki/19.3.0.0/oraclepki-19.3.0.0.jar
  • $HOME/.m2/repository/com/oracle/database/security/osdt_core/19.3.0.0/osdt_core-19.3.0.0.jar
  • $HOME/.m2/repository/com/oracle/database/security/osdt_cert/19.3.0.0/osdt_cert-19.3.0.0.jar

You’ll need these for the command we run below – you can put them into an environment variable called CLASSPATH for easy access.

export CLASSPATH=$HOME/.m2/repository/com/oracle/database/security/oraclepki/19.3.0.0/oraclepki-19.3.0.0.jar
export CLASSPATH=$CLASSPATH:$HOME/.m2/repository/com/oracle/database/security/osdt_core/19.3.0.0/osdt_core-19.3.0.0.jar
export CLASSPATH=$CLASSPATH:$HOME/.m2/repository/com/oracle/database/security/osdt_cert/19.3.0.0/osdt_cert-19.3.0.0.jar

Here’s the command you will need to run to add your credentials to the wallet (don’t run it yet!):

java \
    -Doracle.pki.debug=true \
    -classpath ${CLASSPATH} \
    oracle.security.pki.OracleSecretStoreTextUI \
    -nologo \
    -wrl "$USER_DEFINED_WALLET" \
    -createCredential "myquickstart_high" \
    $USER >/dev/null <<EOF
$DB_PASSWORD
$DB_PASSWORD
$WALLET_PASSWORD
EOF

First, set the environment variable USER_DEFINED_WALLET to the directory where you unzipped the wallet, i.e. the directory where the tnsnames.ora is located.

export USER_DEFINED_WALLET=/home/mark/blog

You’ll also want to change the alias in this command to match your database alias. In the example above it is myquickstart_high. You get this value from your tnsnames.ora – it’s the same one you used in the Java code earlier.

Now we are ready to run the command. This will update the wallet to add your user’s credentials and associate them with that database alias.

Once we have done that, we can edit the Java source code to comment out (or remove) the two lines that set the user and password:

//ds.setUser(username);
//ds.setPassword(password);

Now you can compile and run the program again, and this time it will get the credentials from the wallet and will use mutual TLS to connect to the database.

$ mvn clean compile exec:exec
... (lines omitted) ...
[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ adb-mtls-sample ---
Trying to connect...
Connected!
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

There you have it! We can now use this wallet to allow Java applications to connect to our database securely. The example we used was pretty simple, but you could imagine putting this wallet into a Kubernetes secret and mounting that secret as a volume for a pod running a Java microservice. This provides separation of the code from the credentials and certificates needed to connect to and authenticate with the database, and helps us to build more secure microservices. Enjoy!


Can Java microservices be as fast as Go?

I recently did a talk with Peter Nagy where we compared Java and Go microservices performance. We published a write up in the Helidon blog over at Medium.


Storing ATP Wallets in a Kubernetes Secret

In this previous post, we talked about how to create a WebLogic datasource for an ATP database. In that example we put the ATP wallet into the domain directly, which is fine if your domain is in a secure environment, but if you want to use ATP from a WebLogic domain running in Kubernetes, you might not want to burn the wallet into the Docker image. Doing so would enable anyone with access to the Docker image to retrieve the wallet.

A more reasonable thing to do in the Kubernetes environment would be to put the ATP wallet into a Kubernetes secret and mount that secret into the container.

You will, of course, need to decide where you are going to mount it and update the sqlnet.ora with the right path, as we did in the previous post. Once that is taken care of, you can create the secret from the wallet using a small script like this:

#!/bin/bash
# Copyright 2019, Oracle Corporation and/or its affiliates. All rights reserved.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: atp-secret
  namespace: default
type: Opaque
data:
  ojdbc.properties: `cat ojdbc.properties | base64 -w0`
  tnsnames.ora: `cat tnsnames.ora | base64 -w0`
  sqlnet.ora: `cat sqlnet.ora | base64 -w0`
  cwallet.sso: `cat cwallet.sso | base64 -w0`
  ewallet.p12: `cat ewallet.p12 | base64 -w0`
  keystore.jks: `cat keystore.jks | base64 -w0`
  truststore.jks: `cat truststore.jks | base64 -w0`
EOF

We need to base64 encode the data that we put into the secret. When you mount the secret on a container (in a pod), Kubernetes will decode it, so it appears to the container in its original form.
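If you are curious about what actually lands in the secret, the encoding the script performs with base64 -w0 is equivalent to this small Java sketch (the file name is just illustrative):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class EncodeWalletFile {
    public static void main(String[] args) throws Exception {
        // read one wallet file and print the single-line base64 form stored in the secret
        byte[] raw = Files.readAllBytes(Paths.get("cwallet.sso"));
        System.out.println(Base64.getEncoder().encodeToString(raw));
    }
}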

Here is an example of how to mount the secret in a container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-weblogic-server
  labels:
    app: my-weblogic-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-weblogic-server
  template:
    metadata:
      labels:
        app: my-weblogic-server
    spec:
      containers:
      - name: my-weblogic-server
        image: my-weblogic-server:1.2
        volumeMounts:
        - mountPath: /shared
          name: atp-secret
          readOnly: true
      volumes:
       - name: atp-secret
         secret:
           defaultMode: 420
           secretName: atp-secret

You will obviously still need to control access to the secret and the running containers, but overall this approach helps to provide a better security posture.


Configuring a WebLogic Data Source to use ATP

In this post I am going to share details about how to configure a WebLogic data source to use ATP.

If you are not familiar with ATP, it is the new Autonomous Transaction Processing service on Oracle Cloud. It provides a fully managed autonomous database. You can create a new database in the OCI console in the Database menu under “Autonomous Transaction Processing” by clicking on that big blue button:

You need to give it a name, choose the number of cores and set an admin password:

It will take a few minutes to provision the database. Once it is ready, click on the database to view details.

Then click on the “DB Connection” button to download the wallet that we will need to connect to the database.

You need to provide a password for the wallet, and then you can download it:

Copy the wallet to your WebLogic server and unzip it. You will see the following files:

[oracle@domain1-admin-server atp]$ ls -l
total 40
-rw-rw-r--. 1 oracle oracle 6661 Feb  4 17:40 cwallet.sso
-rw-rw-r--. 1 oracle oracle 6616 Feb  4 17:40 ewallet.p12
-rw-rw-r--. 1 oracle oracle 3241 Feb  4 17:40 keystore.jks
-rw-rw-r--. 1 oracle oracle   87 Feb  4 17:40 ojdbc.properties
-rw-rw-r--. 1 oracle oracle  114 Feb  4 17:40 sqlnet.ora
-rw-rw-r--. 1 oracle oracle 6409 Feb  4 17:40 tnsnames.ora
-rw-rw-r--. 1 oracle oracle 3336 Feb  4 17:40 truststore.jks

I put these in a directory called /shared/atp. You need to update the sqlnet.ora to have the correct location as shown below:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/shared/atp")))
SSL_SERVER_DN_MATCH=yes

You will need to grab the hostname, port and service name from the tnsnames.ora to create the data source, here is an example:

productiondb_high = (description=
    (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
    (connect_data=(service_name=feqamosccwtl3ac_productiondb_high.atp.oraclecloud.com))
    (security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

You can now log in to the WebLogic console and create a data source, give it a name on the first page:

You can take the defaults on the second page:

And the third:

On the next page, you need to set the database name, hostname and port to the values from the tnsnames.ora:

On the next page you can provide the username and password. In this example I am just using the admin user. In a real life scenario you would probably go and create a “normal” user and use that. You can find details about how to set up SQLPLUS here.

You also need to set a number of properties that are required for ATP, as shown below; you can find more details in the ATP documentation:

oracle.net.tns_admin=/shared/atp
oracle.net.ssl_version=1.2
javax.net.ssl.trustStore=/shared/atp/truststore.jks
oracle.net.ssl_server_dn_match=true
user=admin
javax.net.ssl.keyStoreType=JKS
javax.net.ssl.trustStoreType=JKS
javax.net.ssl.keyStore=/shared/atp/keystore.jks
javax.net.ssl.keyStorePassword=WebLogicCafe1
javax.net.ssl.trustStorePassword=WebLogicCafe1
oracle.jdbc.fanEnabled=false

Also notice that the URL format is jdbc:oracle:thin:@cafedatabase_high; you just need to put the alias name from the tnsnames.ora file in there.
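If you want to sanity-check the URL and these properties outside of WebLogic, a few lines of standalone JDBC will do it. This is just a hedged sketch using the same values; it is not how WebLogic itself creates the connection pool:

import java.sql.Connection;
import java.util.Properties;

import oracle.jdbc.pool.OracleDataSource;

public class AtpUrlCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("oracle.net.tns_admin", "/shared/atp");
        props.setProperty("oracle.net.ssl_server_dn_match", "true");
        props.setProperty("oracle.jdbc.fanEnabled", "false");

        OracleDataSource ds = new OracleDataSource();
        ds.setURL("jdbc:oracle:thin:@cafedatabase_high");
        ds.setConnectionProperties(props);
        ds.setUser("admin");
        ds.setPassword(System.getenv("DB_PASSWORD"));  // avoid hard-coding the real password
        try (Connection conn = ds.getConnection()) {
            System.out.println("connected: " + conn.isValid(10));
        }
    }
}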

On the next page you can target the data source to the appropriate servers, and we are done! Click on the “Finish” button and then you can activate changes if you are in production mode.

You can now go and test the data source: in the “Monitoring” tab, go to “Testing”, select the data source and click on the “Test Data Source” button.

You will see the success message:

Enjoy!


New Steps Store launched in Wercker!

Wercker’s new Steps Store just went live and you can read all about it here:

http://blog.wercker.com/steps-launch-of-new-steps-store

In case you don’t know – Wercker is Oracle’s cloud-based (SaaS) CI/CD platform, which you can use for free at http://www.wercker.com.  Steps are reusable parts that can be used in continuous delivery pipelines.  They are almost all open source and free to use too.  We also have a non-free tier, called “Oracle Container Pipelines”, which gives you dedicated resources to run your pipelines.


Oracle releases the open source Oracle WebLogic Server Kubernetes Operator

I am very happy to be able to announce that we have just released and open sourced the Oracle WebLogic Server Kubernetes Operator, which I have been working on with a great team of people for the last few months!

You can find the official announcement on the WebLogic Server blog and the code is on GitHub at https://github.com/oracle/weblogic-kubernetes-operator.  This initial release is a “Technology Preview” which we really hope people will be interested in playing with and giving feedback on.  We have already had some great feedback from our small group of testers who have been playing with it for the last couple of weeks, and we are very, very appreciative of their input.  We have some great plans for the operator going forward.

Oracle releases certification for WebLogic Server on Kubernetes

In case you missed it, Oracle has certified WebLogic Server on Kubernetes.  You can read all the details here:

https://blogs.oracle.com/weblogicserver/weblogic-server-certification-on-kubernetes


Java EE is moving to the Eclipse Foundation

I’m sure many of you have already heard the news, but in case you missed it, you might want to read all about it here!

Posted in Uncategorized | Leave a comment