August OpenNTF Webinar - XPages Jakarta EE Support In Practice

Aug 16, 2022, 8:16 AM

This Thursday (two days from now), I'll be presenting for OpenNTF's webinar series on the topic of the XPages Jakarta EE Support project. From our summary:

The XPages Jakarta EE Support project on OpenNTF adds an array of modern capabilities to NSF-based Java development. These improvements can be used for wholly-new applications or added incrementally to existing ones.

In this webinar Jesse Gallagher will demonstrate how to use this project to perform common tasks in better ways, such as creating and consuming REST services, writing managed beans with CDI, and using new EL features in XPages. Though these examples will largely use Java, they do not require any knowledge of OSGi or extension library development, nor any tools other than Designer.

This webinar will take place on August 18, 2022 from 11:00 AM to 12:30 PM (New York time).

Register for this webinar at: https://register.gotowebinar.com/register/6878765070462193675

My intent for this is to show the most-common components along with some examples of how I'm using them in practice. I hope it will also be an opportunity for anyone who (reasonably) balks at the opaque monolith to ask questions and get a better idea of whether it'd be helpful for them.

Adding Transactions to the XPages Jakarta EE Support Project

Jul 20, 2022, 4:03 PM

Tags: jakartaee
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. Adding Concurrency to the XPages Jakarta EE Support Project
  11. Adding Transactions to the XPages Jakarta EE Support Project
  12. XPages Jakarta EE 2.9.0 and Next Steps

As my work of going down the list of JEE specs hits diminishing returns, I decided to take a shot at implementing the Jakarta Transactions spec.

Implementation Oddities

This one's a little spicy for a couple of reasons, one of which is that it's really a codified implementation of another spec, the X/Open XA standard, which is an old standard for transaction processing. As is often the case, "old" here also means "fiddly", but fortunately it's not too bad for this need.

Another reason is that, unlike with a lot of the specs I've implemented, all of the existing implementations seem a bit too heavyweight for me to adapt. I may look around again later: I could have missed one, and eventually GlassFish's implementation may spin off. In the meantime, I wrote a from-scratch implementation for the XPages JEE project: it doesn't cover everything in the spec - in particular, it doesn't support suspend/resume for transactions - but it'll work for normal cases in an NSF.

The Spec In Use

Fortunately, while implementations are generally complex (as befits the problem space), the actual spec is delightfully simple for users. There are two main things for an app developer to know about: the jakarta.transaction.UserTransaction interface and the @jakarta.transaction.Transactional annotation. These can be used separately or combined to add transactional behavior. For example:

@Transactional
public void createExampleDocAndPerson() {
	Person person = new Person();
	/* do some business logic */
	person = personRepository.save(person);
	
	ExampleDoc exampleDoc = new ExampleDoc();
	/* do some business logic */
	exampleDoc = repository.save(exampleDoc);
}

The @Transactional annotation here means that the intent is that everything in this method either completes or none of it does. If something to do with ExampleDoc fails, then the creation of the Person doc shouldn't take effect, even though code was already executed to create and save it.

You can also use UserTransaction directly:

@Inject
private UserTransaction transaction;

public void createExampleDocAndPerson() throws Exception {
	transaction.begin();
	try {
		Person person = new Person();
		/* do some business logic */
		person = personRepository.save(person);
	
		ExampleDoc exampleDoc = new ExampleDoc();
		/* do some business logic */
		exampleDoc = repository.save(exampleDoc);

		transaction.commit();
	} catch(Exception e) {
		transaction.rollback();
		// Rethrow so the failure isn't silently swallowed
		throw e;
	}
}

There's no realistic way to implement this generically, so the way the Transactions API takes shape is that specific resources - namely, databases - can opt in to transaction processing. While this won't necessarily save you from, say, sending out an errant email, it can make sure that related records are only ever updated together.

Domino Implementation

This support is fairly common among SQL databases and other established resources, and, fortunately for us, Domino's transaction support became official in 12.0.

So I modified the JNoSQL driver to hook into transactions when appropriate. When it's able to find an open transaction, it begins a native Domino transaction and then registers itself as a javax.transaction.xa.XAResource (a standard Java SE interface) to either commit or roll back that DB transaction as appropriate.
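
To give a rough idea of the shape of that hookup, here's a minimal sketch of such an adapter. XAResource and its methods are standard Java SE, and the transactionBegin()/transactionCommit()/transactionRollback() methods on lotus.domino.Database are the 12.0 additions mentioned above - but the class itself and its error handling are illustrative assumptions on my part, not the project's actual code:

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

import lotus.domino.Database;
import lotus.domino.NotesException;

// Hypothetical sketch: adapts a native Domino transaction to the XA contract
public class DominoXAResource implements XAResource {
	private final Database database;
	
	public DominoXAResource(Database database) {
		this.database = database;
	}
	
	@Override
	public void start(Xid xid, int flags) throws XAException {
		try {
			database.transactionBegin(); // new in Domino 12.0
		} catch(NotesException e) {
			throw new XAException(XAException.XAER_RMERR);
		}
	}
	
	@Override
	public void commit(Xid xid, boolean onePhase) throws XAException {
		try {
			database.transactionCommit();
		} catch(NotesException e) {
			throw new XAException(XAException.XAER_RMERR);
		}
	}
	
	@Override
	public void rollback(Xid xid) throws XAException {
		try {
			database.transactionRollback();
		} catch(NotesException e) {
			throw new XAException(XAException.XAER_RMERR);
		}
	}
	
	// The remaining XA methods can be minimal for single-resource use
	@Override public void end(Xid xid, int flags) { }
	@Override public void forget(Xid xid) { }
	@Override public int prepare(Xid xid) { return XA_OK; }
	@Override public Xid[] recover(int flag) { return new Xid[0]; }
	@Override public boolean isSameRM(XAResource other) { return this == other; }
	@Override public int getTransactionTimeout() { return 0; }
	@Override public boolean setTransactionTimeout(int seconds) { return false; }
}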

From what I can tell, Domino's transaction support is a bit limited compared to what the spec can do. Apparently, Domino's support is based around thread-specific stacks - while this has the nice attribute of supporting nested transactions, it doesn't look to support suspending transactions or propagating them across threads.

Fortunately, the normal case will likely not need those more-arcane capabilities. Considering that few if any Domino applications currently make use of transactions at all, this should be a significant boost.

Next Steps

As I mentioned above, I'll likely take another look around for svelte implementations of the spec, since I'd be more than happy to cast my partial implementation to the wayside. While my version seems to check out in practice, I'd much sooner rely on a proven implementation from elsewhere.

Beyond that, any next steps may come in the form of supporting other databases via JPA. While JDBC drivers may hook into this as it is, I haven't tried that, and this could be a good impetus to adding JPA to the supported stack. That'll come post-2.7.0, though - this release has enough big-ticket items and should now be in a cool-off period before I call it solid and move to the next one.

Adding Concurrency to the XPages Jakarta EE Support Project

Jul 11, 2022, 1:37 PM

Tags: jakartaee
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. Adding Concurrency to the XPages Jakarta EE Support Project
  11. Adding Transactions to the XPages Jakarta EE Support Project
  12. XPages Jakarta EE 2.9.0 and Next Steps

For a little while, I've had a task open for me to investigate the Jakarta Concurrency and MP Context Propagation specs, and this weekend I decided to dive into that. While I've shelved the MicroProfile part for now, I was successful in implementing Concurrency, at least for the most part.

The Spec

The Jakarta Concurrency spec deals with extending Java's default multithreading services - Threads, ExecutorServices, and ScheduledExecutorServices - in a couple ways that make them more capable in Jakarta EE applications. The spec provides Managed variants of these executors, though they extend the base Java interfaces and can be treated the same way by user code.

While the extra methods here and there for task monitoring are nice, and I may work with them eventually, the big-ticket item for my needs is propagating context from the initializer to the thread. By "context" here I mean things like knowledge of the running NSF, its CDI environment, the user making the HTTP request, and so forth. As it shakes out, this is no small task, but the spec makes it workable.

Examples

In its basic form, an ExecutorService lets you submit a task and then either let it run on its own time or use get() to synchronously wait for its execution - sort of like async/await but less built-in. For example:

ExecutorService exec = /* get an ExecutorService */;
String basic = exec.submit(() -> "Hello from executor").get();

A ScheduledExecutorService extends this a bit to allow for future-scheduled and repeating tasks:

String[] val = new String[1];
ScheduledExecutorService exec = /* get a ScheduledExecutorService */;
exec.schedule(() -> { val[0] = "hello from scheduler"; }, 250, TimeUnit.MILLISECONDS);
Thread.sleep(300);
// Now val[0] is "hello from scheduler"

Those examples aren't exactly useful, but hopefully you can get some further ideas. With an ExecutorService, you can spin up multiple concurrent tasks and then wait for them all - in a client project, I do this to speed up large view reading by divvying it up into chunks, for example. Alternatively, you could accept an interactive request from a user and then kick off a thread to do hefty work while returning a response before it's done.
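
As a sketch of that first fan-out pattern - with readViewChunk standing in for whatever per-chunk work you'd actually do, and checked exceptions left unhandled as in the snippets above - the standard invokeAll method covers the "run several tasks and wait for them all" case:

// Split a large job into chunks to process concurrently
List<Callable<List<String>>> tasks = IntStream.range(0, 10)
	.mapToObj(i -> (Callable<List<String>>)() -> readViewChunk(i)) // readViewChunk is a stand-in
	.collect(Collectors.toList());

ExecutorService exec = /* get an ExecutorService */;
// invokeAll blocks until every task has completed
List<String> allEntries = new ArrayList<>();
for(Future<List<String>> future : exec.invokeAll(tasks)) {
	allEntries.addAll(future.get());
}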

There's an example of this sort of thing in XPages up on OpenNTF from about a decade ago. It uses Eclipse Jobs as its concurrency tool of choice, but the idea is largely the same.

The Basic Implementation

For the core implementation code, I grabbed the GlassFish implementation, which was the Reference Implementation back when JEE had Reference Implementations. With that as the baseline, I was responsible for just a few integration tasks.

The devil was in the details, but the core lifecycle wasn't too bad.

JNDI

One intriguing and slightly vexing thing about this API is that the official way to access these executors is to use JNDI, the "Java Naming and Directory Interface", which is something of an old and fiddly spec. It's also one of the specs that remains in standard Java while still living in the javax.* namespace, which always feels anachronistic nowadays.

Anyway, JNDI is used for a bunch of things (like ruining Thanksgiving), but one of them is to provide named objects from a container to an application - kind of like managed beans.

One common use for this is to allow an app container (such as Open Liberty) to manage a JDBC connection to a relational database, allowing the app to just reference it by name and not have to manage the driver and connection specifics. I use that for this blog, in fact.

XPages apps don't really do this, but Domino does include a com.ibm.pvc.jndi.provider.java OSGi bundle that handles JNDI basics. I'm sure there are some proper ways to go about registering services with this, but I couldn't be bothered: in practice, I just call context.rebind(...) and call it a day.
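
For illustration, the registration and lookup round trip looks roughly like this - note that java:comp/DefaultManagedExecutorService is the spec's conventional name, and whether that's exactly what this project binds is my assumption:

InitialContext context = new InitialContext();

// Registration side (done once by the library): bind the executor under the spec's default name
context.rebind("java:comp/DefaultManagedExecutorService", managedExecutor); // managedExecutor created elsewhere

// Application side: look the executor back up by the same name
ManagedExecutorService exec = (ManagedExecutorService)context.lookup("java:comp/DefaultManagedExecutorService");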

Ferrying the Context

The core workhorse of this is the ContextSetupProvider implementation. It's the part that's responsible for being notified when context is going to be shunted around and then doing the work of grabbing what's needed in a portable way and setting it up for threaded code. For my needs, I set up an extension interface that can be used to register different participants, so that the Concurrency bundle doesn't have to retain knowledge about everything.

So far, there are a few of these.
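
Judging from the CDI participant shown later in this post, the contract for these participants looks roughly like the following sketch (the interface name is my placeholder, not necessarily the project's real one):

public interface ContextSetupParticipant {
	// Called on the originating thread: stash whatever context is needed
	void saveContext(ContextHandle contextHandle);
	
	// Called on the worker thread before the task runs: restore that context
	void setup(ContextHandle contextHandle) throws IllegalStateException;
	
	// Called on the worker thread after the task completes: tear it back down
	void reset(ContextHandle contextHandle);
}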

Notes Context

The first of these is the NSFNotesContextParticipant, which takes on the job of identifying the current Notes/XPages environment and preparing an equivalent one in the worker thread. The "Threads and Jobs" project above does something like this using the ThreadSessionExecutor class provided with the runtime, but that didn't really suit my needs.

What this class does is grab the current com.ibm.domino.xsp.module.nsf.NotesContext, pulls the NSFComponentModule and HttpServletRequest from it, and then uses that information to set up the new thread before tearing it down when the task is done.

This process initializes the thread like a NotesThread and also sets appropriate Session and Database objects in the context, so code running in an NSF can use those in their threaded tasks without having to think about the threading:

String userName = exec.submit(() -> "Username is: " + NotesContext.getCurrent().getCurrentSession().getEffectiveUserName()).get();

This class does some checking to make sure it's in an NSF-specific request. I may end up writing an equivalent one for OSGi Servlet/Web Container requests as well.

CDI

Outside of Notes-runtime specifics, the most important context to retain is the CDI context. Fortunately, that one's not too difficult:

@Override
public void saveContext(ContextHandle contextHandle) {
	if(contextHandle instanceof AttributedContextHandle) {
		if(LibraryUtil.isLibraryActive(CDILibrary.LIBRARY_ID)) {
			((AttributedContextHandle)contextHandle).setAttribute(ATTR_CDI, CDI.current());
		}
	}
}

@Override
public void setup(ContextHandle contextHandle) throws IllegalStateException {
	if(contextHandle instanceof AttributedContextHandle) {
		CDI<Object> cdi = ((AttributedContextHandle)contextHandle).getAttribute(ATTR_CDI);
		ConcurrencyCDIContainerLocator.setCdi(cdi);
	}
}

@Override
public void reset(ContextHandle contextHandle) {
	if(contextHandle instanceof AttributedContextHandle) {
		ConcurrencyCDIContainerLocator.setCdi(null);
	}
}

This checks to make sure CDI is enabled for the current app and, if so, uses the standard CDI.current() method to find the container. This is the same method used everywhere and ends up falling to the NSFCDIProvider class to actually locate it. That bounces back to a locator service that returns the thread-set one, and thus the executor is able to find it. Then, threaded code is able to use CDI.current().select(...) to find beans from the application:

String databasePath = exec.submit(() -> "Database is: " + CDI.current().select(Database.class).get()).get();
String applicationGuy = exec.submit(() -> "applicationGuy is: " + CDI.current().select(ApplicationGuy.class).get().getMessage()).get();

Next Steps

To flesh this out, I have some other immediate work to do. For one, I'll want to see if I can ferry over the current JAX-RS application context - that will be needed for using the MicroProfile Rest Client, for example.

Beyond that, I'm considering implementing the MicroProfile Context Propagation spec, which provides some alternate capabilities to go along with this functionality. It may be a bit more work than it's worth for NSF use, but I like to check as many boxes as I can. Those concepts look to have made it into Concurrency 3.0, but, as that version is targeted for Jakarta EE 10, it's almost certain that final implementation builds will require Java 11.

Finally, though, and along similar lines, I'm pondering backporting the @Asynchronous annotation from Concurrency 3.0, which is a CDI extension that allows you to implicitly make a method asynchronous:

@Asynchronous
public CompletableFuture<Object> someExpensiveOperation() {
	Object result = /* do something expensive */;
	return Asynchronous.Result.complete(result);
}
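
Calling such a method is then ordinary CompletableFuture composition - here assuming service is an injected instance of a bean containing the method above:

// Returns immediately; the method body runs on a managed thread
service.someExpensiveOperation()
	.thenAccept(result -> System.out.println("Finished with: " + result));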

With that, it's similar to submitting the task explicitly, but CDI will do all the work of actually making it async when called. We'll see - I've avoided backporting much, but that one is tempting.

WebAuthn/Passkey Login With JavaSapi

Jul 5, 2022, 2:05 PM

Tags: java webauthn
  1. Poking Around With JavaSapi
  2. Per-NSF-Scoped JWT Authorization With JavaSapi
  3. WebAuthn/Passkey Login With JavaSapi

Over the weekend, I got a wild hair to try something that had been percolating in my mind recently: get Passkeys, Apple's term for WebAuthn/FIDO technologies, working with the new Safari against a Domino server. And, though some aspects were pretty non-obvious (it turns out there's a LOT of binary-data stuff in JavaScript now), I got it working thanks to a few tools:

  1. JavaSapi, the "yeah, I know it's not supported" DSAPI Java peer
  2. webauthn.guide, a handy resource
  3. java-webauthn-server, a Java implementation of the WebAuthn server components

Definition

First off: what is all this? All the terms - Passkey, FIDO, WebAuthn - for our purposes here deal with a mechanism for doing public/private key authentication with a remote server. In a sense, it's essentially a web version of what you do with Notes IDs or SSH keys, where the only actual user-provided authentication (a password, biometric input, or the like) happens locally, and then the local machine uses its private key combined with the server's knowledge of the public key to prove its identity.

WebAuthn accomplishes this with the navigator.credentials object in the browser, which handles dealing with the local security storage. You pass objects between this store and the server, first to register your local key and then subsequently to log in with it. There are a lot of details I'm glossing over here, in large part because I don't know the specifics myself, but that's the gist of it.

While browsers have technically had similar cryptographic authentication for a very long time by way of client SSL certificates, this set of technologies makes great strides across the board - enough to make it actually practical to deploy for normal use. With Safari, anything with Touch ID or Face ID can participate in this, while on other platforms and browsers you can use either a security key or similar cryptographic storage. Since I have a little MacBook Air with Touch ID, I went with that.

The Flow

Before getting too much into the specifics, I'll talk about the flow. The general idea with WebAuthn is that the user gets to a point where they're either creating a new account (where a password doesn't yet matter) or logged in via another mechanism, and then they have their browser generate the keypair. In this case, I logged in using the normal Domino login form.

Once in, I have a button on the page that will request that the browser create the keypair. This first makes a request to the server for a challenge object appropriate for the user, then creates the keypair locally, and then POSTs the results back to the server for verification. To the user, that looks like:

WebAuthn key creation in Safari

That keypair will remain in the local device's storage - in this case, Keychain, synced via iCloud in upcoming versions.

Then, I have another button that performs a challenge. In practice, this challenge would be configured on the login page, but it's on the same page for this proof-of-concept. The button causes the browser to request from the server a list of acceptable public keys to go with a given username, and then prompts the user to verify that they want to use a matching one:

WebAuthn assertion in Safari

The implementation details of what to do on success and failure are kind of up to you. Here, I ended up storing active authentication sessions on the server in a Map keying the SessionID cookie value to the user name, but all options are open.
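
As a sketch of that storage - the method names match the WebauthnManager calls shown later in this post, though the backing map is my assumption about the internals:

// Hypothetical fragment of WebauthnManager: pairs SessionID cookie values with user names
private final Map<String, String> authenticatedSessions = new ConcurrentHashMap<>();

public void registerAuthenticatedUser(String sessionId, String userName) {
	authenticatedSessions.put(sessionId, userName);
}

public Optional<String> getAuthenticatedUser(String sessionId) {
	return Optional.ofNullable(authenticatedSessions.get(sessionId));
}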

Dance Implementation

As I mentioned above, my main tool on the server side was java-webauthn-server, which handles a lot of the details of the process. For this experiment, I cribbed their InMemoryRegistrationStorage class, but a real implementation would presumably store this information in an NSF.

For the client side, there's an npm module to handle the specifics, but I was doing this all on a single HTML page and so I just borrowed from it in pieces (particularly the Base64/ByteArray stuff).

On the server, I created an OSGi plugin that contained a com.ibm.pvc.webcontainer.application webapp as well as the JavaSapi service: since they're both in the same OSGi bundle, that meant I could share the same classes and memory space, without having to worry about coordination between two very-distinct parts (as would be the case with DSAPI).

The webapp part itself actually does more of the work, and does so by way of four Servlets: one for the "create keys" options, one for registering a created key, one for the "I want to start a login" pre-flight, and finally one for actually handling the final authentication. As an implementation note, getting this working involved removing guava.jar from the classpath temporarily, as the library in question made heavy use of a version slightly newer than what Domino ships with.

Key Creation

The first Servlet assumes that the user is logged in and then provides a JSON object to the client describing what sort of keypair it should create:

Session session = ContextInfo.getUserSession();
			
PublicKeyCredentialCreationOptions request = WebauthnManager.instance.getRelyingParty()
	.startRegistration(
		StartRegistrationOptions.builder()
			// Creates or retrieves an in-memory object with an associated random "handle" value
		    .user(WebauthnManager.instance.locateUser(session))
		    .build()
	);

// Store in the HTTP session for later verification. Could also be done via cookie or other pairing
req.getSession(true).setAttribute(WebauthnManager.REQUEST_KEY, request);
	
String json = request.toCredentialsCreateJson();
resp.setStatus(200);
resp.setHeader("Content-Type", "application/json"); //$NON-NLS-1$ //$NON-NLS-2$
resp.getOutputStream().write(String.valueOf(json).getBytes());

The client side retrieves these options, parses some Base64'd parts to binary arrays (this is what the npm module would do), and then sends that back to the server to create the registration:

fetch("/webauthn/creationOptions", { "credentials": "same-origin" })
	.then(res => res.json())
	.then(json => {
		json.publicKey.challenge = base64urlToBuffer(json.publicKey.challenge);
		json.publicKey.user.id = base64urlToBuffer(json.publicKey.user.id);
		if(json.publicKey.excludeCredentials) {
			for(var i = 0; i < json.publicKey.excludeCredentials.length; i++) {
				var cred = json.publicKey.excludeCredentials[i];
				cred.id = base64urlToBuffer(cred.id);
			}
		}
		navigator.credentials.create(json)
			.then(credential => {
				// Create a JSON-friendly payload to send to the server
				const payload = {
					type: credential.type,
					id: credential.id,
					response: {
						attestationObject: bufferToBase64url(credential.response.attestationObject),
						clientDataJSON: bufferToBase64url(credential.response.clientDataJSON)
					},
					clientExtensionResults: credential.getClientExtensionResults()
				}
				fetch("/webauthn/create", {
					method: "POST",
					body: JSON.stringify(payload)
				})
			})
	})

The code for the second call on the server parses out the POST'd JSON and stores the registration in the in-memory storage (which would properly be an NSF):

String json = StreamUtil.readString(req.getReader());
PublicKeyCredential<AuthenticatorAttestationResponse, ClientRegistrationExtensionOutputs> pkc = PublicKeyCredential
	.parseRegistrationResponseJson(json);

// Retrieve the request we had stored in the session earlier
PublicKeyCredentialCreationOptions request = (PublicKeyCredentialCreationOptions) req.getSession(true)
	.getAttribute(WebauthnManager.REQUEST_KEY);

// Perform registration, which verifies that the incoming JSON matches the initiated request
RelyingParty rp = WebauthnManager.instance.getRelyingParty();
RegistrationResult result = rp
	.finishRegistration(FinishRegistrationOptions.builder().request(request).response(pkc).build());

// Gather the registration information to store in the server's credential repository
DominoRegistrationStorage repo = WebauthnManager.instance.getRepository();
CredentialRegistration reg = new CredentialRegistration();
reg.setAttestationMetadata(Optional.ofNullable(pkc.getResponse().getAttestation()));
reg.setUserIdentity(request.getUser());
reg.setRegistrationTime(Instant.now());
RegisteredCredential credential = RegisteredCredential.builder()
	.credentialId(result.getKeyId().getId())
	.userHandle(request.getUser().getId())
	.publicKeyCose(result.getPublicKeyCose())
	.signatureCount(result.getSignatureCount())
	.build();
reg.setCredential(credential);
reg.setTransports(pkc.getResponse().getTransports());

Session session = ContextInfo.getUserSession();
repo.addRegistrationByUsername(session.getEffectiveUserName(), reg);

Login/Assertion

Once the client has a keypair and the server knows about the public key, then the client can ask the server for what it would need if one were to log in as a given name, and then uses that information to make a second call. The dance on the client side looks like:

var un = /* fetch the username from somewhere, such as a login form */
fetch("/webauthn/assertionRequest?un=" + encodeURIComponent(un))
	.then(res => res.json())
	.then(json => {
		json.publicKey.challenge = base64urlToBuffer(json.publicKey.challenge);
		if(json.publicKey.allowCredentials) {
			for(var i = 0; i < json.publicKey.allowCredentials.length; i++) {
				var cred = json.publicKey.allowCredentials[i];
				cred.id = base64urlToBuffer(cred.id);
			}
		}
		navigator.credentials.get(json)
			.then(credential => {
				const payload = {
					type: credential.type,
					id: credential.id,
					response: {
						authenticatorData: bufferToBase64url(credential.response.authenticatorData),
						clientDataJSON: bufferToBase64url(credential.response.clientDataJSON),
						signature: bufferToBase64url(credential.response.signature),
						userHandle: bufferToBase64url(credential.response.userHandle)
					},
					clientExtensionResults: credential.getClientExtensionResults()
				}
				fetch("/webauthn/assertion", {
					method: "POST",
					body: JSON.stringify(payload),
					credentials: "same-origin"
				})
			})
	})

That's pretty similar to the middle code block above, really, and contains the same sort of ferrying to and from transport-friendly JSON objects and native credential objects.

On the server side, the first Servlet - which looks up the available public keys for a user name - is comparatively simple:

String userName = req.getParameter("un"); //$NON-NLS-1$
AssertionRequest request = WebauthnManager.instance.getRelyingParty()
	.startAssertion(
		StartAssertionOptions.builder()
			.username(userName)
			.build()
	);

// Stash the current assertion request
req.getSession(true).setAttribute(WebauthnManager.ASSERTION_REQUEST_KEY, request);

String json = request.toCredentialsGetJson();
resp.setStatus(200);
resp.setHeader("Content-Type", "application/json"); //$NON-NLS-1$ //$NON-NLS-2$
resp.getOutputStream().write(String.valueOf(json).getBytes());

The final Servlet handles parsing out the incoming assertion (login) and stashing it in memory as associated with the "SessionID" cookie. That value could be anything that the browser will send with its requests, but "SessionID" works here.

String json = StreamUtil.readString(req.getReader());
PublicKeyCredential<AuthenticatorAssertionResponse, ClientAssertionExtensionOutputs> pkc =
	    PublicKeyCredential.parseAssertionResponseJson(json);

// Retrieve the request we had stored in the session earlier
AssertionRequest request = (AssertionRequest) req.getSession(true).getAttribute(WebauthnManager.ASSERTION_REQUEST_KEY);

// Perform verification, which will ensure that the signed value matches the public key and challenge
RelyingParty rp = WebauthnManager.instance.getRelyingParty();
AssertionResult result = rp.finishAssertion(FinishAssertionOptions.builder()
	.request(request)
	.response(pkc)
	.build());

if(result.isSuccess()) {
	// Keep track of logins
	WebauthnManager.instance.getRepository().updateSignatureCount(result);
	
	// Find the session cookie, which they will have by now
	String sessionId = Arrays.stream(req.getCookies())
		.filter(c -> "SessionID".equalsIgnoreCase(c.getName()))
		.map(Cookie::getValue)
		.findFirst()
		.get();
	WebauthnManager.instance.registerAuthenticatedUser(sessionId, result.getUsername());
}

Trusting the Key

At this point, there's been a dance between the client and server that results in the client being able to perform secure, password-less authentication with the server and the server knowing about the association. The remaining job is just getting the server to actually trust this assertion. That's where JavaSapi comes in.

Above, I used the "SessionID" cookie as a mechanism to store an in-memory association between a browser cookie (which is independent of authentication) to a trusted user. Then, I made a JavaSapi service that looks for this and tries to find an authenticated user in its authenticate method:

@Override
public int authenticate(IJavaSapiHttpContextAdapter context) {
	Cookie[] cookies = context.getRequest().getCookies();
	if(cookies != null) {
		Optional<Cookie> cookie = Arrays.stream(context.getRequest().getCookies())
			.filter(c -> "SessionID".equalsIgnoreCase(c.getName())) //$NON-NLS-1$
			.findFirst();
		if(cookie.isPresent()) {
			String sessionId = cookie.get().getValue();
			Optional<String> user = WebauthnManager.instance.getAuthenticatedUser(sessionId);
			if(user.isPresent()) {
				context.getRequest().setAuthenticatedUserName(user.get(), "WebAuthn"); //$NON-NLS-1$
				return HTEXTENSION_REQUEST_AUTHENTICATED;
			}
		}
	}
	
	return HTEXTENSION_EVENT_DECLINED;
}

And that's all there is to it on the JavaSapi side. Because it shares the same active memory space as the webapp doing the dance, it can use the same WebauthnManager instance to read the in-memory association. You could in theory do this another way with DSAPI - storing the values in an NSF or some other mechanism that can be shared - but this is much, much simpler to do.

Conclusion

This was a neat little project, and it was a good way to learn a bit about some of the native browser objects and data types that I haven't had occasion to work with before. I think this is also something that should be in the product; if you agree, go vote for the ideas from a few years ago.

Rewriting The OpenNTF Site With Jakarta EE: UI

Jun 27, 2022, 3:06 PM

  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

In what may be the last in this series for a bit, I'll talk about the current approach I'm taking for the UI for the new OpenNTF web site. This post will also tread ground I've covered before, when talking about the Jakarta MVC framework and JSP, but it never hurts to reinforce the pertinent aspects.

MVC

The entrypoint for the UI is Jakarta MVC, which is a framework that sits on top of JAX-RS. Unlike JSF or XPages, it leaves most app-structure duties to other components. This is due both to its young age (JSF predates and often gave rise to several things we've discussed so far) and its intent. It's "action-based", where you define an endpoint that takes an incoming HTTP request and produces a response, and generally won't have any server-side UI state. This is as opposed to JSF/XPages, where the core concept is the page you're working with and the page state generally exists across multiple requests.

Your starting point with MVC is a JAX-RS REST service marked with @Controller:

package webapp.controller;

import java.text.MessageFormat;

import bean.EncoderBean;
import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.home.Page;

@Path("/pages")
public class PagesController {
    
    @Inject
    Models models;
    
    @Inject
    Page.Repository pageRepository;
    
    @Inject
    EncoderBean encoderBean;

    @Path("{pageId}")
    @GET
    @Produces(MediaType.TEXT_HTML)
    @Controller
    public String get(@PathParam("pageId") String pageId) {
        String key = encoderBean.cleanPageId(pageId);
        Page page = pageRepository.findBySubject(key)
            .orElseThrow(() -> new NotFoundException(MessageFormat.format("Unable to find page for ID: {0}", key)));
        models.put("page", page); //$NON-NLS-1$
        return "page.jsp"; //$NON-NLS-1$
    }
}

In the NSF, this will respond to requests like /foo.nsf/xsp/app/pages/Some_Page_Name. Most of what is going on here is the same sort of thing we saw with normal REST services: the @Path, @GET, @Produces, and @PathParam are all normal JAX-RS, while @Inject uses the same CDI scaffolding I talked about in the last post.

MVC adds two things here: @Inject Models models and @Controller.

The Models object is conceptually a Map that houses variables you can populate to make accessible via EL on the rendered page. You can think of it like viewScope or requestScope in XPages; it's populated in something like the beforePageLoad phase. Here, I use the Models object to store the Page object I look up with JNoSQL.

The @Controller annotation marks a method or a class as participating in the MVC lifecycle. When placed on a class, it applies to all methods on the class, while placing it on a method specifically allows you to mix MVC and "normal" REST resources in the same class. Doing that would be useful if you want to, for example, provide HTML responses to browsers and JSON responses to API clients at the same resource URL.
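
As a hedged sketch of that kind of mixing - a hypothetical class reusing the types and imports from the PagesController example above - JAX-RS picks the method via content negotiation:

@Path("/pages/{pageId}")
public class MixedPagesController {
	@Inject
	Models models;
	
	@Inject
	Page.Repository pageRepository;
	
	// MVC: renders an HTML page for browsers
	@GET
	@Controller
	@Produces(MediaType.TEXT_HTML)
	public String getHtml(@PathParam("pageId") String pageId) {
		models.put("page", pageRepository.findBySubject(pageId).orElseThrow(NotFoundException::new)); //$NON-NLS-1$
		return "page.jsp"; //$NON-NLS-1$
	}
	
	// Plain JAX-RS: returns the entity as JSON for API clients at the same URL
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public Page getJson(@PathParam("pageId") String pageId) {
		return pageRepository.findBySubject(pageId).orElseThrow(NotFoundException::new);
	}
}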

When a resource method is marked for MVC use, it can return a string that represents either a page to render or a redirection in the form "redirect:some/resource". Here, it's hard-coded to use "page.jsp", but in another situation it could programmatically switch between different pages based on the content of the request or state of the app.

While this looks fairly clean on its own, it's important to bear in mind both the strengths and weaknesses of this approach. I think it will work here, as it does for my blog, because the OpenNTF site isn't heavy on interactive forms. When dealing with forms in MVC, you'll have to have another endpoint to listen for @POST (or other verbs with a shim), process that request from scratch, and return a new page. For example, from the XPages JEE example app:

@Path("create")
@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Controller
public String createPerson(
        @FormParam("firstName") @NotEmpty String firstName,
        @FormParam("lastName") String lastName,
        @FormParam("birthday") String birthday,
        @FormParam("favoriteTime") String favoriteTime,
        @FormParam("added") String added,
        @FormParam("customProperty") String customProperty
) {
    Person person = new Person();
    composePerson(person, firstName, lastName, birthday, favoriteTime, added, customProperty);
    
    personRepository.save(person);
    return "redirect:nosql/list";
}

That's already fiddlier than the XPages version, where you'd bind fields right to bean/document properties, and it gets potentially more complicated from there. In general, the more form-based your app is, the better a fit XPages/JSF is.

JSP

While MVC isn't intrinsically tied to JSP (it ships with several view engine hooks and you can write your own), JSP has the advantage of being built in to all Java webapp servers and is very well fit to purpose. When writing JSPs for MVC, the default location is to put them in WEB-INF/views, which is beneath WebContent in an NSF project:

Screenshot of JSPs in an NSF

The "tags" there are the general equivalent of XPages Custom Controls, and their presence in WEB-INF/tags is convention. An example page (the one used above) will tend to look something like this:

<%@page contentType="text/html" pageEncoding="UTF-8" trimDirectiveWhitespaces="true" session="false" %>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
<t:layout>
    <turbo-frame id="page-content-${page.linkId}">
        <div>
            ${page.html}
        </div>
        
        <c:if test="${not empty page.childPageIds}">
            <div class="tab-container">
                <c:forEach items="${page.cleanChildPageIds}" var="pageId" varStatus="pageLoop">
                    <input type="radio" id="tab${pageLoop.index}" name="tab-group" ${pageLoop.index == 0 ? 'checked="checked"' : ''} />
                    <label for="tab${pageLoop.index}">${fn:escapeXml(encoder.cleanPageId(pageId))}</label>
                </c:forEach>
                    
                <div class="tabs">
                    <c:forEach items="${page.cleanChildPageIds}" var="pageId">
                        <turbo-frame id="page-content-${pageId}" src="xsp/app/pages/${encoder.urlEncode(pageId)}" class="tab" loading="lazy">
                        </turbo-frame>
                    </c:forEach>
                </div>
            </div>
        </c:if>
    </turbo-frame>
</t:layout>

There are, by shared lineage and concept, a lot of similarities with an XPage here. The first four lines of preamble boilerplate are pretty similar to the kind of stuff you'd see in an <xp:view/> element to set up your namespaces and page options. The tag prefixing is the same idea, where <t:layout/> refers to the "layout" custom tag in the NSF and <c:forEach/> refers to a core control tag that ships with the standard tag library, JSTL. The <turbo-frame/> business isn't JSP - I'll deal with that later.

The bits of EL here - all wrapped in ${...} - are from Expression Language 4.0, which is the current version of XPages's aging EL. On this page, the expressions are able to resolve variables that we explicitly put in the Models object, such as page, as well as CDI beans with the @Named annotation, such as encoderBean. There are also a number of implicit objects like request, but they're not used here.

In general, this is safely thought of as an XPage where you make everything load-time-bound and set viewState="nostate". The same sorts of concepts are all there, but there's no concept of a persistent component that you interact with. Any links, buttons, and scripts will all go to the server as a fresh request, not modifying an existing page. You can work with application and session scopes, but there's no "view" scope.

Hotwired Turbo

Though this app doesn't have much need for a lot of XPages's capabilities, I do like a few components even for a mostly "read-only" app. In particular, the <xe:djContentPane/> and <xe:djTabContainer/> controls have the delightful capability of deferring evaluation of their contents to later requests. This is a powerful way to speed up initial page load and, in the case of the tab container, skip needing to render parts of the page the user never uses.

For this and a couple other uses, I'm a fan of Hotwired Turbo, which is a library that grew out of 37 Signals's Rails-based development. The goal of Turbo and the other Hotwired components is to keep the benefits of server-based HTML rendering while mixing in a lot of the niceties of JS-run apps. There are two things that Turbo is doing so far in this app.

The first capability is dubbed "Turbo Drive", and it's sort of a freebie: you enable it for your app, tell it what is considered the app's base URL, and then it will turn any in-app links into "partial refresh" links: it downloads the page in the background and replaces just the changed part on the page. Though this is technically doing more work than a normal browser navigation, it ends up being faster for the user interface. And, since it also updates the URL to match the destination page and doesn't require manual modification of links, it's a drop-in upgrade that will also degrade gracefully if JavaScript isn't enabled.

The second capability is <turbo-frame/> up there, and it takes a bit more buy-in to the JS framework in your app design. The way I'm using Turbo Frames here is to support the page structure of OpenNTF, which is geared around a "primary" page as well as zero or more referenced pages that show up in tabs. Here, I'm buying in to Turbo Frames by surrounding the whole page in a <turbo-frame/> element with an id using the page's key, and then I reference each "sub-page" in a tab with that same ID. When loading the frame, Turbo makes a call to the src page, finds the element with the matching id value, and drops it in place inside the main document. The loading="lazy" parameter means that it defers loading until the frame is visible in the browser, which is handy when using the HTML/CSS-based tabs I have here.

I've been using this library for a while now, and I've been quite pleased. Though it was created for use with Rails, the design is independent of the server implementation, and the idioms fit perfectly with this sort of Java app too.

Conclusion

I think that wraps it up for now. As things progress, I may have more to add to this series, but my hope is that the app doesn't have to get much more complicated than the sort of stuff seen in this series. There are certainly big parts to tackle (like creating and managing projects), but I plan to do that by composing these elements. I remain delighted with this mode of NSF-based app development, and look forward to writing more clean, semi-declarative code in this vein.

Rewriting The OpenNTF Site With Jakarta EE: Beans

Jun 24, 2022, 5:03 PM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

Now that I've covered the basics of REST services and data access in the new OpenNTF web site, I'll dive a bit into the use of CDI for beans. The two previous topics implied some of the deeper work of CDI, with the @Inject annotation being used by CDI to supply bean and proxy values, but in those cases it was fine to just assume what it was doing.

CDI itself - Contexts and Dependency Injection - contains more capabilities than I'll cover here. Some of them, like its event/observer system, are things that I'll probably end up using in this app, but haven't made their way in yet. For now, I'll talk about the basic "managed beans" level and then build to the way Jakarta NoSQL uses its proxy-bean capabilities.

Managed Beans

In the OpenNTF site, I use a couple beans, some to provide scoped state and some to provide "services" for the app. I'll start with one of the simpler ones, a bean used to convert Markdown to HTML using CommonMark. I use a more-complicated version of this bean in my blog, but for now the OpenNTF one is small:

package bean;

import org.commonmark.node.Node;
import org.commonmark.parser.Parser;
import org.commonmark.renderer.html.HtmlRenderer;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Named;

@ApplicationScoped
@Named("markdown")
public class MarkdownBean {
    private Parser markdown = Parser.builder().build();
    private HtmlRenderer markdownHtml = HtmlRenderer.builder()
            .build();

    public String toHtml(final String text) {
        Node parsed = markdown.parse(text);
        return markdownHtml.render(parsed);
    }
}

The core concepts here are exactly the same as you have with XPages Managed Beans. The "bean" itself is just a Java object and doesn't need to have any particular special characteristics other than, if it's stored in a serialized context, being Serializable or otherwise storable. The only difference here for that purpose is that, rather than being configured in faces-config.xml, the bean attributes are defined inline (there's a "beans.xml" for explicit definitions, but it's not needed in common cases). Here, the @ApplicationScoped annotation will cover its scope and the @Named annotation will allow it to be addressable by name in contexts like JSP or XPages. A CDI bean doesn't have to be named, but it's common in cases where the bean will be used in the UI.

Once a bean is defined, the most common way to use it is to use the @Inject annotation on another CDI-capable class, such as another bean or a JAX-RS resource. For example, it could be injected into a controller class like:

@Path("/blog")
@Controller
public class BlogController {
    @Inject
    private MarkdownBean markdown;

    // (snip)
}

CDI will handle the dirty business of making sure the field is populated, and that all scopes are respected. You can also retrieve a bean programmatically, with just a bit of gangliness:

MarkdownBean markdown = CDI.current().select(MarkdownBean.class).get();

You can think of that one as roughly equivalent to ExtLibUtil.resolveVariable(...).

By default, CDI comes with a few main scopes for our normal use: @ApplicationScoped, @SessionScoped, @RequestScoped, and @ConversationScoped. The last one is a bit weird: it kind of covers whatever your framework considers a "conversation". It's kind of like the view scope in XPages, and in the XPages JEE support project I mapped it to that, but it could also potentially be a conversation between distinct pages in an app. JSF, for its part, has its own @ViewScoped annotation, and I'm considering stealing or reproducing that.

That touches on the last bit I'll mention for this "basic" section of CDI: scope definitions. Though CDI comes with a handful of standard scopes, they're defined in an open way that lets applications create their own. You could, for example, make an @InvoicingScope to cover beans that exist for the duration of a billing process, and then you'd manage initiating and terminating the scope yourself. Usually, this isn't necessary or particularly useful, but it's good to know it's there.
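
For what it's worth, the annotation half of such a scope is brief - the real work is implementing and registering a matching Context via a CDI extension, which I'm omitting here. A sketch of that hypothetical @InvoicingScope annotation:

import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import jakarta.enterprise.context.NormalScope;

// Hypothetical custom scope: beans live for the duration of a billing process
@NormalScope
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.FIELD })
public @interface InvoicingScope {
}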

Producer Methods

The next level of this is the ability of a bean to programmatically produce beans for downstream use. By this I mean that a bean's method can be annotated with @Produces, and then it can provide a type to be matched elsewhere. In the OpenNTF app, I use this as a way to delay loading of a resource bundle until it's actually used:

package bean;

import java.util.ResourceBundle;

import jakarta.enterprise.context.RequestScoped;
import jakarta.enterprise.inject.Produces;
import jakarta.inject.Inject;
import jakarta.inject.Named;
import jakarta.servlet.http.HttpServletRequest;

@RequestScoped
public class TranslationBean {
    @Inject
    HttpServletRequest request;

    @Produces @Named("translation")
    public ResourceBundle getTranslation() {
        return ResourceBundle.getBundle("translation", request.getLocale()); //$NON-NLS-1$
    }
}

Here, TranslationBean itself exists as a request-scoped bean and can be used programmatically, but it's really a shell for delayed retrieval of a ResourceBundle named "translation" for use in the UI. This allows me to use the built-in mapping behavior of ResourceBundle in Expression Language when writing bits of JSP like <p>${translation.copyright}</p>.

You can get more complicated than this, for sure. For example, if I switch the UI of this app to XPages, I may do a replacement of my classic controller framework that uses such a producer bean instead of the ViewHandler I used in the original implementation.

Proxy Beans

Finally, I'll talk a bit about dynamically-created proxy beans.

CDI's implementations make heavy use of object proxies to do their work. Technically, injected objects are proxies themselves, which allows CDI to let you do stuff like inject a @RequestScoped bean into an @ApplicationScoped one. But the weird part of CDI I plan to talk about here is the use of proxies to provide an object for an interface that doesn't have any implementation class.

I've mentioned this sort of injection a few times:

@Path("/pages")
public class PagesController {
    @Inject
    Page.Repository pageRepository;

    // snip

And then the interface is just:

@RepositoryProvider("homeRepository")
public interface Repository extends DominoRepository<Page, String> {
    Optional<Page> findBySubject(String subject);
}

There's no class that implements Page.Repository, so how come you can call methods on it? That's where the proxying comes in. While the CDI container (in this case, our NSF-based app) is being initialized, the Domino JNoSQL driver looks for classes implementing DominoRepository:

<T extends DominoRepository> void onProcessAnnotatedType(@Observes final ProcessAnnotatedType<T> repo) {
    Class<T> javaClass = repo.getAnnotatedType().getJavaClass();
    if (DominoRepository.class.equals(javaClass)) {
        return;
    }
    if (DominoRepository.class.isAssignableFrom(javaClass) && Modifier.isInterface(javaClass.getModifiers())) {
        crudTypes.add(javaClass);
    }
}

Then, once they're all found, it registers a special kind of bean for them:

void onAfterBeanDiscovery(@Observes final AfterBeanDiscovery afterBeanDiscovery, final BeanManager beanManager) {
    crudTypes.forEach(type -> afterBeanDiscovery.addBean(new DominoRepositoryBean(type, beanManager)));
}

I mentioned above that beans are generally just normal Java classes, but you can also make beans by implementing jakarta.enterprise.inject.spi.Bean, which gives you programmatic control over many aspects of the bean, including providing the actual implementation of them. In the Domino driver's case, as in most/all of the JNoSQL drivers, this is done by providing a proxy object:

public DominoRepository<?, ?> create(CreationalContext<DominoRepository<?, ?>> creationalContext) {
    DominoTemplate template = /* Instance of a DominoTemplate, which handles CRUD operations */;
    Repository<Object, Object> repository = /* JNoSQL's default Repository */;

    DominoDocumentRepositoryProxy<DominoRepository<?, ?>> handler = new DominoDocumentRepositoryProxy<>(template, this.type, repository);
    return (DominoRepository<?, ?>) Proxy.newProxyInstance(type.getClassLoader(), new Class[] { type }, handler);
}

Finally, that proxy class implements java.lang.reflect.InvocationHandler, which lets it provide custom handling of incoming methods.
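
To make that mechanism less abstract, here's a tiny self-contained example of the underlying JDK facility - unrelated to JNoSQL's actual internals, but the same trick:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
	// No class implements this interface anywhere
	public interface Greeter {
		String greet(String name);
	}
	
	public static void main(String[] args) {
		// The handler receives every method call made against the proxy
		InvocationHandler handler = (proxy, method, methodArgs) ->
			method.getName() + " called with: " + methodArgs[0];
		
		Greeter greeter = (Greeter)Proxy.newProxyInstance(
			Greeter.class.getClassLoader(),
			new Class<?>[] { Greeter.class },
			handler);
		
		System.out.println(greeter.greet("world")); // prints "greet called with: world"
	}
}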

This well goes deep, including the way JNoSQL will parse out method names and parameters to handle queries, but I think that will suffice for now. The important thing to know is that this is possible to do, common in underlying frameworks, and fairly rare in application code.

Next Up

I'm winding down on major topics, but at least one critical one remains: the actual UI. Currently (and likely when shipping), the app uses MVC and JSP to cover this need. I've discussed these before, but I think it'll be useful to do so again, both as a refresher and to show how they bring these other parts of the app together.

Rewriting The OpenNTF Site With Jakarta EE: Data Access

Jun 21, 2022, 10:12 AM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

In my last post, I talked about how I make use of Jakarta REST to handle the REST services in the new OpenNTF site I'm working on. There'll be more to talk about on that front when I get to the UI and my use of MVC. For now, though, I'll dive a bit into how I'm accessing NSF data.

I've been talking a lot lately about how I've been fleshing out the Jakarta NoSQL driver for Domino that comes as part of the XPages JEE project, and specifically how writing this app has proven to be an ideal impetus for adding specific capabilities that are needed for working with Domino. This demonstrates some of the fruit of that labor.

Model Objects

There are a few ways to interact with Jakarta NoSQL, and they vary a bit by database type (key/value, column, document, graph), but I focus on using the Repository interface capability, which is a high-level abstraction over the pool of documents.

Before I get to that, though, I'll start with an entity object. Part of the heavy lifting that a framework like Jakarta NoSQL does is to map between a Java class and the actual data representation. In the SQL world, one would likely come across the term object-relational mapping for this, and the concept is generally the same. The project currently has a handful of such classes, and so the data layer looks like this:

Screenshot of Designer showing the data-related classes in the NSF

The mechanism for mapping a class in JNoSQL is very similar to JPA:

@Entity("Release")
public class ProjectRelease {
    
    public enum ReleaseStatus {
        Yes, No
    }
    
    @Id
    private String documentId;
    @Column("ProjectName")
    private String projectName;
    @Column("ReleaseNumber")
    private String version;
    @Column("ReleaseDate")
    private Temporal releaseDate;
    @Column("WhatsNewAbstract")
    private String description;
    @Column("DownloadsRelease")
    private int downloadCount;
    @Column("MainID")
    private String mainId;
    @Column("ReleaseInCatalog")
    private ReleaseStatus releaseStatus;
    @Column("DocAuthors")
    private List<String> docAuthors;
    @Column(DominoConstants.FIELD_ATTACHMENTS)
    private List<EntityAttachment> attachments;

    /* getters/setters and utility methods here */
}

@Entity("Release") at the top there declares that this class is a JNoSQL entity, and then the Domino driver uses "Release" as the form name when creating documents and performing queries.

The @Id and @Column("...") annotations map Java object properties to fields and attributes on the document. @Id populates the field with the document's UNID, while @Column does a named field. There's a special one there - @Column(DominoConstants.FIELD_ATTACHMENTS) - that will populate the field with references to the document's attachments when present. In each of these cases, all of the heavy lifting is done by the driver: there's no code in the app that manually accesses documents or views.

Repositories

The way I get access to documents mapped by these classes is to use the JNoSQL Repository mechanism, by way of the extended DominoRepository interface. They look like this (used here as an inner class for stylistic reasons, not technical ones):

@Entity("Release")
public class ProjectRelease {

    @RepositoryProvider("projectsRepository")
    public interface Repository extends DominoRepository<ProjectRelease, String> {
        Stream<ProjectRelease> findByProjectName(String projectName, Sorts sorts);

        @ViewEntries("ReleasesByDate")
        Stream<ProjectRelease> findRecent(Pagination pagination);
        
        @ViewDocuments("IP Management\\Pending Releases")
        Stream<ProjectRelease> findPendingReleases();
    }

    /* snip: entity class from above */
}

Merely by creating this interface, I'm able to get access to the associated documents: I don't actually have to implement it myself. As seen in the last post, these interfaces can be injected into a bean or REST resource using CDI:

public class IPProjectsResource {
    
    @Inject
    private ProjectRelease.Repository projectReleases;

    /* snip */
}

Naturally, there is implementation code for this repository, but it's all done with what amounts to "Java magic": proxy objects and CDI. That's a huge topic on its own, and it's pretty weird to realize that that's even possible, but it will have to suffice for now to say that it is possible and it works great.

When you create one of these repositories, you get basic CRUD capabilities "for free": you can create new documents, look up existing documents by ID, and modify or delete existing documents.
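
As a minimal sketch of that inherited API, using the injected projectReleases repository from the snippet above (the setters and getDocumentId here are assumed from the entity's getters/setters section; save, findById, and deleteById are the standard JNoSQL repository methods):

ProjectRelease release = new ProjectRelease();
release.setProjectName("Example Project");       // assumed setter on the entity
release = projectReleases.save(release);         // creates a new document; the returned entity carries the UNID

projectReleases.findById(release.getDocumentId())
    .ifPresent(found -> {
        found.setVersion("1.0.1");               // assumed setter
        projectReleases.save(found);             // updates the existing document
    });

projectReleases.deleteById(release.getDocumentId()); // removes the document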

Basic Queries

Beyond that, JNoSQL will do some lifting for you, deriving sensible implementations for methods based on their signatures without any driver-specific code. I'm making use of that here with findByProjectName(String projectName, Sorts sorts). The proxy object that provides this implementation is able to glean that String projectName refers to the projectName property of the ProjectRelease class, which is in turn mapped by annotation to the ProjectName item on the back end. The Sorts object is a JNoSQL type that allows you to specify one or more sort columns and their orders. When executed, this is translated to a DQL query like:

Form = 'Release' and ProjectName = 'Some Project'

When Sorts are specified, this is also run through QueryResultsProcessor to create a QRP view with the given sort columns in a local temp database. Thanks to that, running the same query multiple times when the data hasn't changed will be very speedy.
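
To put that together, a call to the derived method might look like this sketch (Sorts.sorts() is the standard jakarta.nosql.mapping factory; "releaseDate" names the entity property, not the underlying item):

Stream<ProjectRelease> releases = projectReleases.findByProjectName(
    "Some Project",
    Sorts.sorts().desc("releaseDate") // newest-first, executed via a QRP-backed view
);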

You can customize these queries further by adding more parameters, or by using the @Query annotation to provide a SQL-like query with parameters.
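
For illustration, such a method might look like the following sketch - here I'm assuming JNoSQL's SQL-like query syntax and @Param binding, so treat the specifics as approximate rather than copied from the app:

@Query("select * from Release where projectName = @projectName")
Stream<ProjectRelease> findReleasesFor(@Param("projectName") String projectName);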

Domino-Specific Queries

Since Domino is so view-heavy and DQL+QRP isn't quite at the level where you can just throw any old query+extraction at it and expect it to perform well, it made sense for me to add extensions to JNoSQL to explicitly target views as sources. I use them both here, in one case to efficiently retrieve view data without opening documents and in another in order to piggyback on an existing view used by the IP Tools services already deployed.

The @ViewEntries("ReleasesByDate") annotation causes the findRecent method to skip JNoSQL's normal interpretation and instead be handled by the Domino driver directly. It will open that view and read entries based on the Pagination rules sent to it (another JNoSQL object). Since the columns in this view line up with the item names in the documents, I'm able to get useful entity objects out of it without having to actually crack open the docs. In practice, I'll need to be careful when using this so as to not save entities like this back into the database, since not all columns are present in the view, but that's a reasonable caveat to have.
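
In practice, a caller might read the first chunk of that view like so (a sketch; Pagination.page(...).size(...) is the jakarta.nosql.mapping builder):

Pagination pagination = Pagination.page(1).size(10); // first ten entries from ReleasesByDate
List<ProjectRelease> recent = projectReleases.findRecent(pagination)
    .collect(Collectors.toList());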

The @ViewDocuments("IP Management\\Pending Releases") annotation causes findPendingReleases to read full documents out of the named view, ignoring view columns. Eventually, I'll likely replace this with an equivalent query in JNoSQL's dialect, but for now it's more practical to just use the existing view like a stored query and not have to translate the selection formula to another mechanism.

Repository Provider

The last thing to touch on with this repository is the @RepositoryProvider annotation. The OpenNTF web site is stored in its own NSF, and then references several other NSFs, such as the projects DB, the blog DB (which is still based on BlogSphere), and the patron directory. The @RepositoryProvider annotation allows me to tell JNoSQL to use a different database than the current one, and it does so by finding a matching CDI producer method that gives it a lotus.domino.Database housing the documents and a high-privilege lotus.domino.Session to create QRP views. In this app's case, that's this in another bean:

@Produces
@jakarta.nosql.mapping.Database(value = DatabaseType.DOCUMENT, provider = "projectsRepository")
public DominoDocumentCollectionManager getProjectsManager() {
    return new DefaultDominoDocumentCollectionManager(
        () -> getProjectsDatabase(),
        () -> getSessionAsSigner()
    );
}

I'll touch on what the heck a @Produces method is in CDI later, but for now you can take it for granted that this works. The getProjectsDatabase() method that it calls is a utility method that opens the project DB based on some configuration documents.

I'll note with no small amount of pleasure that this bean that provides databases is one of only two places in the app that reference Domino API classes at all; the other instance just converts Notes-style names. I'm considering ways to remove this need as well, perhaps making it so that this producer only needs to provide a path to the target database and the name of a high-privilege user to act as, and then the driver would do the session creation and DB opening itself.

Next Up

In the next post, I'll most likely talk about my use of CDI to handle the "managed beans" layer. In a lot of ways, that will just be demonstrating the way CDI makes the tasks you'd otherwise accomplish with XPages Managed Beans simpler and more code-focused, but (as the @Produces annotation above implies) there's a lot more to it.

Rewriting The OpenNTF Site With Jakarta EE: REST

Jun 20, 2022, 1:09 PM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

In deciding how to kick off implementation specifics of my new OpenNTF site project, I had a few options, and none of them perfect. I considered starting with the managed beans via CDI, but most of those are actually either UI support beans or interact primarily with other components. I ended up deciding to talk a bit about the REST services in the app, since those are both an extremely-common task to perform in XPages and one where the JEE project runs laps around what you get by default from Domino.

The REST layer is handled by Jakarta REST, which is still primarily known by its old name, JAX-RS. JAX-RS has existed in Domino for a good while via the Wink implementation included with the Extension Library, but that's a much-older version of the spec. Additionally, that implementation didn't include a lot of convenience features like automatic JSON conversion out of the box. The implementation in the XPages JEE Support project uses RESTEasy, which is one of the primary active implementations and covers the latest versions of the spec.

Example

Though the primary way JAX-RS is actually used in this app is as the backbone for the UI with MVC, that'll be a topic for later. Since I also plan to use this as a way to modernize the IP Management tools I wrote, I'm making some JSON-based services for that.

I have a service that lets me get a list of project releases that haven't yet been approved, as well as an endpoint to mark one as approved. That class looks like this:

package webapp.resources.iptools;

import java.text.MessageFormat;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import jakarta.annotation.security.RolesAllowed;
import jakarta.inject.Inject;
import jakarta.validation.constraints.NotEmpty;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.projects.ProjectRelease;

@Path("iptools/projects")
@RolesAllowed("[IPManager]")
public class IPProjectsResource {
    
    @Inject
    private ProjectRelease.Repository projectReleases;
    
    @GET
    @Path("pendingReleases")
    @Produces(MediaType.APPLICATION_JSON)
    public Map<String, Object> getPendingReleases() {
        return Collections.singletonMap("payload", projectReleases.findPendingReleases().collect(Collectors.toList()));
    }
    
    @POST
    @Path("releases/{documentId}/approve")
    @Produces(MediaType.APPLICATION_JSON)
    public boolean approveRelease(@PathParam("documentId") @NotEmpty String documentId) {
        ProjectRelease release = projectReleases.findById(documentId)
            .orElseThrow(() -> new NotFoundException(MessageFormat.format("Could not find project for UNID {0}", documentId)));
        release.markApprovedForCatalog(true);
        projectReleases.save(release);
        
        return true;
    }
}

We can ignore the ProjectRelease.Repository business, since that's the model objects making use of Jakarta NoSQL - that'll be for later. For now, we can just assume that methods like findPendingReleases and findById do what you might assume based on their names.

The resource as a whole is marked as available at the path iptools/projects. In an NSF, that will resolve to a path on the server like /foo.nsf/xsp/app/iptools/projects. The "app" part there is customizable, though the "xsp" part is unchangeable, at least for now: it's the way the XPages stack notices that it's supposed to handle this URL instead of passing it to the classic Domino web server side.

The @RolesAllowed annotation allows me to restrict use of all the methods in this resource to specific roles or names/globs from the ACL. Though the underlying documents will still be protected by the ACL and reader/author fields, it's still good practice to not make services publicly available unless there's a reason to do so.

Often, a resource class like this will have a method marked with @GET but no @Path annotation, which would match the base URL from the class level. That isn't the case here, though: I may eventually merge these methods into an overall projects API, but for now I'm mirroring the old one I made, which doesn't have that.
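
For the sake of illustration, such a base-path method would look something like this sketch (the body here is hypothetical, just to show the shape of a @GET method with no @Path):

@GET
@Produces(MediaType.APPLICATION_JSON)
public Map<String, Object> get() {
    // hypothetical: would match GET /foo.nsf/xsp/app/iptools/projects with no sub-path
    return Collections.singletonMap("payload", projectReleases.findPendingReleases().collect(Collectors.toList()));
}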

JSON Conversion

The getPendingReleases method shows off a nice advantage over the older way I was doing this. In the original app, I had a utility class that used Gson to process arbitrary objects and convert them to JSON. Here, since I'm working on top of the whole JEE framework, I don't have to care about that in the app. I can just return my payload object and know that the scaffolding beneath me will handle the fiddly details of translating it to JSON for the browser, based on the @Produces(MediaType.APPLICATION_JSON) annotation there. It happens to use Jakarta JSON Binding (JSON-B), but I don't have to know that. I can just be confident that it will emit JSON representing the documents in a predictable way.

Entity Manipulation

The approveRelease method is available with a URL like /foo.nsf/xsp/app/iptools/projects/releases/12345678901234567890123456789012/approve. With the UNID from the path, I call projectReleases.findById to find the release document with that ID. That method returns an Optional<ProjectRelease> to cover the case that it doesn't exist - the orElseThrow method of Optional allows me to "unwrap" it when present or otherwise throw a NotFoundException. In turn, that exception (part of JAX-RS) will be translated to an HTTP 404 response with the provided message.

I used a @NotEmpty annotation on the @PathParam parameter here since this would currently also match a URL like /foo.nsf/xsp/app/iptools/projects/releases//approve. While I could check for an empty ID, this is a little cleaner and can provide a better error message to the calling user. That's just another nice way to make use of the underlying stack to get better behavior with less code.

The markApprovedForCatalog method on the model object just handles setting a couple fields:

public void markApprovedForCatalog(boolean approved) {
    if(approved) {
        this.releaseStatus = ReleaseStatus.Yes;
        this.docAuthors = Arrays.asList(ROLE_ADMIN);
    } else {
        this.releaseStatus = ReleaseStatus.No;
    }
}

Then projectReleases.save(release) will store the document in the NSF, throwing an exception in the case of any validation failures. Like with the @NotEmpty parameter annotation above, I don't have to worry about handling that explicitly: Jakarta NoSQL will handle that implicitly for me, since it works with the Bean Validation spec the same way JAX-RS does.
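
As an example of how that plays out, a constraint declared on the entity property (a hypothetical addition for illustration, not something the class above declares) would be checked at save time, presumably surfacing as a ConstraintViolationException:

@Column("ProjectName")
@NotEmpty(message = "projectName must not be empty") // hypothetical constraint for illustration
private String projectName;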

Next Components

Next time I write about this, I figure I'll go over the specific NoSQL entities I've set up and discuss how they handle data access for the app. That will be similar to a number of my recent posts, but I think it'll be helpful to have an example of using that in practice rather than just talking about it hypothetically.

Rewriting The OpenNTF Site With Jakarta EE, Part 1

Jun 19, 2022, 10:13 AM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

The design for the OpenNTF home page has been with us for a little while now and has served us pretty well. It looks good and covers the bases it needs to. However, it's getting a little long in the tooth and, more importantly, doesn't cover some capabilities that we're thinking of adding.

While we could potentially expand the current one, this provides a good opportunity for a clean start. I had actually started taking a swing at this a year and a half ago, taking the tack that I'd make a webapp and deploy it using the Domino Open Liberty Runtime. While that approach would put all technologies on the table, it'd certainly be weirder to future maintainers than an app inside an NSF (at least for now).

So I decided in the past few weeks to pick the project back up and move it into an NSF via the XPages Jakarta EE Support project. I can't say for sure whether I'll actually complete the project, but it'll regardless be a good exercise and has proven to be an excellent way to find needed features to implement.

I figure it'll also be useful to keep something of a travelogue here as I go, making posts periodically about what I've implemented recently.

The UI Toolkit

The original form of this project used MVC and JSP for the UI layer. Now that I was working in an NSF, I could readily use XPages, but for now I've decided to stick with the MVC approach. While it will make me have to solve some problems I wouldn't necessarily have to solve otherwise (like file uploads), it remains an extremely-pleasant way to write applications. I am also not constrained to this: since the vast majority of the logic is in Java beans and controller classes, switching the UI front-end would not be onerous. Also, I could theoretically mix JSP, JSF, XPages, and static HTML together in the app if I end up so inclined.

In the original app (as in this blog), I made use of WebJars to bring in JavaScript dependencies, namely Hotwire Turbo to speed up in-site navigation and use Turbo Frames. Since the NSF app in Designer doesn't have the Maven dependency mechanism the original app did, I just ended up copying the contents of the JAR into WebContent. That gave me a new itch to scratch, though: I'd love to be able to have META-INF/resources files in classpath JARs picked up by the runtime and made available, lowering the number of design elements present in the NSF.

The Data Backend

The primary benefit of this project so far has been forcing me to flesh out the Jakarta NoSQL driver in the JEE support project. I had kind of known hypothetically what features would be useful, but the best way to do this kind of thing is often to work with the tool until you hit a specific problem, and then solve that. So far, it's forced me to:

  • Implement the view support described in my previous post
  • Add attachment support for documents, since we'll need to upload and download project releases
  • Improve handling of rich text and MIME, though this also has more room to grow
  • Switch the returned Streams from the driver to be lazily loaded, so that not all documents/entries have to be read if the calling code stops partway through the results
  • Add the ability to use custom property types with readers/writers defined in the NSF

Together, these improvements have let me have almost no lotus.domino code in the app. The only parts left are a bean for formatting Notes-style names (which I may want to make a framework service anyway) and a bean for providing access to the various associated databases used by the app. Not too shabby! The app is still tied to Domino by way of using the Domino-specific extensions to JNoSQL, but the programming model is significantly better and the amount of app code was reduced dramatically.

Next Steps

There's a bunch of work to be done. The bulk of it is just implementing things that the current XPages app does: actually uploading projects, all the stuff like discussion lists, and so forth. I'll also want to move the server-side component of the small "IP Tools" suite I use for IP management stuff in here. Currently, that's implemented as Wink-based JAX-RS resources inside an OSGi bundle, but it'll make sense to move it here to keep things consolidated and to make use of the much-better platform capabilities.

As I mentioned above, I can't guarantee that I'll actually finish this project - it's all side work, after all - but it's been useful so far, and it's a further demonstration of how thoroughly pleasant the programming model of the JEE support project is.