Showing posts for tag "java"

XPages Jakarta EE 2.9.0 and Next Steps

Nov 22, 2022, 12:53 PM

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. Adding Concurrency to the XPages Jakarta EE Support Project
  11. Adding Transactions to the XPages Jakarta EE Support Project
  12. XPages Jakarta EE 2.9.0 and Next Steps

Keeping with my productive week off, today I released version 2.9.0 of the XPages Jakarta EE Support project. Similar to the previous release, this one contains new features primarily related to Jakarta NoSQL, but also has some improvements for JSF and a bunch of bug fixes and compatibility improvements.

Jakarta NoSQL

The improvements to the JNoSQL driver come from some needs I came across when moving older lotus.domino/ODA-based code to using JNoSQL repositories. In particular, I added the remaining applicable view entry properties as available fields to map, improved support for reading note IDs, and added the ability to fetch documents by note ID.

JSF

While JSF support remains limited by not having a proper way to add in third-party component libraries like PrimeFaces, it's still a potentially-compelling tool in an NSF as an alternative to XPages in some cases. Accordingly, I fixed a few bugs I had run into when loading pages after modifying the NSF design. Additionally, I fixed up support for JSF as an MVC view engine. It now properly joins JSP as a mechanism for rendering your output with an MVC structure, and I think there's some real potential there.

Bug Fixes and Compatibility

Most of the other closed issues deal with a few bugs here and there, and in particular involve some improvements for running apps in XPiNC and on a server with Domino Leap also installed. I don't use XPiNC anymore and haven't tried Leap, so I greatly appreciate bug reports specific to these and the assistance in tracking down the trouble.

The Future and Next Steps

I'm pondering now what the next release of the project will focus on. I have no shortage of feature ideas, and there are a few potentially-disruptive changes I'd like to make.

Unfortunately, those changes will be largely confined to improving the support for the specs that are already present and not advancing to new versions. The predicted Java-version wall arrived: Jakarta EE 10 is out and requires Java 11 and above. Since Domino remains mired in Java 8, that means that new versions of the specs and implementations are hard-incompatible until that changes.

On the plus side, there's still a lot of improvements I can make with Jakarta EE 9 as the baseline.

Reorganization

One big one I've been thinking about is a reorganization of the individual libraries that make up the project. The way it's been designed, almost every spec has its own Equinox Feature and XPage Library to go with it. This was fine early on when it was just CDI, EL, and JAX-RS, but it's grown annoying: installing the project in Designer is a seemingly-endless process of approving one plug-in at a time, and the list of libraries to check in Xsp Properties is interminable. More critically, being able to selectively turn on and off specs like this doesn't make sense anymore. CDI has grown so important to Jakarta EE in general and this project in particular that it doesn't make sense to not have it present if you're going to use this project at all. It's a foundational component of so many other parts and is essentially The Way to do Jakarta-based development.

So I'm thinking I'm going to reorganize the projects into fewer features and libraries, which will be a breaking change that will necessitate a bump to 3.0 - fortunately, the numbers line up well for that. I have a few potential options here:

  1. Just lump them all into one. You'd have basically one big switch to say "this is a JEE project" and everything would be on. The virtue here is that this is how I already work and is essentially the recommended way to do things. Additionally, as far as I know, while having additional components may slow first load (though not as much as other parts), I don't think they have a significant impact if enabled but unused during runtime.
  2. Try to line the specs up with one of the existing Jakarta Profiles. Those profiles are meant to be curated selections of useful specs, and this project has enough to implement what in newer versions is deemed the Core Profile. The trouble with this, though, is that the Core Profile is very much geared to be the shared subset with MicroProfile and similar and is a bit thin for Domino's monolith-focused development style. The Web and Full profiles, on the other hand, require "traditional" APIs like EJB that are not present in this project.
  3. Break them apart into my own "core" and "optional" features. For example, it doesn't make sense to use this without JAX-RS, CDI, and Bean Validation enabled, but JSF is entirely independent of the other specs and is among the least likely to be used in practice for now. This would also allow me to establish a running flow where "experimental" features start out as optional add-ins and then eventually make their way to core.

I'm currently waffling between #1 and #3, with a slight lean towards #1. If I can be sure that either everything or nothing is present, I could get rid of some weird hedges and workarounds, like how the JAX-RS implementation doesn't "officially" know about the CDI library yet references CDI classes explicitly by name.

New Application Types

Currently, to use this project, you can either put your code into an NSF and use the automatic behavior of the libraries or you can put your code in OSGi-based webapps or Servlets and then manually manage integration with these specs.

Both of these are limited by their reliance on the many assumptions IBM built into how these apps should work. In-NSF apps require that all Jakarta code be triggered by a request with "xsp" in the URL or one targeting a file ending in ".jsp", ".jsf", or ".xhtml". If you're writing, say, an MVC-based app, all of your URLs are going to have to start with something like "foo.nsf/xsp/app/...", which is okay but ugly. Additionally, the way these apps are implemented - NSFComponentModule - severely limits my hooks for listening for things like application and session expiration, which hampers CDI's lifecycle handling a bit.

For a good while, I've pondered the notion of adding another ComponentModule type to handle the case where you want to go all-in on Jakarta EE. With this idea, the new module implementation would have full control over incoming requests, allowing URLs without the xsp/app bit in there, and would have better handling of lifecycles. In this way, I could make it so that your code would look more like (or be identical to) a "normal" .war-based webapp, with fewer workarounds for the existing XPages stuff. This would also allow me to do things like lessen the amount of Servlet 2.5-to-5.0 bridging and could assist tremendously in improving JSF support.

Along similar lines, I've been considering doing something similar for OSGi-based webapps, and I've made some progress along those lines in a feature branch. The idea here would be to do something similar to how you can deploy web.xml-based webapps via OSGi now, but with built-in support for Jakarta EE 9 features (with web.xml then being optional). With this setup, you'd be able to write an app that does an Import-Package for the various jakarta.* packages you want and add a bit in your MANIFEST.MF to signal to this project that it should participate. This could either be a variant of the extension point used by the existing OSGi webapp support or using the Web-ContextPath directive from the OSGi spec. One of the goals here would be to make it so that you would be able to write a Jakarta EE 9 app using normal development tools - Eclipse/IntelliJ/VS Code, Maven, etc. - and then just use maven-bundle-plugin to add the OSGi info you need without having any specific dependencies on Domino bits, especially the nightmare of depending on the non-redistributable XPages OSGi artifacts.

Other Options

And, in the mean time, I have a bunch of other tasks I could work on. Slowly converting my client project to Jakarta NoSQL instead of direct ODA use has turned up a whole slew of things that would be useful to add (for example, stampAll support), so I can slowly burn down that feature-request list.

There's also the notion of documentation! While a lot of the behavior of this project is in theory documented by virtue of the upstream specs and the general world of Jakarta blogs, videos, and courses, there's enough to know about the specifics of the interactions with Domino that more documentation is in order. Historically, I've just done this by expanding the README, but it's gotten pretty unwieldy at this point. It would probably make sense to break the specifics and examples out into at least wiki pages, if not a format that can be built into a PDF/etc. and included in the distribution.

So yep, I'll have my hands busy with this thing for a good while more, I figure.

More Open-Source Updates for Notes/Domino 12.0.2

Nov 21, 2022, 1:27 PM

The other day, I talked about some changes/workarounds for Notes/Domino 12.0.2. Today, I made a few updates to some of the open-source projects I maintain, including another update to the generate-domino-update-site Maven plugin.

Domino Update Site Generator

In the 4.2.0 release, I added code to (mostly, as it turns out) account for HCL moving the NAPI implementation JAR down to jvm/lib/ext. In subsequent use, I found that, while that will suffice for building applications that use the OSGi dependencies, it didn't work for launching applications using it as a baseline - namely, the NSF ODP Tooling Maven plugin.

Today, I released a 4.2.1 version that improves this behavior by re-adding dependencies in the implementation bundle.

I also created a project page for it on OpenNTF. Though the project has always been hosted at the OpenNTF org on GitHub, I hadn't created a project page for it due to it just being a standalone Maven plugin. I figured it'd be useful to create an official page there for it, though.

NSF ODP Tooling

Speaking of the NSF ODP Tooling, I also found that local operations once again started crashing on macOS. Due to changes in macOS and the very weird ways that Notes works, local operations on there are a very moving target, and I have to do a lot of work in the project to account for changes to the embedded JVM and whether specific Notes versions work better with HotSpot or OpenJ9 JVMs.

Long story short, I released 3.10.0 today to account for this. Though I've found that the spawned JVM will still sometimes crash, it does so after completing its work, so I consider that fine for now.

OpenNTF Domino API

Finally, I came to the OpenNTF Domino API. This project has admittedly been neglected for a little while: I'm the only active maintainer, and the client project I use it in targets Domino 11.0.1, so the 12.x builds have remained in an incomplete state for a while.

With the release of 12.0.2, I decided I should finish the wrappers for the new classes added in 12.x, so I did so and uploaded a build for 12.0.2. This primarily adds those wrappers, but also included a contributed fix and changes the distribution packaging to combine the XSP and non-XSP versions.

Notes/Domino 12.0.2 Fallout

Nov 17, 2022, 1:45 PM

Tags: designer java
  1. AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Domino 11's Java Switch Fallout
  3. fontconfig, Java, and Domino 11
  4. Notes/Domino 12.0.2 Fallout

Notes and Domino 12.0.2 came out today. Generally, there are some neat features in development and on the server, but there are also a couple things you may run into depending on your workflow and installation type.

Update Site NSF

The update site NSF that ships with Domino uses SWT for some of its GUI elements when importing contents. This still works fine in the 32-bit client, but is broken in the 64-bit client. My guess on that front is that the 64-bit client doesn't come with a 64-bit native SWT JAR, probably because the SWT version used for this likely pre-dates x64's popularity on the desktop.

For now, the workaround is to use a 32-bit client if you're working with the Update Site NSF. Karsten said he's going to patch the OpenNTF version of the NSF to deal with this, so you can also wait for that one.

Domino Update Site Generator

I maintain the generate-domino-update-site Maven plugin that can be used to generate update sites for OSGi development against the Domino stack. These sites are replacements for the IBM Domino Update Site for Build Management, which was released for 9.0.1 and never updated since. Only HCL can make a new distributable version of that, so my tool lets you generate one for yourself from a Notes or Domino installation.

In 12.0.2, HCL shunted the NAPI implementation JAR down to jvm/lib/ext to support the shared JARs between XPages and agents feature. As a side effect, existing versions of my plugin would lack the NAPI classes.

Today, I released version 4.2.0, which fixes this and also contains improvements to let the plugin work on current Java versions and generate sites based on 12+ macOS clients.

Along these lines, I made Aha idea DDXP-I-352 a couple years ago to request that HCL provide such sites themselves or give OpenNTF permission to provide them, so I'd appreciate it if you voted for that.

The Target Platform Bug

This isn't a new thing, but it's worth mentioning here since it comes up frequently: the target platform bug from 9.0.1FP10 remains. As of earlier this year, defect article KB0086688 mentions this, though the status is "Deferred". If this afflicts you, it may help to bring it up with Support and reference that article.

The Myriad Idioms For Finding Implementations In Java

Oct 18, 2022, 10:25 AM

Tags: jakartaee java
  1. Java Services (Not the RESTful Kind)
  2. Java ClassLoaders
  3. Managed Beans to CDI
  4. The Myriad Idioms For Finding Implementations In Java

A few years ago, I wrote a post about Java service location, which covered things like META-INF/services and OSGi extensions. Today, I'd like to discuss a similar concept: code in a top-level API that finds a specific implementation. For reasons that will become clear shortly, I'll call this the "FactoryFinder pattern".

Background

Not all Java code uses this kind of thing and, while service loading is related, the overlap isn't complete. Where this does come up a lot is in a framework like Jakarta EE, which is very intentionally split between vendor-neutral specification classes/interfaces (the ones starting with jakarta.*) and specific implementations.

For example, the Jakarta REST (née JAX-RS) specification only defines various classes and interfaces within the jakarta.ws.rs package space, but doesn't include any actual implementation. That's left to various vendors. The number of implementations varies by spec, and JAX-RS is particularly prolific on this front. In the XPages Jakarta EE project, we use RESTEasy, whose classes are all in the org.jboss.resteasy package space.

There's a (usually) hard wall between these layers: the spec declares an API that programmers can use, and then the implementation has to allow itself to be called by those class names and obey the specification's rules. When writing JAX-RS resources in an NSF, the fact that it's using RESTEasy does not enter into your experience. That raises the question, though, of how this works. How does the vendor-neutral specification locate the implementation classes to hand off the work? Well, that question has a number of different answers.

Entrypoint Classes and Locating Implementations

In general, each spec accomplishes this using one or more entrypoint classes. For example, JAX-RS uses RuntimeDelegate and its static getInstance() method to locate server implementations and ClientBuilder and its newBuilder() method to load client implementations. Outwardly, these methods just promise that they'll find and provide an implementation, but the actual way that specs do this varies.
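
To make this concrete, here's a small sketch of what calling code sees: it only ever touches the vendor-neutral entrypoints, and whichever implementation the lookup finds (RESTEasy, in the XPages JEE project's case) does the real work behind the scenes. The URL here is just a placeholder.

import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import jakarta.ws.rs.ext.RuntimeDelegate;

public class EntrypointExample {
	public static void main(String[] args) {
		// Server side: the spec class locates an implementation behind the scenes
		RuntimeDelegate delegate = RuntimeDelegate.getInstance();
		System.out.println("Found implementation: " + delegate.getClass().getName());

		// Client side: same idea, different entrypoint
		Client client = ClientBuilder.newBuilder().build();
		String body = client.target("https://example.com/api") // placeholder URL
			.request()
			.get(String.class);
		System.out.println(body);
		client.close();
	}
}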

One of the most common ways to coordinate this loading is to have a class named FactoryFinder. This idiom and specific name proved very popular over at Sun as they built up the JEE specs:

Eclipse Open Type dialog for FactoryFinder

Despite their identical names, each of these classes is a different implementation, and they have different characteristics. There are routines in common, and each spec uses a subset of these. I'll go over the common ones here, in no particular order other than that I'll start with the ones found in the JAX-RS API first.

ServiceLoader

This one is used in basically every spec up until the latest era. This uses the java.util.ServiceLoader class to find implementations by way of text files in META-INF/services named after the spec class and containing implementation class names. For example, RESTEasy contains a file named META-INF/services/jakarta.ws.rs.ext.RuntimeDelegate that references the class org.jboss.resteasy.core.providerfactory.ResteasyProviderFactoryImpl. That looks like this:

Iterator<T> iterator = ServiceLoader.load(service, FactoryFinder.getContextClassLoader()).iterator();

if(iterator.hasNext()) {
	return iterator.next();
}

FactoryFinder.getContextClassLoader() there is a utility method that just uses an AccessController block to work with Java policy limitations like we see on Domino all the time.
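
For reference, that helper is typically little more than a doPrivileged wrapper around the thread-context lookup - roughly like this sketch:

import java.security.AccessController;
import java.security.PrivilegedAction;

public class ClassLoaderUtil {
	static ClassLoader getContextClassLoader() {
		// Wrapped in doPrivileged so the lookup still works under a restrictive
		// Java security policy, like the one Domino ships with
		return AccessController.doPrivileged((PrivilegedAction<ClassLoader>) () ->
			Thread.currentThread().getContextClassLoader());
	}
}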

This is simple enough in the normal case, but can get a little tricky when you add in something like OSGi. By default, ServiceLoader will look in the thread-context class loader, which will usually be where your application code lives. Inside an app container, like an NSF, the implementation class may not actually be visible, though. Accordingly, many of these finders fall back to looking using the class loader of the spec class, which has a higher chance of seeing the implementation. That looks similar:

Iterator<T> iterator = ServiceLoader.load(service, FactoryFinder.class.getClassLoader()).iterator();

if(iterator.hasNext()) {
	return iterator.next();
}

In the XPages Jakarta EE project, neither of these calls will tend to work by default, since neither the app nor the API bundle will see the implementation bundle. In some cases, I deal with this via the methods below, but in others I do so by re-packaging the implementation as an OSGi fragment bundle. Fragment bundles attach themselves onto their host's classloader fully, and this allows ServiceLoader to find the implementation.

Configuration Properties

A handful of these specs, JAX-RS included, will also look for the name of an implementation class using an external properties file. The placement of this in the priority order - as a fallback after ServiceLoader - and the classes used in the implementation make me figure that these are quite often relics of earlier habits.

JAX-RS, for its part, will look within the java.home system property, which points to the JVM's installation directory. In there, it looks for a properties file named lib/jaxrs.properties:

String javah = System.getProperty("java.home");
configFile = javah + File.separator + "lib" + File.separator + "jaxrs.properties";
File f = new File(configFile);
if (f.exists()) {
	Properties props = new Properties();
	inputStream = new FileInputStream(f);
	props.load(inputStream);
	String factoryClassName = props.getProperty(factoryId);
	return newInstance(factoryClassName, classLoader);
}

This tries to use the thread-context class loader only, so it wouldn't work for a complex app server situation. Likely, it's meant for either an older type of application or a standalone special-purpose JAR.

System Properties

Similar to reading a designated properties file, these specs will often then fall back to looking for a Java system property of a given name. These properties may be dynamically set at runtime or may be set during the JVM launch. Often, this property will be the name of the interface/abstract class being looked up, like so:

String systemProp = System.getProperty(factoryId);
if (systemProp != null) {
	return newInstance(systemProp, classLoader);
}

This one can actually come in handy sometimes - though it's not ideal, I've used it in similar cases, setting the name of an implementation or delegation class in a property before initializing the spec. It's best to avoid that when possible, but I'm often glad it's there.
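
For instance, since the JAX-RS property name is the entrypoint class's own name, a hypothetical bit of startup code could pin the implementation before the first lookup happens (the RESTEasy class here is the same one named in the services file above):

// Hypothetical startup code: name the implementation explicitly before
// anything triggers the FactoryFinder lookup
System.setProperty("jakarta.ws.rs.ext.RuntimeDelegate",
	"org.jboss.resteasy.core.providerfactory.ResteasyProviderFactoryImpl");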

OSGi Escape

Next up is one that JAX-RS doesn't use, but shows up periodically. Though Jakarta EE isn't based around OSGi, a good number of the implementations historically have used (and still use) it, and OSGi always sits in a "not standard, but too popular to consistently ignore" limbo.

To account for this, there's a similarly semi-standard library called the OSGi resource locator. This library provides a class named org.glassfish.hk2.osgiresourcelocator.ServiceLoader that does its own search and loading for ServiceLoader-compatible META-INF/services files within OSGi bundles in the current platform. The idea is that, if you have an OSGi-based platform that you want to work with this type of loading, you will provide the Resource Locator class and let any loaders written to use it fall back to it.

Because this class is not normally present even when actually in OSGi, APIs that make use of it have to be careful and indirect about trying to load it at all. We'll use JAX-B as our example here. They'll generally try to load the bridge class reflectively, which avoids having OSGi-wrapping tools like bnd create a potentially-undesired dependency on the presence of the bridge. Then, they'll reflectively ask it to load service implementations. That tends to look like this:

// Use reflection to avoid having any dependency on ServiceLoader class
Class serviceClass = Class.forName(factoryId);
Class target = Class.forName(OSGI_SERVICE_LOADER_CLASS_NAME);
Method m = target.getMethod(OSGI_SERVICE_LOADER_METHOD_NAME, Class.class);
Iterator iter = ((Iterable) m.invoke(null, serviceClass)).iterator();
if (iter.hasNext()) {
	Object next = iter.next();
	logger.fine("Found implementation using OSGi facility; returning object [" +
		next.getClass().getName() + "].");
	return next;
} else {
	return null;
}

That's also generally wrapped in a big try/catch block to avoid gumming up the works if any pieces are missing.

The XPages Jakarta EE project actually contains a reimplementation of this that avoids some hurdle or other that I found with the stock version. I avoided doing something like that for a while, but it ended up being the most practical way to get some of these specs working.

Default Implementation

Back outside the realm of OSGi, a handful of these specifications will also include a hard-coded default provider class name. These are generally the classes from what used to be dubbed reference implementations and which are largely components of GlassFish by virtue of that being Sun's version.

For example, the JSON-P API has a final fallback of trying to look for org.glassfish.json.JsonProviderImpl by name:

Class<?> clazz = Class.forName(DEFAULT_PROVIDER);
return (JsonProvider) clazz.getConstructor().newInstance();

Though these implementations generally also declare themselves via ServiceLoader files, this is presumably useful in historical or edge cases where there's still a decent chance that the RI will be available. This does have an unfortunate effect on error messages, though, where the case of "I can't find any implementation at all" ends up being reported as e.g. "Provider org.glassfish.json.JsonProviderImpl not found". That's not really a problem with the approach as such, though, but rather just the way it shakes out in practice.

Manually-Set Implementation or Locator

The final mechanism I'm going to discuss is sort of a final escape hatch. Sometimes, the provider class will have a method that lets you set an arbitrary implementation yourself, without having the API do any of these lookups at all. Some, like MicroProfile Config and CDI even go one step further and provide a method that configures not just a specific implementation but rather an implementation locator. These APIs are my friends and I love them.

This mechanism works well for my needs in the XPages Jakarta EE project, where either it's easier to just set one implementation for the whole server or, like with CDI, there's complex logic that requires inspecting the active Servlet request to see what NSF I'm in.

APIs of this style will usually have a method named like setInstance or setProvider on either their core entrypoint class or on the provider locator. For example, MicroProfile Config provides the former on its ConfigProviderResolver class:

public static void setInstance(ConfigProviderResolver resolver) {
	instance = resolver;
}

instance here is a static property. Once it's set - either by this method or by a dynamic lookup - the main instance() method will use it:

if (instance == null) {
	synchronized (ConfigProviderResolver.class) {
		if (instance != null) {
			return instance;
		}
		instance = loadSpi(ConfigProviderResolver.class.getClassLoader());
	}
}

return instance;

The XPages JEE project makes use of this method at HTTP start, setting a provider resolver that includes some stock config sources as well as some classes that know how to read properties from the Notes environment and from the xsp.properties file.
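
As a rough sketch of that pattern (the resolver class name here is made up, not the project's actual one):

import org.eclipse.microprofile.config.spi.ConfigProviderResolver;

public class ConfigInitializer {
	public static void initConfig() {
		// Called once at HTTP start: hand MicroProfile Config a resolver that
		// knows about Domino-specific sources like the Notes environment and xsp.properties
		ConfigProviderResolver.setInstance(new DominoAwareConfigProviderResolver());
	}
}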

Though this mechanism seems like the crudest out of the bunch, I'm extremely happy whenever it's there.

Conclusion

That was a lot! And there's not really a lesson to be learned here, but rather more that it's often useful to know about all these different mechanisms. When working in the XPages JEE project, I've had to use almost all of them at one time or another, and I've had to familiarize myself with which APIs use which and adapt them individually. For some, I've altered the implementation to be a fragment bundle; for others, I've created my own fragment to provide services and implementations; and so forth. It's a bit of a shame that there's no grand unified system for this, but at least it can be interesting to see the messy path that these specs have taken as Java technologies and the ecosystem evolved.

Jakarta NoSQL Driver for Keep

Oct 9, 2022, 3:41 PM

Tags: jakartaee java
  1. Jakarta NoSQL Driver for the AppDev Pack, Part 1
  2. Jakarta NoSQL Driver for the AppDev Pack, Part 2
  3. Jakarta NoSQL Driver for Keep

In what has surreptitiously turned into something of a series, I followed up my recent tinkering with Jakarta NoSQL, the AppDev Pack, and Keycloak with doing something similar with Keep.

Keep, like the AppDev Pack's Proton task, provides a remote API for Domino data. It differs from the ADP in a couple notable ways:

  1. It uses REST endpoints instead of gRPC. The pros and cons of comparing the two are well beyond the scope of this post, but you can say that a REST API is easier to work with using any old HTTP client, while gRPC has higher performance when thrashed. For the purposes of a JNoSQL driver, this is entirely an implementation detail.
  2. Keep imposes rules about data access on top of what you'd normally do with Domino. While Proton and DAS give you direct document access, Keep requires configuring individual forms and views, as well as defining specific types and access levels for fields. It's a focused REST API builder of its own, suitable for providing access to Domino data to clients directly.

The Client

The AppDev Pack ships with a Java client library presumably generated from the HCL-internal spec. Since Keep is REST-based, there's less need for a specific generated client, but it can still be tremendously useful. Keep includes (and is indeed based around) an OpenAPI spec file. The neat thing with those files is that, since they're so common, there are plenty of tools to work with them. One I've used a few times now is OpenAPI Generator, which will take such a spec file and emit bindings for a bunch of languages.

I've used the Java generator before and have gotten familiar with it. Because Java lacked a good standard HTTP client until Java 9 and still doesn't have a core-API standard type-safe client or JSON library, this generator provides options for a good number of common choices. Since this driver is targeting a Jakarta EE environment and it's fair to assume MicroProfile will be around, I went with that: JSON access is done via JSON-B and REST access is done via the ever-delightful MicroProfile Rest Client. This pair ended up producing much-cleaner client code than the previous generation I had done, which used the Apache HttpClient library directly and Jackson for JSON. I think that, post-generation, I had to go in and change some javax imports to jakarta, but otherwise it worked smoothly.

The generator emitted interfaces marked with @RegisterProvider (making them accessible via CDI) and using JAX-RS annotations to define method signatures. For example (trimmed and reformatted):

@RegisterProvider(ApiExceptionMapper.class)
@Path("")
public interface DataApi  {
	/* snip */

    /**
     * Send a DQL query and get JSON documents back
     *
     */
    @POST
    @Path("/query")
    @Consumes({ "application/json" })
    @Produces({ "application/json" })
    public List<Map<String, Object>> query(
		@QueryParam("dataSource") @NotNull String dataSource,
		@QueryParam("action") @NotNull String action,
		@Valid QueryRequest queryRequest,
		@QueryParam("richTextAs") RichTextRepresentation richTextAs,
		@QueryParam("count") Integer count,
		@QueryParam("start") Integer start
	) throws ApiException, ProcessingException;
}

It's possible that I had to change an earlier DominoDocument type to Map<String, Object> to account for the ever-varying content of Domino documents.

In any event, each of the operations from Keep's OpenAPI spec got a method like this, grouped into interfaces like DataApi, CodeApi, etc., also based on values in the spec.
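
As an example of how one of these generated interfaces gets used, here's a hedged sketch with the MicroProfile Rest Client builder - the base URI, data source name, and action value are placeholders, and a real call would also need to carry the JWT described below:

import java.net.URI;
import java.util.List;
import java.util.Map;
import org.eclipse.microprofile.rest.client.RestClientBuilder;

public class KeepQueryExample {
	public List<Map<String, Object>> runQuery(QueryRequest queryRequest) throws Exception {
		// Build a type-safe client instance from the generated DataApi interface
		DataApi data = RestClientBuilder.newBuilder()
			.baseUri(URI.create("https://keep.example.com/api/v1")) // placeholder
			.build(DataApi.class);
		// Arguments mirror the signature above: data source, action, body,
		// rich-text representation, count, and start
		return data.query("demo", "execute", queryRequest, null, 50, 0);
	}
}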

Authentication

Lucky for me, a lot of the stuff I did to set up Keycloak authentication with the AppDev Pack carried over here. Keep, for the most part, uses JWT authentication, with tokens often coming from its /auth endpoint. It can also work with external JWT providers like Keycloak. To do that, you can configure Keep with your provider's public key, after which Keep will trust tokens issued by it.

For my setup, I configured Keycloak to emit Keep-friendly tokens, adding in a Domino-format DN and Keep-friendly scopes like $DATA. Once I did that, Keep started trusting tokens I acquired from Keycloak, which I passed to it from my Liberty login the same way I had with my ADP testing.

Implementation and Conclusion

After this point, the implementation itself is actually the least interesting. I copied the code for the Proton driver and mostly swapped out method calls and object types. Since the documents in both cases are treated as JSON data, most of my utility code was already just fine.

Keep does provide view access, which is something that Proton doesn't, so I may implement that. Though I was originally hostile to the idea of adding views to the driver, it's an unfortunate necessity in Domino work, especially when building on top of existing applications.

As it is, the driver isn't in a proper state for release, but I expect that I'll have cause to return to it and implement the missing pieces - views, attachments, and so forth. It's also just a good exercise in the mean time for working more heavily with things like the MP Rest Client, and it's good to see how well it holds up.

Jakarta NoSQL Driver for the AppDev Pack, Part 2

Sep 26, 2022, 9:55 AM

Tags: java jnosql
  1. Jakarta NoSQL Driver for the AppDev Pack, Part 1
  2. Jakarta NoSQL Driver for the AppDev Pack, Part 2
  3. Jakarta NoSQL Driver for Keep

In my last post, I talked about how I implemented a partial Jakarta NoSQL driver using the AppDev Pack as a back end instead of the Notes.jar classes used by the primary implementation. Though the limitations in the ADP mean that it lacks a number of useful features compared to the primary one, it was still an interesting experiment and has the nice side effect of working with essentially any Java app server and Java version 8 or above.

Beyond the Proton API calls, the driver brought up the interesting topic of handling authentication. Proton has three ways of working in this regard:

  • Anonymous, which is what you might expect based on how that works elsewhere in Domino. This is easy but not particularly useful except in specific circumstances.
  • Client certificate authentication, where you create a TLS keychain for a given user and associate it with a Directory user (e.g. CN=My Proton App/O=MyOrg), and then your app performs all operations as that user. This is basically like if you ran a remote app with NRPC using a client Notes ID.
  • Act-as-User, which builds on the above authentication by configuring an OAuth broker service that can hand out OIDC tokens on behalf of named users. This is sort of like server-to-server communication with the "Trusted Servers" config field in the server doc, but different in key ways.

Client Certificate Authentication

When doing app development, the middle route makes sense as your starting point, since most of your actual work will likely be the same regardless of whether you later add on Act-as-User support. For that, you'll follow the guide to set up your TLS keychain and then feed those files to the com.hcl.domino.db.model.Server object:

Path base = Paths.get(BASE_PATH);
		
File ca = base.resolve("rootcrt.pem").toFile();
File cert = base.resolve("clientcrt.pem").toFile();
File key = base.resolve("clientkey.pem").toFile();

Server server = new Server("ceres.frostillic.us", 3003, ca, cert, key, null, null, Executors.newSingleThreadExecutor());

The use of java.io.File classes here is a bit of a shame, but not the end of the world. In practice, you'd likely store your keychain somewhere on the filesystem anyway and then feed the BASE_PATH property to your app via an environment variable. Otherwise, if they were pulled from some other source, you could use Files.createTempFile to store them on the filesystem while your app is running. Those nulls are for the passphrases for the certificates, so they might be populated for you. For the last parameter, making a new executor is fine, but you might want to hand it a ManagedExecutorService.

You can initialize this connection basically anywhere - I put it in a ServletRequestListener to init and term the object per-request, but I think it would actually be fine to do it in a ServletContextListener and keep it app-wide. I did it out of habit from Notes.jar and its heavy requirements on threads and the way Session objects are different per-user, but that's not how Proton connections work.
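
For illustration, a per-request listener along those lines might look something like the sketch below. This is a guess at the shape rather than the actual code: the environment variable, database path, and the useDatabase accessor are all assumptions, and it presumes a Jakarta EE 9-level Servlet API.

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.Executors;
import jakarta.servlet.ServletRequestEvent;
import jakarta.servlet.ServletRequestListener;
import jakarta.servlet.annotation.WebListener;
import com.hcl.domino.db.model.Database;
import com.hcl.domino.db.model.Server;

@WebListener
public class DatabaseContextListener implements ServletRequestListener {
	public static final String ATTR_DB = DatabaseContextListener.class.getName() + "_db";

	@Override
	public void requestInitialized(ServletRequestEvent sre) {
		Path base = Paths.get(System.getenv("PROTON_KEY_PATH")); // hypothetical env variable
		Server server = new Server("ceres.frostillic.us", 3003,
			base.resolve("rootcrt.pem").toFile(),
			base.resolve("clientcrt.pem").toFile(),
			base.resolve("clientkey.pem").toFile(),
			null, null, Executors.newSingleThreadExecutor());
		// useDatabase here is an assumption about the Proton client API - adjust
		// to however you normally open your Database object
		Database database = server.useDatabase("apps/example.nsf");
		sre.getServletRequest().setAttribute(ATTR_DB, database);
	}

	@Override
	public void requestDestroyed(ServletRequestEvent sre) {
		// Release any per-request resources here if needed
	}
}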

This form of authentication is a prerequisite for Act-as-User below, but it might suffice for your needs anyway. For example, if you have a "utility" app, like a bot that looks up data and posts messages to Slack or something, you can call it good here.

Act-as-User

But though the above may suffice sometimes, the integration of user identity with data access is one of the hallmarks of Domino, so Domino apps that wouldn't need Act-as-User support are few and far between.

Act-as-User is a bit daunting to set up, though. In traditional server-to-server communication in Domino, it suffices to just add a server's name or group to the "Trusted Servers" list in the server doc of the server being accessed. Then, it will trust any old name that the app-housing server sends along. Generally, this will be a user that was already authenticated with Domino, like using an XPages-supplied Session object to access a remote server, but it doesn't have to be.

Act-as-User, though, uses OpenID Connect with an authentication server to do the actual authentication, and then the Proton task is told to accept those tokens as legal for acting on behalf of a given user. While you could in theory write your own OIDC server that dispenses tokens for any user name willy-nilly, in practice you'll almost definitely use an existing implementation. In the default case, that implementation will in turn almost definitely be IAM, an OAuth broker service included with the AppDev Pack that stores its configuration data in an NSF and reads users from Domino (or elsewhere) via LDAP.

IAM, though, isn't special in this regard. It's packaged with the ADP, sure, but the way it deals with tokens is entirely standards-based. That means that any compatible implementation can fill this role, and, since I'd heard great things about Keycloak, I figured I'd give that a shot. With some gracious assistance from Heiko Voigt, I was able to get this working - I don't want to steal his future thunder by going into too much detail, but honestly the main hurdles for me were just around learning how Keycloak works. Once you have the concepts down, you basically plug in the Keycloak client details in for the IAM ones in the same configuration.

With that set up, you can feed your user token from the web app into your Proton API calls, and then your actions will be running as your user in the same way as if you were authenticated in an on-server XPages or other app. The way this manifests in the Java API is a little weird, but it works well enough: almost all Proton calls have a varargs portion at the end of their method signature that takes OptionalArg instances. One such type is OptionalAccessToken, which takes your auth token as a String. I have a method that will stitch in an access token when present. That gets passed in when making calls, such as to read documents:

OptionalItemNames itemNamesArg = new OptionalItemNames(itemNames);
OptionalStart startArg = new OptionalStart((int)skip);
OptionalCount countArg = new OptionalCount(limit < 1 ? Integer.MAX_VALUE : (int)limit);
			
List<Document> docs = database.readDocuments(
	dql,
	composeArgs(
		itemNamesArg,
		startArg,
		countArg
	)
).get();
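
The composeArgs method there isn't shown here; as a hedged guess at its shape, it's essentially just appending an OptionalAccessToken when a token is available, roughly along these lines:

private OptionalArg[] composeArgs(OptionalArg... args) {
	// tokenSupplier stands in for the CDI-provided AccessTokenSupplier; the field name is illustrative
	String token = this.tokenSupplier.get();
	if (token == null || token.isEmpty()) {
		return args;
	}
	OptionalArg[] result = Arrays.copyOf(args, args.length + 1);
	result[args.length] = new OptionalAccessToken(token);
	return result;
}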

App Authentication

Okay, so that's what you do when you have a setup and a token, but that leaves the process of the user actually acquiring the token. From the user's perspective, this will generally take the form of doing an "OAuth dance" where, when the user tries to access a protected resource, they're sent over to Keycloak to authenticate, which then sends them back to the app with token in hand.

There are a lot of ways one might accomplish this, varying language-to-language, framework-to-framework, and server-to-server. You will be shocked to learn that I'm using Open Liberty for my app here, and that comes with built-in support for OIDC.

Before I go further, I should put forth a big caveat: I'm really muddling through with this one for the time being. The setup I have only kind of works, and is clearly not the ideal one, but it was enough to make the connection happen. I'm not sure if the right path long-term is to keep using this built-in feature or to switch to either a different built-in option or another library entirely. So... absolutely do not take anything here as advice in the correct way to do this.

Anyway, with that out of the way, you can configure your Liberty server to talk to your Keycloak server (or IAM, probably, but I didn't do that):

<openidConnectClient id="client01"
    clientId="liberty-tester"
    clientSecret="some-client-secret"
    discoveryEndpointUrl="https://some.keycloak.server/auth/realms/master/.well-known/openid-configuration"
    signatureAlgorithm="RS256"
    sslRef="httpSsl"
    accessTokenInLtpaCookie="true"
    userIdentifier="preferred_username"
    groupIdentifier="groups">
</openidConnectClient>
<ssl id="httpSsl" trustDefaultCerts="true" keyStoreRef="myKeyStore" trustStoreRef="OIDCTrustStore" />
<keyStore id="myKeyStore" password="super-secure-password1" type="PKCS12" location="${server.config.dir}/BasicKeyStore.p12" />
<keyStore id="OIDCTrustStore" password="super-secure-password2" type="PKCS12" location="${server.config.dir}/OIDCTrustStore.p12" />

The keyStores there contain appropriate certificate chains for the TLS connection to your Keycloak server, while the clientId and clientSecret match what you configure/generate on Keycloak for this new client app.

What got me able to actually use this token for downstream access was the accessTokenInLtpaCookie property. If you set that, then your HttpServletRequest objects after the initial one will have an oidc_access_token property on them containing your token in the format that Proton needs. So that's where the ContextDatabaseSupplier in the previous post got it:

@Produces
public AccessTokenSupplier getAccessToken() {
	return () -> {
		HttpServletRequest request = CDI.current().select(HttpServletRequest.class).get();
		return (String)request.getAttribute("oidc_access_token");
	};
}

This is also one of the parts that makes me think I'm not quite doing this ideally. It's weird that the token shows up in only requests after the first, though that wouldn't be an impediment in a lot of app types. It's also very unfortunate to have the app use a server-specific property like that.

Fortunately, Jakarta Security 3.0 sprouted official OIDC support, though the build of Open Liberty I was using didn't quite have all the pieces in place for that - reasonable, considering Jakarta EE 10 only officially came out yesterday and this was weeks ago. It looks like that may provide the token in a contextual object, so I'll have to give that a shot once support settles in.

With my setup in place, janky as it may be, I'm able to access a resource (e.g. a JAX-RS endpoint) marked with @RolesAllowed("uma_authorization") and the server will automatically kick me over to Keycloak and then accept the token when I get back. Then, I can pick that up from the request attributes and use it for Domino data access. Keycloak is getting its user directory from Domino via LDAP in the same way as IAM usually would, but, like IAM with AD, it could be configured to use different user directories. I don't know that I'll want to do that, but it's good to know.
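
For illustration, a protected resource of that sort can be as simple as the sketch below - the path and return value are arbitrary, while the role name and request attribute are the ones described above:

import jakarta.annotation.security.RolesAllowed;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.Context;
import jakarta.ws.rs.core.MediaType;

@Path("/secure")
public class SecureResource {
	@Context
	private HttpServletRequest request;

	@GET
	@RolesAllowed("uma_authorization")
	@Produces(MediaType.TEXT_PLAIN)
	public String hello() {
		// Populated by Liberty on requests after the first when
		// accessTokenInLtpaCookie is enabled
		String token = (String)request.getAttribute("oidc_access_token");
		return token == null ? "No token on this request yet" : "Token acquired";
	}
}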

Conclusion

Like the original driver itself, this was mostly an educational exercise for me. I don't currently have any requirements to use the AppDev Pack or OIDC/Keycloak, but I'd wanted to dip my toes in both for a while now, and I'm pleased that I came out successful. I imagine that I'll have an occasion to implement something like this eventually. It may not be the same specific parts, but the core concepts are common, like in Keep's JWT and OAuth support. It's a neat setup, and it's definitely worth doing something similar if you have some experimentation time on your hands.

Jakarta NoSQL Driver for the AppDev Pack, Part 1

Sep 9, 2022, 10:20 AM

Tags: java jnosql
  1. Jakarta NoSQL Driver for the AppDev Pack, Part 1
  2. Jakarta NoSQL Driver for the AppDev Pack, Part 2
  3. Jakarta NoSQL Driver for Keep

Though the bulk of the work I've been doing for the XPages Jakarta EE project is to bring JEE technologies to Domino, the NoSQL driver has been designed to lead a double life: it works in an XPages context, but it's written to not have any XPages dependencies. One reason for this is that I want it to be usable if you use, for example, the Open Liberty runtime project to side-car apps on a Domino server but still use Notes.jar for data access.

Another reason for its organization, though, is that I intended for the driver to be portable across implementations. The driver itself is split into a main bundle and a ".lsxbe" implementation bundle. My original thought was to make that ready for a JNX or Domino JNA implementation, but it's pretty flexible.

Proton

And that brings me to the topic of this post: the AppDev Pack. I'd actually not used the AppDev Pack at all until I started in on this little project. While I like the idea, it at first only had a JS client, and then even with a Java client lacked some important capabilities I'd need for most of what I do. That's still the case, unfortunately, but my general annoyance with the problems of developing on top of a local Notes runtime tipped the scales to get me to investigate it.

In essence, the core part of the AppDevPack - the Proton addin - is kind of like a modern take on the CORBA driver for Notes.jar, in that it provides a remote way to make API calls on the server that are "low-level-ish" and ideally higher-performance than REST APIs. Since it's remote, it imposes no requirements on the environment of the app making the calls, and could in theory work with any language that has a gRPC library (though in practice HCL has only ever shipped JS and Java clients without providing the gRPC spec to generate others).

The API that Proton provides is heavily geared around batch operations, focusing on using DQL for querying. This suits my needs well, as Jakarta NoSQL essentially assumes you'll have something like DQL available to do arbitrary document queries. It's also geared around requesting specific items from documents to return, which further suits me well - for efficiency purposes, I wrote code into the LSXBE driver to only read desired items anyway, so that adapted naturally.

The Driver

So, over the long weekend here, I set out to write a driver suitable for use in standalone JEE applications to access Domino via Proton. The goal here is to make it so that code written to target the LSXBE driver will work the same way with this Proton-based one, with the only differences being limitations in what Proton provides and a switch from providing a lotus.domino.Database context to providing a com.hcl.domino.db.model.Database one.

And, though the limitation list there is a bit lengthy, I accomplished the main part of my goal there. Since it shares a lot of code in common with the LSXBE driver, the implementation here is pretty small. The bulk of the code happens in ProtonDocumentCollectionManager, which is the implementation of all the raw JNoSQL primitives - insert, update, select, etc. - and then ProtonEntityConverter, which is the utility class that translates between Proton's concept of Document and JNoSQL's concept of DocumentEntity.

The Proton API is... very weird from a Java perspective. Some of it bears a little similarity to the NIO Files API if you squint, but otherwise it's just an odd duck. Fortunately, a consumer of this driver doesn't have to care about that beyond the initial connection to the server. The specifics of actually connecting to a server and opening a database are the same as with normal Proton use - you do that and then provide a CDI bean to hand off the Database object to the driver:

@RequestScoped
public class ContextDatabaseSupplier {
	// This type of supplier is required
	@Produces
	public DatabaseSupplier get() {
		// Here, the `Database` object is set in a `ServletRequestListener`, though it likely could be app-wide
		return () -> {
			HttpServletRequest request = CDI.current().select(HttpServletRequest.class).get();
			return (Database)request.getAttribute(DatabaseContextListener.ATTR_DB);
		};
	}
	
	// This supplier is optional - leaving it out or returning null will cause the driver to skip performing Act-As-User
	@Produces
	public AccessTokenSupplier getAccessToken() {
		return () -> {
			HttpServletRequest request = CDI.current().select(HttpServletRequest.class).get();
			return (String)request.getAttribute("oidc_access_token");
		};
	}
}

Next Steps

Now, admittedly, I'm not sure how much more time I'm going to put into this driver. Though there are a few enhancements that could be made - attachments and rich text, namely - I'm probably not going to use this myself in the near future. The immediate cases I have where I'd want to use this would end up involving things like views that aren't currently accessible over Proton, and so it'd really just be speculative development. Still, I wanted to make this to kick the tires a bit and see what's possible, and it's got my gears turning.

There'll also be a bit more I want to go over in the next post. In the above code block, I mention Act-As-User tokens; the driver will use those when provided, and the driver itself doesn't care how you acquire them, but it's an interesting topic on its own. During my testing, I was able to get a Liberty-hosted app performing OIDC authentication for its own login and then using that token for access to Proton, and I think that that will deserve some expansion on its own.

In the mean time, if you're curious, the driver is available up on GitHub:

https://github.com/OpenNTF/jnosql-driver-proton

The artifact is published to OpenNTF's Maven repository, though you'll need to follow the instructions in the README there to add the non-redistributable Proton client library to your local Maven repo.

WebAuthn/Passkey Login With JavaSapi

Jul 5, 2022, 2:05 PM

Tags: java webauthn
  1. Poking Around With JavaSapi
  2. Per-NSF-Scoped JWT Authorization With JavaSapi
  3. WebAuthn/Passkey Login With JavaSapi

Over the weekend, I got a wild hair to try something that had been percolating in my mind recently: get Passkeys, Apple's term for WebAuthn/FIDO technologies, working with the new Safari against a Domino server. And, though some aspects were pretty non-obvious (it turns out there's a LOT of binary-data stuff in JavaScript now), I got it working thanks to a few tools:

  1. JavaSapi, the "yeah, I know it's not supported" DSAPI Java peer
  2. webauthn.guide, a handy resource
  3. java-webauthn-server, a Java implementation of the WebAuthn server components

Definition

First off: what is all this? All the terms - Passkey, FIDO, WebAuthn - for our purposes here deal with a mechanism for doing public/private key authentication with a remote server. In a sense, it's essentially a web version of what you do with Notes IDs or SSH keys, where the only actual user-provided authentication (a password, biometric input, or the like) happens locally, and then the local machine uses its private key combined with the server's knowledge of the public key to prove its identity.

WebAuthn accomplishes this with the navigator.credentials object in the browser, which handles dealing with the local security storage. You pass objects between this store and the server, first to register your local key and then subsequently to log in with it. There are a lot of details I'm glossing over here, in large part because I don't know the specifics myself, but that will cover it.

While browsers have technically had similar cryptographic authentication for a very long time by way of client SSL certificates, this set of technologies makes great strides across the board - enough to make it actually practical to deploy for normal use. With Safari, anything with Touch ID or Face ID can participate in this, while on other platforms and browsers you can use either a security key or similar cryptographic storage. Since I have a little MacBook Air with Touch ID, I went with that.

The Flow

Before getting too much into the specifics, I'll talk about the flow. The general idea with WebAuthn is that the user gets to a point where they're creating a new account (where password doesn't yet matter) or logged in via another mechanism, and then have their browser generate the keypair. In this case, I logged in using the normal Domino login form.

Once in, I have a button on the page that will request that the browser create the keypair. This first makes a request to the server for a challenge object appropriate for the user, then creates the keypair locally, and then POSTs the results back to the server for verification. To the user, that looks like:

WebAuthn key creation in Safari

That keypair will remain in the local device's storage - in this case, Keychain, synced via iCloud in upcoming versions.

Then, I have another button that performs a challenge. In practice, this challenge would be configured on the login page, but it's on the same page for this proof-of-concept. The button causes the browser to request from the server a list of acceptable public keys to go with a given username, and then prompts the user to verify that they want to use a matching one:

WebAuthn assertion in Safari

The implementation details of what to do on success and failure are kind of up to you. Here, I ended up storing active authenticated sessions on the server in a Map from SessionID cookie value to user name, but all options are open.
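
For the curious, the bookkeeping for that can be as simple as a shared map - this is an illustrative sketch, not the project's actual code:

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public enum WebauthnSessionRegistry {
	instance;

	// Maps the value of the SessionID cookie to the authenticated user name
	private final Map<String, String> sessions = new ConcurrentHashMap<>();

	public void register(String sessionId, String userName) {
		sessions.put(sessionId, userName);
	}

	public Optional<String> getUserName(String sessionId) {
		return Optional.ofNullable(sessions.get(sessionId));
	}
}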

Dance Implementation

As I mentioned above, my main tool on the server side was java-webauthn-server, which handles a lot of the details of the process. For this experiment, I cribbed their InMemoryRegistrationStorage class, but a real implementation would presumably store this information in an NSF.

For the client side, there's an npm module to handle the specifics, but I was doing this all on a single HTML page and so I just borrowed from it in pieces (particularly the Base64/ByteArray stuff).

On the server, I created an OSGi plugin that contained a com.ibm.pvc.webcontainer.application webapp as well as the JavaSapi service: since they're both in the same OSGi bundle, that meant I could share the same classes and memory space, without having to worry about coordination between two very-distinct parts (as would be the case with DSAPI).

The webapp part itself actually does more of the work, and does so by way of four Servlets: one for the "create keys" options, one for registering a created key, one for the "I want to start a login" pre-flight, and finally one for actually handling the final authentication. As an implementation note, getting this working involved removing guava.jar from the classpath temporarily, as the library in question made heavy use of a version slightly newer than what Domino ships with.

Key Creation

The first Servlet assumes that the user is logged in and then provides a JSON object to the client describing what sort of keypair it should create:

Session session = ContextInfo.getUserSession();
			
PublicKeyCredentialCreationOptions request = WebauthnManager.instance.getRelyingParty()
	.startRegistration(
		StartRegistrationOptions.builder()
			// Creates or retrieves an in-memory object with an associated random "handle" value
		    .user(WebauthnManager.instance.locateUser(session))
		    .build()
	);

// Store in the HTTP session for later verification. Could also be done via cookie or other pairing
req.getSession(true).setAttribute(WebauthnManager.REQUEST_KEY, request);
	
String json = request.toCredentialsCreateJson();
resp.setStatus(200);
resp.setHeader("Content-Type", "application/json"); //$NON-NLS-1$ //$NON-NLS-2$
resp.getOutputStream().write(String.valueOf(json).getBytes());

The client side retrieves these options, parses some Base64'd parts to binary arrays (this is what the npm module would do), and then sends that back to the server to create the registration:

fetch("/webauthn/creationOptions", { "credentials": "same-origin" })
	.then(res => res.json())
	.then(json => {
		json.publicKey.challenge = base64urlToBuffer(json.publicKey.challenge, c => c.charCodeAt(0));
		json.publicKey.user.id = base64urlToBuffer(json.publicKey.user.id, c => c.charCodeAt(0));
		if(json.publicKey.excludeCredentials) {
			for(var i = 0; i < json.publicKey.excludeCredentials.length; i++) {
				var cred = json.publicKey.excludeCredentials[i];
				cred.id = base64urlToBuffer(cred.id);
			}
		}
		navigator.credentials.create(json)
			.then(credential => {
				// Create a JSON-friendly payload to send to the server
				const payload = {
					type: credential.type,
					id: credential.id,
					response: {
						attestationObject: bufferToBase64url(credential.response.attestationObject),
						clientDataJSON: bufferToBase64url(credential.response.clientDataJSON)
					},
					clientExtensionResults: credential.getClientExtensionResults()
				}
				fetch("/webauthn/create", {
					method: "POST",
					body: JSON.stringify(payload)
				})
			})
	})

The code for the second call on the server parses out the POST'd JSON and stores the registration in the in-memory storage (which would properly be an NSF):

String json = StreamUtil.readString(req.getReader());
PublicKeyCredential<AuthenticatorAttestationResponse, ClientRegistrationExtensionOutputs> pkc = PublicKeyCredential
	.parseRegistrationResponseJson(json);

// Retrieve the request we had stored in the session earlier
PublicKeyCredentialCreationOptions request = (PublicKeyCredentialCreationOptions) req.getSession(true)
	.getAttribute(WebauthnManager.REQUEST_KEY);

// Perform registration, which verifies that the incoming JSON matches the initiated request
RelyingParty rp = WebauthnManager.instance.getRelyingParty();
RegistrationResult result = rp
	.finishRegistration(FinishRegistrationOptions.builder().request(request).response(pkc).build());

// Gather the registration information to store in the server's credential repository
DominoRegistrationStorage repo = WebauthnManager.instance.getRepository();
CredentialRegistration reg = new CredentialRegistration();
reg.setAttestationMetadata(Optional.ofNullable(pkc.getResponse().getAttestation()));
reg.setUserIdentity(request.getUser());
reg.setRegistrationTime(Instant.now());
RegisteredCredential credential = RegisteredCredential.builder()
	.credentialId(result.getKeyId().getId())
	.userHandle(request.getUser().getId())
	.publicKeyCose(result.getPublicKeyCose())
	.signatureCount(result.getSignatureCount())
	.build();
reg.setCredential(credential);
reg.setTransports(pkc.getResponse().getTransports());

Session session = ContextInfo.getUserSession();
repo.addRegistrationByUsername(session.getEffectiveUserName(), reg);

Login/Assertion

Once the client has a keypair and the server knows about the public key, then the client can ask the server for what it would need if one were to log in as a given name, and then uses that information to make a second call. The dance on the client side looks like:

var un = /* fetch the username from somewhere, such as a login form */
fetch("/webauthn/assertionRequest?un=" + encodeURIComponent(un))
	.then(res => res.json())
	.then(json => {
		json.publicKey.challenge = base64urlToBuffer(json.publicKey.challenge, c => c.charCodeAt(0));
		if(json.publicKey.allowCredentials) {
			for(var i = 0; i < json.publicKey.allowCredentials.length; i++) {
				var cred = json.publicKey.allowCredentials[i];
				cred.id = base64urlToBuffer(cred.id);
			}
		}
		navigator.credentials.get(json)
			.then(credential => {
				const payload = {
					type: credential.type,
					id: credential.id,
					response: {
						authenticatorData: bufferToBase64url(credential.response.authenticatorData),
						clientDataJSON: bufferToBase64url(credential.response.clientDataJSON),
						signature: bufferToBase64url(credential.response.signature),
						userHandle: bufferToBase64url(credential.response.userHandle)
					},
					clientExtensionResults: credential.getClientExtensionResults()
				}
				fetch("/webauthn/assertion", {
					method: "POST",
					body: JSON.stringify(payload),
					credentials: "same-origin"
				})
			})
	})

That's pretty similar to the middle code block above, really, and contains the same sort of ferrying to and from transport-friendly JSON objects and native credential objects.

On the server side, the first Servlet - which looks up the available public keys for a user name - is comparatively simple:

String userName = req.getParameter("un"); //$NON-NLS-1$
AssertionRequest request = WebauthnManager.instance.getRelyingParty()
	.startAssertion(
		StartAssertionOptions.builder()
			.username(userName)
			.build()
	);

// Stash the current assertion request
req.getSession(true).setAttribute(WebauthnManager.ASSERTION_REQUEST_KEY, request);

String json = request.toCredentialsGetJson();
resp.setStatus(200);
resp.setHeader("Content-Type", "application/json"); //$NON-NLS-1$ //$NON-NLS-2$
resp.getOutputStream().write(String.valueOf(json).getBytes());

The final Servlet handles parsing out the incoming assertion (login) and stashing it in memory as associated with the "SessionID" cookie. That value could be anything that the browser will send with its requests, but "SessionID" works here.

String json = StreamUtil.readString(req.getReader());
PublicKeyCredential<AuthenticatorAssertionResponse, ClientAssertionExtensionOutputs> pkc =
	    PublicKeyCredential.parseAssertionResponseJson(json);

// Retrieve the request we had stored in the session earlier
AssertionRequest request = (AssertionRequest) req.getSession(true).getAttribute(WebauthnManager.ASSERTION_REQUEST_KEY);

// Perform verification, which will ensure that the signed value matches the public key and challenge
RelyingParty rp = WebauthnManager.instance.getRelyingParty();
AssertionResult result = rp.finishAssertion(FinishAssertionOptions.builder()
	.request(request)
	.response(pkc)
	.build());

if(result.isSuccess()) {
	// Keep track of logins
	WebauthnManager.instance.getRepository().updateSignatureCount(result);
	
	// Find the session cookie, which they will have by now
	String sessionId = Arrays.stream(req.getCookies())
		.filter(c -> "SessionID".equalsIgnoreCase(c.getName()))
		.map(Cookie::getValue)
		.findFirst()
		.get();
	WebauthnManager.instance.registerAuthenticatedUser(sessionId, result.getUsername());
}

Trusting the Key

At this point, there's a dance between the client and server that results in the client being able to perform secure, password-less authentication and the server knowing about the association. The remaining job is getting the server to actually trust this assertion, and that's where JavaSapi comes in.

Above, I used the "SessionID" cookie as a mechanism to store an in-memory association between a browser cookie (which is independent of authentication) to a trusted user. Then, I made a JavaSapi service that looks for this and tries to find an authenticated user in its authenticate method:

@Override
public int authenticate(IJavaSapiHttpContextAdapter context) {
	Cookie[] cookies = context.getRequest().getCookies();
	if(cookies != null) {
		Optional<Cookie> cookie = Arrays.stream(context.getRequest().getCookies())
			.filter(c -> "SessionID".equalsIgnoreCase(c.getName())) //$NON-NLS-1$
			.findFirst();
		if(cookie.isPresent()) {
			String sessionId = cookie.get().getValue();
			Optional<String> user = WebauthnManager.instance.getAuthenticatedUser(sessionId);
			if(user.isPresent()) {
				context.getRequest().setAuthenticatedUserName(user.get(), "WebAuthn"); //$NON-NLS-1$
				return HTEXTENSION_REQUEST_AUTHENTICATED;
			}
		}
	}
	
	return HTEXTENSION_EVENT_DECLINED;
}

And that's all there is to it on the JavaSapi side. Because it shares the same active memory space as the webapp doing the dance, it can use the same WebauthnManager instance to read the in-memory association. You could in theory do this another way with DSAPI - storing the values in an NSF or some other mechanism that can be shared - but this is much, much simpler to do.

Conclusion

This was a neat little project, and it was a good way to learn a bit about some of the native browser objects and data types that I haven't had occasion to work with before. I think this is also something that should be in the product; if you agree, go vote for the ideas from a few years ago.

Rewriting The OpenNTF Site With Jakarta EE: UI

Jun 27, 2022, 3:06 PM

  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

In what may be the last in this series for a bit, I'll talk about the current approach I'm taking for the UI for the new OpenNTF web site. This post will also tread ground I've covered before, when talking about the Jakarta MVC framework and JSP, but it never hurts to reinforce the pertinent aspects.

MVC

The entrypoint for the UI is Jakarta MVC, which is a framework that sits on top of JAX-RS. Unlike JSF or XPages, it leaves most app-structure duties to other components. This is due both to its young age (JSF predates and often gave rise to several things we've discussed so far) and its intent. It's "action-based", where you define an endpoint that takes an incoming HTTP request and produces a response, and generally won't have any server-side UI state. This is as opposed to JSF/XPages, where the core concept is the page you're working with and the page state generally exists across multiple requests.

Your starting point with MVC is a JAX-RS REST service marked with @Controller:

package webapp.controller;

import java.text.MessageFormat;

import bean.EncoderBean;
import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.home.Page;

@Path("/pages")
public class PagesController {
    
    @Inject
    Models models;
    
    @Inject
    Page.Repository pageRepository;
    
    @Inject
    EncoderBean encoderBean;

    @Path("{pageId}")
    @GET
    @Produces(MediaType.TEXT_HTML)
    @Controller
    public String get(@PathParam("pageId") String pageId) {
        String key = encoderBean.cleanPageId(pageId);
        Page page = pageRepository.findBySubject(key)
            .orElseThrow(() -> new NotFoundException(MessageFormat.format("Unable to find page for ID: {0}", key)));
        models.put("page", page); //$NON-NLS-1$
        return "page.jsp"; //$NON-NLS-1$
    }
}

In the NSF, this will respond to requests like /foo.nsf/xsp/app/pages/Some_Page_Name. Most of what is going on here is the same sort of thing we saw with normal REST services: the @Path, @GET, @Produces, and @PathParam are all normal JAX-RS, while @Inject uses the same CDI scaffolding I talked about in the last post.

MVC adds two things here: @Inject Models models and @Controller.

The Models object is conceptually a Map that houses variables that you can populate to be accessible via EL on the rendered page. You can think of it like viewScope or requestScope in XPages, and it's populated at a point akin to the beforePageLoad phase. Here, I use the Models object to store the Page object I look up with JNoSQL.

The @Controller annotation marks a method or a class as participating in the MVC lifecycle. When placed on a class, it applies to all methods on the class, while placing it on a method specifically allows you to mix MVC and "normal" REST resources in the same class. Doing that would be useful if you want to, for example, provide HTML responses to browsers and JSON responses to API clients at the same resource URL.
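
For example, a hedged sketch of that mixing, adapting the controller above (same imports as before, and the method names here are just illustrative): the same path serves HTML via MVC to browsers and plain JSON to API clients, with JAX-RS picking the method based on the Accept header.

@Path("/pages")
public class PagesController {
    @Inject
    Models models;
    
    @Inject
    Page.Repository pageRepository;
    
    @GET
    @Path("{pageId}")
    @Produces(MediaType.TEXT_HTML)
    @Controller
    public String getHtml(@PathParam("pageId") String pageId) {
        models.put("page", pageRepository.findBySubject(pageId).orElseThrow(NotFoundException::new)); //$NON-NLS-1$
        return "page.jsp"; //$NON-NLS-1$
    }
    
    // No @Controller here, so this behaves as a normal REST resource
    @GET
    @Path("{pageId}")
    @Produces(MediaType.APPLICATION_JSON)
    public Page getJson(@PathParam("pageId") String pageId) {
        return pageRepository.findBySubject(pageId).orElseThrow(NotFoundException::new);
    }
}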

When a resource method is marked for MVC use, it can return a string that represents either a page to render or a redirection in the form "redirect:some/resource". Here, it's hard-coded to use "page.jsp", but in another situation it could programmatically switch between different pages based on the content of the request or state of the app.

While this looks fairly clean on its own, it's important to bear in mind both the strengths and weaknesses of this approach. I think it will work here, as it does for my blog, because the OpenNTF site isn't heavy on interactive forms. When dealing with forms in MVC, you'll have to have another endpoint to listen for @POST (or other verbs with a shim), process that request from scratch, and return a new page. For example, from the XPages JEE example app:

@Path("create")
@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Controller
public String createPerson(
        @FormParam("firstName") @NotEmpty String firstName,
        @FormParam("lastName") String lastName,
        @FormParam("birthday") String birthday,
        @FormParam("favoriteTime") String favoriteTime,
        @FormParam("added") String added,
        @FormParam("customProperty") String customProperty
) {
    Person person = new Person();
    composePerson(person, firstName, lastName, birthday, favoriteTime, added, customProperty);
    
    personRepository.save(person);
    return "redirect:nosql/list";
}

That's already fiddlier than the XPages version, where you'd bind fields right to bean/document properties, and it gets potentially more complicated from there. In general, the more form-based your app is, the better a fit XPages/JSF is.

JSP

While MVC isn't intrinsically tied to JSP (it ships with several view engine hooks and you can write your own), JSP has the advantage of being built in to all Java webapp servers and is very well fit to purpose. When writing JSPs for MVC, the default location is to put them in WEB-INF/views, which is beneath WebContent in an NSF project:

Screenshot of JSPs in an NSF

The "tags" there are the general equivalent of XPages Custom Controls, and their presence in WEB-INF/tags is convention. An example page (the one used above) will tend to look something like this:

<%@page contentType="text/html" pageEncoding="UTF-8" trimDirectiveWhitespaces="true" session="false" %>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
<t:layout>
    <turbo-frame id="page-content-${page.linkId}">
        <div>
            ${page.html}
        </div>
        
        <c:if test="${not empty page.childPageIds}">
            <div class="tab-container">
                <c:forEach items="${page.cleanChildPageIds}" var="pageId" varStatus="pageLoop">
                    <input type="radio" id="tab${pageLoop.index}" name="tab-group" ${pageLoop.index == 0 ? 'checked="checked"' : ''} />
                    <label for="tab${pageLoop.index}">${fn:escapeXml(encoder.cleanPageId(pageId))}</label>
                </c:forEach>
                    
                <div class="tabs">
                    <c:forEach items="${page.cleanChildPageIds}" var="pageId">
                        <turbo-frame id="page-content-${pageId}" src="xsp/app/pages/${encoder.urlEncode(pageId)}" class="tab" loading="lazy">
                        </turbo-frame>
                    </c:forEach>
                </div>
            </div>
        </c:if>
    </turbo-frame>
</t:layout>

There are, by shared lineage and concept, a lot of similarities with an XPage here. The first four lines of preamble boilerplate are pretty similar to the kind of stuff you'd see in an <xp:view/> element to set up your namespaces and page options. The tag prefixing is the same idea, where <t:layout/> refers to the "layout" custom tag in the NSF and <c:forEach/> refers to a core control tag that ships with the standard tag library, JSTL. The <turbo-frame/> business isn't JSP - I'll deal with that later.

The bits of EL here - all wrapped in ${...} - are from Expression Language 4.0, which is the current version of XPages's aging EL. On this page, the expressions are able to resolve variables that we explicitly put in the Models object, such as page, as well as CDI beans with the @Named annotation, such as encoderBean. There are also a number of implicit objects like request, but they're not used here.

In general, this is safely thought of as an XPage where you make everything load-time-bound and set viewState="nostate". The same sorts of concepts are all there, but there's no concept of a persistent component that you interact with. Any links, buttons, and scripts will all go to the server as a fresh request, not modifying an existing page. You can work with application and session scopes, but there's no "view" scope.

Hotwired Turbo

Though this app doesn't have much need for a lot of XPages's capabilities, I do like a few components even for a mostly "read-only" app. In particular, the <xe:djContentPane/> and <xe:djTabContainer/> controls have the delightful capability of deferring evaluation of their contents to later requests. This is a powerful way to speed up initial page load and, in the case of the tab container, skip needing to render parts of the page the user never uses.

For this and a couple other uses, I'm a fan of Hotwired Turbo, which is a library that grew out of 37 Signals's Rails-based development. The goal of Turbo and the other Hotwired components is to keep the benefits of server-based HTML rendering while mixing in a lot of the niceties of JS-run apps. There are two things that Turbo is doing so far in this app.

The first capability is dubbed "Turbo Drive", and it's sort of a freebie: you enable it for your app, tell it what is considered the app's base URL, and then it will turn any in-app links into "partial refresh" links: it downloads the page in the background and replaces just the changed part on the page. Though this is technically doing more work than a normal browser navigation, it ends up being faster for the user interface. And, since it also updates the URL to match the destination page and doesn't require manual modification of links, it's a drop-in upgrade that will also degrade gracefully if JavaScript isn't enabled.

The second capability is <turbo-frame/> up there, and it takes a bit more buy-in to the JS framework in your app design. The way I'm using Turbo Frames here is to support the page structure of OpenNTF, which is geared around a "primary" page as well as zero or more referenced pages that show up in tabs. Here, I'm buying in to Turbo Frames by surrounding the whole page in a <turbo-frame/> element with an id using the page's key, and then I reference each "sub-page" in a tab with that same ID. When loading the frame, Turbo makes a call to the src page, finds the element with the matching id value, and drops it in place inside the main document. The loading="lazy" parameter means that it defers loading until the frame is visible in the browser, which is handy when using the HTML/CSS-based tabs I have here.

I've been using this library for a while now, and I've been quite pleased. Though it was created for use with Rails, the design is independent of the server implementation, and the idioms fit perfectly with this sort of Java app too.

Conclusion

I think that wraps it up for now. As things progress, I may have more to add to this series, but my hope is that the app doesn't have to get much more complicated than the sort of stuff seen in this series. There are certainly big parts to tackle (like creating and managing projects), but I plan to do that by composing these elements. I remain delighted with this mode of NSF-based app development, and look forward to writing more clean, semi-declarative code in this vein.

Rewriting The OpenNTF Site With Jakarta EE: Beans

Jun 24, 2022, 5:03 PM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

Now that I've covered the basics of REST services and data access in the new OpenNTF web site, I'll dive a bit into the use of CDI for beans. The two previous topics implied some of the deeper work of CDI, with the @Inject annotation being used by CDI to supply bean and proxy values, but in those cases it was fine to just assume what it was doing.

CDI itself - Contexts and Dependency Injection - contains more capabilities than I'll cover here. Some of them, like its event/observer system, are things that I'll probably end up using in this app, but haven't made their way in yet. For now, I'll talk about the basic "managed beans" level and then build to the way Jakarta NoSQL uses its proxy-bean capabilities.

Managed Beans

In the OpenNTF site, I use a couple beans, some to provide scoped state and some to provide "services" for the app. I'll start with one of the simpler ones, a bean used to convert Markdown to HTML using CommonMark. I use a more-complicated version of this bean in my blog, but for now the OpenNTF one is small:

package bean;

import org.commonmark.node.Node;
import org.commonmark.parser.Parser;
import org.commonmark.renderer.html.HtmlRenderer;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Named;

@ApplicationScoped
@Named("markdown")
public class MarkdownBean {
    private Parser markdown = Parser.builder().build();
    private HtmlRenderer markdownHtml = HtmlRenderer.builder()
            .build();

    public String toHtml(final String text) {
        Node parsed = markdown.parse(text);
        return markdownHtml.render(parsed);
    }
}

The core concepts here are exactly the same as you have with XPages Managed Beans. The "bean" itself is just a Java object and doesn't need to have any particular special characteristics other than, if it's stored in a serialized context, being Serializable or otherwise storable. The only difference here for that purpose is that, rather than being configured in faces-config.xml, the bean attributes are defined inline (there's a "beans.xml" for explicit definitions, but it's not needed in common cases). Here, the @ApplicationScoped annotation will cover its scope and the @Named annotation will allow it to be addressable by name in contexts like JSP or XPages. A CDI bean doesn't have to be named, but it's common in cases where the bean will be used in the UI.

Once a bean is defined, the most common way to use it is to use the @Inject annotation on another CDI-capable class, such as another bean or a JAX-RS resource. For example, it could be injected into a controller class like:

@Path("/blog")
@Controller
public class BlogController {
    @Inject
    private MarkdownBean markdown;

    // (snip)
}

CDI will handle the dirty business of making sure the field is populated, and that all scopes are respected. You can also retrieve a bean programmatically, with just a bit of gangliness:

MarkdownBean markdown = CDI.current().select(MarkdownBean.class).get();

You can think of that one as roughly equivalent to ExtLibUtil.resolveVariable(...).

By default, CDI comes with a few main scopes for our normal use: @ApplicationScoped, @SessionScoped, @RequestScoped, and @ConversationScoped. The last one is a bit weird: it kind of covers whatever your framework considers a "conversation". It's kind of like the view scope in XPages, and in the XPages JEE support project I mapped it to that, but it could also potentially be a conversation between distinct pages in an app. JSF, for its part, has its own @ViewScoped annotation, and I'm considering stealing or reproducing that.

That touches on the last bit I'll mention for this "basic" section of CDI: scope definitions. Though CDI comes with a handful of standard scopes, the same mechanism that defines them is open to application developers. You could, for example, make an @InvoicingScope to cover beans that exist for the duration of a billing process, and then you'd manage initiating and terminating the scope yourself. Usually, this isn't necessary or particularly useful, but it's good to know it's there.
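
As a hedged illustration of the declaration side of that (the annotation alone isn't enough - a real custom scope also needs a Context implementation registered via a CDI extension to actually manage bean lifetimes):

import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import jakarta.enterprise.context.NormalScope;

// Hypothetical scope annotation for beans that live for one billing run
@NormalScope
@Retention(RUNTIME)
@Target({ TYPE, METHOD, FIELD })
@Documented
public @interface InvoicingScope {
}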

Producer Methods

The next level of this is the ability of a bean to programmatically produce beans for downstream use. By this I mean that a bean's method can be annotated with @Produces, and then it can provide a type to be matched elsewhere. In the OpenNTF app, I use this as a way to delay loading of a resource bundle until it's actually used:

package bean;

import java.util.ResourceBundle;

import jakarta.enterprise.context.RequestScoped;
import jakarta.enterprise.inject.Produces;
import jakarta.inject.Inject;
import jakarta.inject.Named;
import jakarta.servlet.http.HttpServletRequest;

@RequestScoped
public class TranslationBean {
    @Inject
    HttpServletRequest request;

    @Produces @Named("translation")
    public ResourceBundle getTranslation() {
        return ResourceBundle.getBundle("translation", request.getLocale()); //$NON-NLS-1$
    }
}

Here, TranslationBean itself exists as a request-scoped bean and can be used programmatically, but it's really a shell for delayed retrieval of a ResourceBundle named "translation" for use in the UI. This allows me to use the built-in mapping behavior of ResourceBundle in Expression Language when writing bits of JSP like <p>${translation.copyright}</p>.

You can get more complicated than this, for sure. For example, if I switch the UI of this app to XPages, I may replace my classic controller framework with one that uses such a producer bean instead of the ViewHandler I used in the original implementation.

Proxy Beans

Finally, I'll talk a bit about dynamically-created proxy beans.

CDI's implementations make heavy use of object proxies to do their work. Technically, injected objects are proxies themselves, which allows CDI to let you do stuff like inject a @RequestScoped bean into an @ApplicationScoped one. But the weird part of CDI I plan to talk about here is the use of proxies to provide an object for an interface that doesn't have any implementation class.

I've mentioned this sort of injection a few times:

@Path("/pages")
public class PagesController {
    @Inject
    Page.Repository pageRepository;

    // snip

And then the interface is just:

@RepositoryProvider("homeRepository")
public interface Repository extends DominoRepository<Page, String> {
    Optional<Page> findBySubject(String subject);
}

There's no class that implements Page.Repository, so how come you can call methods on it? That's where the proxying comes in. While the CDI container (in this case, our NSF-based app) is being initialized, the Domino JNoSQL driver looks for classes implementing DominoRepository:

<T extends DominoRepository> void onProcessAnnotatedType(@Observes final ProcessAnnotatedType<T> repo) {
    Class<T> javaClass = repo.getAnnotatedType().getJavaClass();
    if (DominoRepository.class.equals(javaClass)) {
        return;
    }
    if (DominoRepository.class.isAssignableFrom(javaClass) && Modifier.isInterface(javaClass.getModifiers())) {
        crudTypes.add(javaClass);
    }
}

Then, once they're all found, it registers a special kind of bean for them:

void onAfterBeanDiscovery(@Observes final AfterBeanDiscovery afterBeanDiscovery, final BeanManager beanManager) {
    crudTypes.forEach(type -> afterBeanDiscovery.addBean(new DominoRepositoryBean(type, beanManager)));
}

I mentioned above that beans are generally just normal Java classes, but you can also make beans by implementing jakarta.enterprise.inject.spi.Bean, which gives you programmatic control over many aspects of the bean, including providing its actual implementation. In the Domino driver's case, as in most/all of the JNoSQL drivers, this is done by providing a proxy object:

public DominoRepository<?, ?> create(CreationalContext<DominoRepository<?, ?>> creationalContext) {
    DominoTemplate template = /* Instance of a DominoTemplate, which handles CRUD operations */;
    Repository<Object, Object> repository = /* JNoSQL's default Repository */;

    DominoDocumentRepositoryProxy<DominoRepository<?, ?>> handler = new DominoDocumentRepositoryProxy<>(template, this.type, repository);
    return (DominoRepository<?, ?>) Proxy.newProxyInstance(type.getClassLoader(), new Class[] { type }, handler);
}

Finally, that proxy class implements java.lang.reflect.InvocationHandler, which lets it provide custom handling of incoming methods.
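
As a toy illustration of that mechanism (not the actual driver code), every call on a proxied interface is funneled through a single invoke method, which can decide what to do based on the Method object it receives:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ExampleRepositoryProxy implements InvocationHandler {
	@Override
	public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
		// A real handler would parse the method name and annotations into a query here
		System.out.println("Intercepted call to " + method.getName()); //$NON-NLS-1$
		return null;
	}
	
	@SuppressWarnings("unchecked")
	public static <T> T create(Class<T> repositoryInterface) {
		return (T) Proxy.newProxyInstance(
			repositoryInterface.getClassLoader(),
			new Class<?>[] { repositoryInterface },
			new ExampleRepositoryProxy()
		);
	}
}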

This well goes deep, including the way JNoSQL will parse out method names and parameters to handle queries, but I think that will suffice for now. The important thing to know is that this is possible to do, common in underlying frameworks, and fairly rare in application code.

Next Up

I'm winding down on major topics, but at least one critical one remains: the actual UI. Currently (and likely when shipping), the app uses MVC and JSP to cover this need. I've discussed these before, but I think it'll be useful to do so again, both as a refresher and to show how they bring these other parts of the app together.

Rewriting The OpenNTF Site With Jakarta EE: Data Access

Jun 21, 2022, 10:12 AM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

In my last post, I talked about how I make use of Jakarta REST to handle the REST services in the new OpenNTF site I'm working on. There'll be more to talk about on that front when I get to the UI and my use of MVC. For now, though, I'll dive a bit into how I'm accessing NSF data.

I've been talking a lot lately about how I've been fleshing out the Jakarta NoSQL driver for Domino that comes as part of the XPages JEE project, and specifically how writing this app has proven to be an ideal impetus for adding specific capabilities that are needed for working with Domino. This demonstrates some of the fruit of that labor.

Model Objects

There are a few ways to interact with Jakarta NoSQL, and they vary a bit by database type (key/value, column, document, graph), but I focus on using the Repository interface capability, which is a high-level abstraction over the pool of documents.

Before I get to that, though, I'll start with an entity object. Part of the heavy lifting that a framework like Jakarta NoSQL does is to map between a Java class and the actual data representation. In the SQL world, one would likely come across the term object-relational mapping for this, and the concept is generally the same. The project currently has a handful of such classes, and so the data layer looks like this:

Screenshot of Designer showing the data-related classes in the NSF

The mechanism for mapping a class in JNoSQL is very similar to JPA:

@Entity("Release")
public class ProjectRelease {
    
    public enum ReleaseStatus {
        Yes, No
    }
    
    @Id
    private String documentId;
    @Column("ProjectName")
    private String projectName;
    @Column("ReleaseNumber")
    private String version;
    @Column("ReleaseDate")
    private Temporal releaseDate;
    @Column("WhatsNewAbstract")
    private String description;
    @Column("DownloadsRelease")
    private int downloadCount;
    @Column("MainID")
    private String mainId;
    @Column("ReleaseInCatalog")
    private ReleaseStatus releaseStatus;
    @Column("DocAuthors")
    private List<String> docAuthors;
    @Column(DominoConstants.FIELD_ATTACHMENTS)
    private List<EntityAttachment> attachments;

    /* getters/setters and utility methods here */
}

@Entity("Release") at the top there declares that this class is a JNoSQL entity, and then the Domino driver uses "Release" as the form name when creating documents and performing queries.

The @Id and @Column("...") annotations map Java object properties to fields and attributes on the document. @Id populates the property with the document's UNID, while @Column maps a named item. There's a special one there - @Column(DominoConstants.FIELD_ATTACHMENTS) - that will populate the field with references to the document's attachments when present. In each of these cases, all of the heavy lifting is done by the driver: there's no code in the app that manually accesses documents or views.

Repositories

The way I get access to documents mapped by these classes is to use the JNoSQL Repository mechanism, by way of the extended DominoRepository interface. They look like this (used here as an inner class for stylistic reasons, not technical ones):

@Entity("Release")
public class ProjectRelease {

    @RepositoryProvider("projectsRepository")
    public interface Repository extends DominoRepository<ProjectRelease, String> {
        Stream<ProjectRelease> findByProjectName(String projectName, Sorts sorts);

        @ViewEntries("ReleasesByDate")
        Stream<ProjectRelease> findRecent(Pagination pagination);
        
        @ViewDocuments("IP Management\\Pending Releases")
        Stream<ProjectRelease> findPendingReleases();
    }

    /* snip: entity class from above */
}

Merely by creating this interface, I'm able to get access to the associated documents: I don't actually have to implement it myself. As seen in the last post, these interfaces can be injected into a bean or REST resource using CDI:

public class IPProjectsResource {
    
    @Inject
    private ProjectRelease.Repository projectReleases;

    /* snip */
}

Naturally, there is implementation code for this repository, but it's all done with what amounts to "Java magic": proxy objects and CDI. That's a huge topic on its own, and it's pretty weird to realize that that's even possible, but it will have to suffice for now to say that it is possible and it works great.

When you create one of these repositories, you get basic CRUD capabilities "for free": you can create new documents, look up existing documents by ID, and modify or delete existing documents.
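
As a quick hedged sketch of what that looks like in practice (findById and save show up elsewhere in this series, deleteById follows the same pattern, and the getters/setters here are assumed from the entity's fields):

ProjectRelease release = new ProjectRelease();
release.setProjectName("Some Project");
projectReleases.save(release); // creates a new "Release" document in the NSF

String unid = "12345678901234567890123456789012"; // illustrative UNID
Optional<ProjectRelease> existing = projectReleases.findById(unid);
existing.ifPresent(r -> projectReleases.deleteById(unid));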

Basic Queries

Beyond that, JNoSQL will do some lifting for you to provide sensible implementations for methods based on their method signature, in the absence of any driver-specific code. I'm making use of that here with findByProjectName(String projectName, Sorts sorts). The proxy object that provides this implementation is able to glean that String projectName refers to the projectName field of the ProjectRelease class, which is then mapped by annotation to the ProjectName item on the back end. The Sorts object is a JNoSQL type that allows you to specify one or more sort columns and their orders. When executed, this is translated to a DQL query like:

Form = 'Release' and ProjectName = 'Some Project'

When Sorts are specified, this is also run through QueryResultsProcessor to create a QRP view with the given sort columns in a local temp database. Thanks to that, running the same query multiple times when the data hasn't changed will be very speedy.

You can customize these queries further by adding more parameters, or by using the @Query annotation to provide a SQL-like query with parameters.
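
For example, a hedged sketch of the @Query form, adding a method to the repository above - the query text is JNoSQL's own dialect and the specifics here are illustrative rather than lifted from the app:

@Query("select * from Release where projectName = @name")
Stream<ProjectRelease> findByName(@Param("name") String name);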

Domino-Specific Queries

Since Domino is so view-heavy and DQL+QRP isn't quite at the level where you can just throw any old query+extraction at it and expect it to perform well, it made sense for me to add extensions to JNoSQL to explicitly target views as sources. I use them both here, in one case to efficiently retrieve view data without opening documents and in another in order to piggyback on an existing view used by the IP Tools services already deployed.

The @ViewEntries("ReleasesByDate") annotation causes the findRecent annotation to skip JNoSQL's normal interpretation of the method and instead be handled by the Domino driver directly. It will open that view and read entries based on the Pagination rules sent to it (another JNoSQL object). Since the columns in this view line up to the item names in the documents, I'm able to get useful entity objects out if it without having to actually crack open the docs. In practice, I'll need to be careful when using this so as to not save entities like this back into the database, since not ALL columns are present in the view, but that's a reasonable caveat to have.

The @ViewDocuments("IP Management\\Pending Releases") annotation causes findPendingReleases to read full documents out of the named view, ignoring view columns. Eventually, I'll likely replace this with an equivalent query in JNoSQL's dialect, but for now it's more practical to just use the existing view like a stored query and not have to translate the selection formula to another mechanism.

Repository Provider

The last thing to touch on with this repository is the @RepositoryProvider annotation. The OpenNTF web site is stored in its own NSF, and then references several other NSFs, such as the projects DB, the blog DB (which is still based on BlogSphere), and the patron directory. The @RepositoryProvider annotation allows me to tell JNoSQL to use a different database than the current one, and it does so by finding a matching CDI producer method that gives it a lotus.domino.Database housing the documents and a high-privilege lotus.domino.Session to create QRP views. In this app's case, that's this in another bean:

@Produces
@jakarta.nosql.mapping.Database(value = DatabaseType.DOCUMENT, provider = "projectsRepository")
public DominoDocumentCollectionManager getProjectsManager() {
    return new DefaultDominoDocumentCollectionManager(
        () -> getProjectsDatabase(),
        () -> getSessionAsSigner()
    );
}

I'll touch on what the heck a @Produces method is in CDI later, but for now you can take it for granted that this works. The getProjectsDatabase() method that it calls is a utility method that opens the project DB based on some configuration documents.
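
That method isn't shown here, but a hypothetical sketch of its general shape would be something like the following, with the real one reading the server and file path from those configuration documents:

private lotus.domino.Database getProjectsDatabase() {
	try {
		// Assumption for illustration: reuse the same high-privilege session
		lotus.domino.Session session = getSessionAsSigner();
		return session.getDatabase("", "projects.nsf"); //$NON-NLS-1$ //$NON-NLS-2$
	} catch(lotus.domino.NotesException e) {
		throw new RuntimeException(e);
	}
}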

I'll note with no small amount of pleasure that this bean that provides databases is one of the only two places in the app that actually reference Domino API classes at all, and the other instance is just to convert Notes names. I'm considering ways to remove this need as well, perhaps making it so that this producer only needs to provide a path to the target database and the name of a high-privilege user to act as, and then the driver would do the session creation and DB opening itself.

Next Up

In the next post, I'll most likely talk about my use of CDI to handle the "managed beans" layer. In a lot of ways, that will just be demonstrating the way CDI makes the tasks you'd otherwise accomplish with XPages Managed Beans simpler and more code-focused, but (as the @Produces annotation above implies) there's a lot more to it.

Rewriting The OpenNTF Site With Jakarta EE: REST

Jun 20, 2022, 1:09 PM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

In deciding how to kick off implementation specifics of my new OpenNTF site project, I had a few options, and none of them perfect. I considered starting with the managed beans via CDI, but most of those are actually either UI support beans or interact primarily with other components. I ended up deciding to talk a bit about the REST services in the app, since those are both an extremely-common task to perform in XPages and one where the JEE project runs laps around what you get by default from Domino.

The REST layer is handled by Jakarta REST, which is still primarily called by its old name JAX-RS. JAX-RS has existed in Domino for a good while via the Wink implementation included with the Extension Library, but that's a much-older version. Additionally, that implementation didn't include a lot of convenience features like automatic JSON conversion out of the box. The implementation in the XPages JEE Support project uses RESTEasy, which is one of the primary active implementations and covers the latest versions of the spec.

Example

Though the primary way JAX-RS is actually used in this app is as the backbone for the UI with MVC, that'll be a topic for later. Since I also plan to use this as a way to modernize the IP Management tools I wrote, I'm making some JSON-based services for that.

I have a service that lets me get a list of project releases that haven't yet been approved, as well as an endpoint to mark one as approved. That class looks like this:

package webapp.resources.iptools;

import java.text.MessageFormat;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import jakarta.annotation.security.RolesAllowed;
import jakarta.inject.Inject;
import jakarta.validation.constraints.NotEmpty;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.projects.ProjectRelease;

@Path("iptools/projects")
@RolesAllowed("[IPManager]")
public class IPProjectsResource {
    
    @Inject
    private ProjectRelease.Repository projectReleases;
    
    @GET
    @Path("pendingReleases")
    @Produces(MediaType.APPLICATION_JSON)
    public Map<String, Object> getPendingReleases() {
        return Collections.singletonMap("payload", projectReleases.findPendingReleases().collect(Collectors.toList()));
    }
    
    @POST
    @Path("releases/{documentId}/approve")
    @Produces(MediaType.APPLICATION_JSON)
    public boolean approveRelease(@PathParam("documentId") @NotEmpty String documentId) {
        ProjectRelease release = projectReleases.findById(documentId)
            .orElseThrow(() -> new NotFoundException(MessageFormat.format("Could not find project for UNID {0}", documentId)));
        release.markApprovedForCatalog(true);
        projectReleases.save(release);
        
        return true;
    }
}

We can ignore the ProjectRelease.Repository business, since that's the model objects making use of Jakarta NoSQL - that'll be for later. For now, we can just assume that methods like findPendingReleases and findById do what you might assume based on their names.

The resource as a whole is marked as available at the path iptools/projects. In an NSF, that will resolve to a path on the server like /foo.nsf/xsp/app/iptools/projects. The "app" part there is customizable, though the "xsp" part is unchangeable, at least for now: it's the way the XPages stack notices that it's supposed to handle this URL instead of passing it to the classic Domino web server side.

The @RolesAllowed annotation allows me to restrict use of all the methods in this resource to specific roles or names/globs from the ACL. Though the underlying documents will still be protected by the ACL and reader/author fields, it's still good practice to not make services publicly available unless there's a reason to do so.

Often, a resource class like this will have a method marked with @GET but no @Path annotation, which would match the base URL from the class level. That isn't the case here, though: I may eventually merge these methods into an overall projects API, but for now I'm mirroring the old one I made, which doesn't have that.

JSON Conversion

The getPendingReleases method shows off a nice advantage over the older way I was doing this. In the original app, I had a utility class that used Gson to process arbitrary objects and convert them to JSON. Here, since I'm working on top of the whole JEE framework, I don't have to care about that in the app. I can just return my payload object and know that the scaffolding beneath me will handle the fiddly details of translating it to JSON for the browser, based on the @Produces(MediaType.APPLICATION_JSON) annotation there. It happens to use Jakarta JSON Binding (JSON-B), but I don't have to know that. I can just be confident that it will emit JSON representing the documents in a predictable way.

Entity Manipulation

The approveRelease method is available with a URL like /foo.nsf/xsp/app/iptools/projects/releases/12345678901234567890123456789012/approve. With the UNID from the path, I call projectReleases.findById to find the release document with that ID. That method returns an Optional<ProjectRelease> to cover the case that it doesn't exist - the orElseThrow method of Optional allows me to "unwrap" it when present or otherwise throw a NotFoundException. In turn, that exception (part of JAX-RS) will be translated to an HTTP 404 response with the provided message.

I used a @NotEmpty annotation on the @PathParam parameter here since this would currently also match a URL like /foo.nsf/xsp/app/iptools/projects/releases//approve. While I could check for an empty ID, this is a little cleaner and can provide a better error message to the calling user. That's just another nice way to make use of the underlying stack to get better behavior with less code.

The markApprovedForCatalog method on the model object just handles setting a couple fields:

public void markApprovedForCatalog(boolean approved) {
    if(approved) {
        this.releaseStatus = ReleaseStatus.Yes;
        this.docAuthors = Arrays.asList(ROLE_ADMIN);
    } else {
        this.releaseStatus = ReleaseStatus.No;
    }
}

Then projectReleases.save(release) will store the document in the NSF, throwing an exception in the case of any validation failures. Like with the @NotEmpty parameter annotation above, I don't have to worry about handling that explicitly: Jakarta NoSQL will handle that implicitly for me, since it works with the Bean Validation spec the same way JAX-RS does.
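
For instance, a hedged illustration of the shape that takes on the entity side - a constraint annotation sits right next to the @Column mapping and is enforced when the repository saves the object:

// Enforced by Jakarta NoSQL when the repository saves the entity
@Column("ReleaseNumber")
@NotEmpty(message = "A release must have a version number")
private String version;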

Next Components

Next time I write about this, I figure I'll go over the specific NoSQL entities I've set up and discuss how they handle data access for the app. That will be similar to a number of my recent posts, but I think it'll be helpful to have an example of using that in practice rather than just talking about it hypothetically.

Rewriting The OpenNTF Site With Jakarta EE, Part 1

Jun 19, 2022, 10:13 AM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans
  5. Rewriting The OpenNTF Site With Jakarta EE: UI

The design for the OpenNTF home page has been with us for a little while now and has served us pretty well. It looks good and covers the bases it needs to. However, it's getting a little long in the tooth and, more importantly, doesn't cover some capabilities that we're thinking of adding.

While we could potentially expand the current one, this provides a good opportunity for a clean start. I had actually started taking a swing at this a year and a half ago, taking the tack that I'd make a webapp and deploy it using the Domino Open Liberty Runtime. While that approach would put all technologies on the table, it'd certainly be weirder to future maintainers than an app inside an NSF (at least for now).

So I decided in the past few weeks to pick the project back up and move it into an NSF via the XPages Jakarta EE Support project. I can't say for sure whether I'll actually complete the project, but it'll regardless be a good exercise and has proven to be an excellent way to find needed features to implement.

I figure it'll also be useful to keep something of a travelogue here as I go, making posts periodically about what I've implemented recently.

The UI Toolkit

The original form of this project used MVC and JSP for the UI layer. Now that I was working in an NSF, I could readily use XPages, but for now I've decided to stick with the MVC approach. While it will make me have to solve some problems I wouldn't necessarily have to solve otherwise (like file uploads), it remains an extremely-pleasant way to write applications. I am also not constrained to this: since the vast majority of the logic is in Java beans and controller classes, switching the UI front-end would not be onerous. Also, I could theoretically mix JSP, JSF, XPages, and static HTML together in the app if I end up so inclined.

In the original app (as in this blog), I made use of WebJars to bring in JavaScript dependencies, namely Hotwire Turbo to speed up in-site navigation and use Turbo Frames. Since the NSF app in Designer doesn't have the Maven dependency mechanism the original app did, I just ended up copying the contents of the JAR into WebContent. That gave me a new itch to scratch, though: I'd love to be able to have META-INF/resources files in classpath JARs picked up by the runtime and made available, lowering the number of design elements present in the NSF.

The Data Backend

The primary benefit of this project so far has been forcing me to flesh out the Jakarta NoSQL driver in the JEE support project. I had kind of known hypothetically what features would be useful, but the best way to do this kind of thing is often to work with the tool until you hit a specific problem, and then solve that. So far, it's forced me to:

  • Implement the view support in my previous post
  • Add attachment support for documents, since we'll need to upload and download project releases
  • Improve handling of rich text and MIME, though this also has more room to grow
  • Switch the returned Streams from the driver to lazy loading, meaning that not all documents/entries have to be read if the calling code stops reading the results partway through
  • Add the ability to use custom property types with readers/writers defined in the NSF

Together, these improvements have let me have almost no lotus.domino code in the app. The only parts left are a bean for formatting Notes-style names (which I may want to make a framework service anyway) and a bean for providing access to the various associated databases used by the app. Not too shabby! The app is still tied to Domino by way of using the Domino-specific extensions to JNoSQL, but the programming model is significantly better and the amount of app code was reduced dramatically.

Next Steps

There's a bunch of work to be done. The bulk of it is just implementing things that the current XPages app does: actually uploading projects, all the stuff like discussion lists, and so forth. I'll also want to move the server-side component of the small "IP Tools" suite I use for IP management stuff in here. Currently, that's implemented as Wink-based JAX-RS resources inside an OSGi bundle, but it'll make sense to move it here to keep things consolidated and to make use of the much-better platform capabilities.

As I mentioned above, I can't guarantee that I'll actually finish this project - it's all side work, after all - but it's been useful so far, and it's a further demonstration of how thoroughly pleasant the programming model of the JEE support project is.

Per-NSF-Scoped JWT Authorization With JavaSapi

Jun 4, 2022, 10:35 AM

Tags: domino dsapi java
  1. Poking Around With JavaSapi
  2. Per-NSF-Scoped JWT Authorization With JavaSapi
  3. WebAuthn/Passkey Login With JavaSapi

In the spirit of not leaving well enough alone, I decided the other day to tinker a bit more with JavaSapi, the DSAPI peer tucked away undocumented in Domino. While I still maintain that this is too far from supported for even me to put into production, I think it's valuable to demonstrate the sort of thing that this capability - if made official - would make easy to implement.

JWT

I've talked about JWT a bit before, and it was in a similar context: I wanted to be able to access a third-party API that used JWT to handle authorization, so I wrote a basic library that could work with LS2J. While JWT isn't inherently tied to authorization like this, it's certainly where it's found a tremendous amount of purchase.

JWT has a couple neat characteristics, and the ones that come in handy most frequently are a) that you can enumerate specific "claims" in the token to restrict what the token allows the user to do and b) if you use a symmetric signature key, you can generate legal tokens on the client side without the server having to generate them. "b" there is optional, but makes JWT a handy way to do a quick shared secret between servers to allow for trusted authentication.

It's a larger topic than that, for sure, but that's the quick and dirty of it.
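
To make that concrete, here's a hedged sketch of the generation side using the same auth0 java-jwt library I use for verification below - ISSUER and CLAIM_USER correspond to the constants in that verification code, and the values here are purely illustrative:

String secret = "some-shared-secret"; // whatever the two systems share
try {
	Algorithm algorithm = Algorithm.HMAC256(secret);
	String token = JWT.create()
		.withIssuer(ISSUER)
		.withClaim(CLAIM_USER, "CN=Some User/O=SomeOrg")
		.sign(algorithm);
	// The client then sends this as "Authorization: Bearer " + token
} catch(IllegalArgumentException | UnsupportedEncodingException e) {
	throw new RuntimeException(e);
}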

Mixing It With An NSF

Normally on Domino, you're either authenticated for the whole server or you're not. That's usually fine - if you want to have a restricted account, you can specifically grant it access to only a few NSFs. However, it's good to be able to go more fine-grained, restricting even powerful accounts to only do certain things in some contexts.

So I had the notion to take the JWT capability and mix it with JavaSapi to allow you to do just that. The idea is this:

  1. You make a file resource (hidden from the web) named "jwt.txt" that contains your per-NSF secret.
  2. A remote client makes a request with an Authorization header in the form of Bearer Some.JWT.Here
  3. The JavaSapi interceptor sees this, checks the target NSF, loads the secret, verifies it against the token, and authorizes the user if it's legal

As it turns out, this was actually not that difficult in practice at all.

The main core of the code is:

public int authenticate(IJavaSapiHttpContextAdapter context) {
    IJavaSapiHttpRequestAdapter req = context.getRequest();

    // In the form of "/foo.nsf/bar"
    String uri = req.getRequestURI();
    String secret = getJwtSecret(uri);
    if(StringUtil.isNotEmpty(secret)) {
        try {
            String auth = req.getHeader("Authorization"); //$NON-NLS-1$
            if(StringUtil.isNotEmpty(auth) && auth.startsWith("Bearer ")) { //$NON-NLS-1$
                String token = auth.substring("Bearer ".length()); //$NON-NLS-1$
                Optional<String> user = decodeAuthenticationToken(token, secret);
                if(user.isPresent()) {
                    req.setAuthenticatedUserName(user.get(), "JWT"); //$NON-NLS-1$
                    return HTEXTENSION_REQUEST_AUTHENTICATED;
                }
            }
        } catch(Throwable t) {
            t.printStackTrace();
        }
    }

    return HTEXTENSION_EVENT_DECLINED;
}

To read the JWT secret, I used IBM's NAPI:

private String getJwtSecret(String uri) {
    int nsfIndex = uri.toLowerCase().indexOf(".nsf"); //$NON-NLS-1$
    if(nsfIndex > -1) {
        String nsfPath = uri.substring(1, nsfIndex+4);
        
        try {
            NotesSession session = new NotesSession();
            try {
                if(session.databaseExists(nsfPath)) {
                    // TODO cache lookups and check mod time
                    NotesDatabase database = session.getDatabase(nsfPath);
                    database.open();
                    NotesNote note = FileAccess.getFileByPath(database, SECRET_NAME);
                    if(note != null) {
                        return FileAccess.readFileContentAsString(note);
                    }
                }
            } finally {
                session.recycle();
            }
        } catch(Exception e) {
            e.printStackTrace();
        }
    }
    return null;
}

And then, for the actual JWT handling, I use the auth0 java-jwt library:

public static Optional<String> decodeAuthenticationToken(final String token, final String secret) {
	if(token == null || token.isEmpty()) {
		return Optional.empty();
	}
	
	try {
		Algorithm algorithm = Algorithm.HMAC256(secret);
		JWTVerifier verifier = JWT.require(algorithm)
		        .withIssuer(ISSUER)
		        .build();
		DecodedJWT jwt = verifier.verify(token);
		Claim claim = jwt.getClaim(CLAIM_USER);
		if(claim != null) {
			return Optional.of(claim.asString());
		} else {
			return Optional.empty();
		}
	} catch (IllegalArgumentException | UnsupportedEncodingException e) {
		throw new RuntimeException(e);
	}
}

And, with that in place, it works:

JWT authentication in action

That text is coming from a LotusScript agent - as I mentioned in my original JavaSapi post, this authentication is trusted the same way DSAPI authentication is, and so all elements, classic or XPages, will treat the name as canon.

Because the token is based on the secret specifically from the NSF, using the same token against a different NSF (with no JWT secret or a different one) won't authenticate the user:

JWT ignored by a different endpoint

If we want to be fancy, we can call this scoped access.

This is the sort of thing that makes me want JavaSapi to be officially supported. Custom authentication and request filtering are much, much harder on Domino than on many other app servers, and JavaSapi dramatically reduces the friction.

XPages Jakarta EE 2.5.0 And The Looming Java-Version Wall

May 25, 2022, 2:41 PM

Earlier today, I published version 2.5.0 of the XPages Jakarta EE Support project. It's mostly a consolidation and bug-fix release, but there are a few interesting features and notes about the implementation. Plus, as teased in the post title up there, there's a looming problem for the project.

New Features

There are two main new features in this version.

First, I added some configurable CORS support for REST services. Fortunately for me, RESTEasy comes with a CORS filter by default, and it just needs to be enabled. I wired it up using MicroProfile Config to read some values out of xsp.properties:

rest.cors.enable=true                   # required for CORS
rest.cors.allowCredentials=true         # defaults to true
rest.cors.allowedMethods=GET,HEAD       # defaults to all
rest.cors.allowedHeaders=Some-Header    # defaults to all
rest.cors.exposedHeaders=Some-Header    # optional
rest.cors.maxAge=600                    # optional
# allowedOrigins is required, and can be "*"
rest.cors.allowedOrigins=http://foo.com,http://bar.com
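
For reference, values like these are easy to consult programmatically through the MicroProfile Config API (the Config and ConfigProvider types from org.eclipse.microprofile.config). This is a rough sketch of that kind of lookup, not necessarily verbatim what the filter wiring does:

// Rough sketch: reading the CORS settings via MicroProfile Config
Config config = ConfigProvider.getConfig();
boolean corsEnabled = config.getOptionalValue("rest.cors.enable", Boolean.class).orElse(false);
String[] allowedOrigins = config.getOptionalValue("rest.cors.allowedOrigins", String[].class).orElse(new String[0]);
Integer maxAge = config.getOptionalValue("rest.cors.maxAge", Integer.class).orElse(-1);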

I also added support for using the long-standing @WebServlet annotation. Though REST services will generally do what you want, sometimes it's handy to use the lower-level Servlet capability, and now you can do so inline:

@WebServlet(urlPatterns = { "/someservlet", "/someservlet/*", "*.hello" })
public class ExampleServlet extends HttpServlet {
	private static final long serialVersionUID = 1L;
	
	@Inject
	ApplicationGuy applicationGuy;

	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
		resp.setContentType("text/plain");
		resp.getWriter().println("Hello from ExampleServlet. context=" + req.getContextPath() + ", path=" + req.getServletPath() + ", pathInfo=" + req.getPathInfo());
		resp.getWriter().println("ApplicationGuy: " + applicationGuy.getMessage());
		resp.getWriter().flush();
	}
}

Consolidation

There were a couple specs where I had previously either copied the source into the repository (CDI, Mail) or had maintained a local branch fork (NoSQL). Those were always uncomfortable concessions to reality, but I decided to look further into ways to handle that.

For NoSQL, part of it was what I talked about in my last post: using Eclipse Transformer to make use of javax.* compiled binaries and source converted to jakarta.* automatically. But beyond that, it had the same problem that I had forked Mail for. Namely, it hits the same trouble that lots of non-OSGi code does in an OSGi context, where it uses ServiceLoader in a non-extensible way. Though I have an open PR to make use of the pseudo-standard "HK2" ServiceLoader provider, waiting for that would mean continuing the local-build trouble.
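
The problematic pattern is the stock java.util.ServiceLoader lookup, which only finds providers whose META-INF/services entries are visible to the ClassLoader doing the asking - something that quietly comes up empty across OSGi bundle boundaries. As a generic illustration (SomeSpi is just a stand-in type, not any particular library's class):

// Works on a flat classpath, where META-INF/services/SomeSpi is visible;
// in OSGi, the provider bundle's entry usually isn't, so nothing is found
ServiceLoader<SomeSpi> loader = ServiceLoader.load(SomeSpi.class);
Iterator<SomeSpi> found = loader.iterator();
if(!found.hasNext()) {
    throw new IllegalStateException("No SomeSpi implementation available");
}
SomeSpi impl = found.next();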

Instead, for all of these cases I made use of OSGi's Weaving capability to re-write those parts of the class files on the fly. While this is a bit unfortunate, it works well in practice. The only real down side for now is having to be a bit more careful when bumping the versions in the future, but this type of code changes very rarely.

The Looming Wall

While this has been going swimmingly, I've started to hit some real impediments with Domino's Java version. The next release of Jakarta EE, version 10, requires Java 11 as a minimum. This is similar to the move Equinox (Domino's OSGi framework of choice) made just under two years ago, a move that has itself bitten me by blocking an upgrade of Tycho to version 2.0 and above. Java 11 is about four years old now, and is no longer even the latest LTS release, so this all makes sense.

I've known this was coming for a while, but incompatible versions of JEE specs and implementations started to trickle in over the past year, leading to me leaving notes for myself about maximum versions. JEE 10 itself is fairly imminent now, so I'll be capped at the ones released with JEE 9 a while ago.

So I've been pondering my options here.

In one sense, I solved this problem years ago. The Domino Open Liberty Runtime project has had the ability to download any version of open-source Java that you want, and I expanded it last year to let you pick from several common flavors. Liberty maintains a breathless pace of advancement, adding official support for Java 18 the month after it came out. If one wants to run JEE apps on Domino, that's the most complete way. However, though it does its job technologically well, it's not exactly a natural fit for Domino developers in its current state.

But I've been considering anew a notion I had years ago, which is to write an extension for Liberty so that it reads class files and resources out of an NSF directly. In some early investigation a bit ago, this started to appear quite doable. In theory, I could write an adapter that would take an incoming request for "foo.nsf" and then read files out of the NSF in the same way XPages does, but instead feeding them to Liberty's runtime. Doing this would essentially implement all remaining JEE and MicroProfile specs in one fell swoop on top of the "any Java version" support, but would add the fault-prone attribute of running a separate process and proxying requests to it. In practice, that setup has proven itself good, but it's certainly more complicated than the "single process on port 80" deal that Domino's HTTP is now.

That route also wouldn't inherently support XPages, which would be something of an impediment to the XPages JEE project's original remit. That's something I've also pondered, and in theory I could make an auto-vivifying version of the XPages Runtime project that grabs all the pertinent XPages bundles from the current server and patches them into the Liberty server as an extension feature, similar to how all the built-in Liberty features work. This could be done, but I'll admit that I balk a bit at the prospect. Though I run XPages outside Domino constantly, it's with full knowledge of the tradeoffs and special considerations. Getting a normal NSF-based XPages app to run in this way would take some additional work.

Anyway, those options could work, but none of them are great. The true fix would naturally be for HCL to move to a newer Java version in Domino's HTTP stack, but I don't control that, so I'll content myself with considering what to do in the meantime. Admittedly, pondering this sort of thing is enjoyable in its own right. Also fortunately, even without tackling this, there's still plenty of stuff in the pile for me to tackle as the fancy strikes me.

Putting Eclipse Transformer To Use In Dependency Wrangling

May 24, 2022, 3:46 PM

Tags: jakartaee java

Setting code aside, the backbone of the XPages Jakarta EE Support project is its dependency pool. In it, I use my fork of the p2-maven-plugin to wrangle all the spec and implementation dependencies. Aside from just collecting them, this file does a ton of work to create and reconfigure their OSGi bundle rules to get everything working on Domino.

There have been limitations, though, and some of them have to do with the Jakarta NoSQL project. Though there are side branches of that project using the jakarta.* namespace, the main master branch is still on javax.* for a couple of Jakarta dependencies. Historically, I've dealt with this by running a build locally and deploying it to OpenNTF's Maven server. However, this adds a bit of randomness to the mix: if a snapshot build of NoSQL goes out to the main repository that happens to be newer, then building the dependency repository locally might pick up on that instead, since it's named the same thing.

Transformer

Fortunately, IBM wrote the solution for me: Eclipse Transformer. Transformer is a rules engine that translates files (Java classes and related resources, namely) based on configuration - and, while it's generic, it's really designed for the transition from javax.* to jakarta.* namespaces.

It allows you to do these transformations at runtime or (as I'll be doing here) ahead of time, even if you don't have access to the original source. Though I do have access to the source, it's more useful at the moment to act like I don't.

I'd known about the tool and have seen how it's used heavily by both app servers and implementation vendors to be able to support both old- and new-style uses, and so I've kept it in mind for in case the need ever came up. It's a perfect fit for this.

p2-maven-plugin

I considered a couple ways to handle this, but realized the cleanest for now would be to integrate it into the dependency pool generator that I already have, since it fits right in with the OSGi transformations I'm doing.

So I went on over to the p2-maven-plugin fork and got to work. When defining Maven artifacts to bring in, the format looks like this:

<artifact>
    <id>jakarta.servlet:jakarta.servlet-api:4.0.4</id>
    <source>true</source>
</artifact>

Now, Servlet already has a jakarta.* version, but it'll be useful here as an example that avoids the other transformations I'm doing.

My addition is to add a transform configuration option here, with jakarta as the only value for now:

<artifact>
    <id>jakarta.servlet:jakarta.servlet-api:4.0.4</id>
    <source>true</source>
    <transform>jakarta</transform>
</artifact>

...and that'll be it! When that is specified, the code will now run the artifact and its source JAR transparently through Transformer and the version you get in your p2 repository will reflect the transition. And, well, it works perfectly in my case. The resultant NoSQL spec and dependencies are functionally equivalent to the ones in the jakarta.* source branch, but without having to actually change the source files yet. Neat.

Implementation

Though it took a bit to track down the best way to do it, it turned out that Transformer is quite easy to embed into a Java app like the Maven plugin. The majority of the code ends up being effectively Java boilerplate to provide the default values for Jakarta transformation. Truncated, it looks like this:

String inputFileName = t.getAbsolutePath(); // the artifact in ~/.m2/repository
File dest = File.createTempFile(t.getName(), ".jar"); //$NON-NLS-1$
String outputFileName = dest.getAbsolutePath();

Map<String, String> optionDefaults = JakartaTransform.getOptionDefaults();
Function<String, URL> ruleLoader = JakartaTransform.getRuleLoader();
TransformOptions options = /* build TransformOptions object that reads the above variables */

Transformer transformer = new Transformer(logger, options);
ResultCode result = transformer.run();
switch(result) {
case ARGS_ERROR_RC:
case FILE_TYPE_ERROR_RC:
case RULES_ERROR_RC:
case TRANSFORM_ERROR_RC:
	throw new IllegalStateException("Received unexpected result from transformer: " + result);
case SUCCESS_RC:
default:
	return dest;
}

There are plenty of options to specify, but that's really about it. Once given the Jakarta defaults, it will do the right thing in the normal case, both for the compiled class files as well as the source JAR.

I'm not sure if I'll need it in other cases (NoSQL will move over in the main branch eventually), but it's sure handy here and should be useful in a pinch. From time to time, I've run across dependencies that would be useful to include but use old JEE specs, and this could do the trick in those cases too.

Poking Around With JavaSapi

May 19, 2022, 4:49 PM

Tags: dsapi java
  1. Poking Around With JavaSapi
  2. Per-NSF-Scoped JWT Authorization With JavaSapi
  3. WebAuthn/Passkey Login With JavaSapi

Earlier this morning, Serdar Basegmez and Karsten Lehmann had a chat on Twitter regarding the desire for OAuth on Domino and their recollections of a not-quite-shipped technology from a decade ago going by the name "JSAPI".

Seeing this chat go by reminded me of some stuff I saw when I was researching the Domino HTTP Java entrypoint last year. Specifically, these guys, which have been sitting there since at least 9.0.1:

JavaSapi class files in com.ibm.domino.xsp.bridge.http

I'd made note of them at the time, since there's a lot of tantalizing stuff in there, but had put them back on the shelf when I found that they seemed to be essentially inert at runtime. For all of IBM's engineering virtues (and there are many), they were never very good at cleaning up their half-implemented experiments when it came time to ship, and I figured this was more of the same.

What This Is

Well, first and foremost, it is essentially a non-published experiment: I see no reference to these classes or how to enable them anywhere, and so everything within these packages should be considered essentially radioactive. While they're proving to be quite functional in practice, it's entirely possible - even likely - that the bridge to this side of things is full of memory leaks and potential severe bugs. Journey at your own risk and absolutely don't put this in production. I mean that even more in this case than my usual wink-and-nod "not for production" coyness.

Anyway, this is the stuff Serdar and Karsten were talking about, named "JavaSapi" in practice. It's a Java equivalent to DSAPI, the API you can hook into with native libraries to perform low-level alterations to requests. DSAPI is neat, but it's onerous to use: you have to compile down to a native library, target each architecture you plan to run on, deploy that to each server, and enable it in the web site config. There's a reason not a lot of people use it.

Our new friend JavaSapi here provides the same sorts of capabilities (rewriting URLs, intercepting requests, allowing for arbitrary user authentication (more on this later), and so forth) but in a friendlier environment. It's not just that it's Java, either: JavaSapi runs in the full OSGi environment provided by HTTP, which means it swims in the same pool as XPages and all of your custom libraries. That has implications.

How To Use It

By default, it's just a bunch of classes sitting there, but the hook down to the core level (in libhttpstack.so) remains, and it can be enabled like so:

set config HTTP_ENABLE_JAVASAPI=1

(strings is a useful tool)

Once that's enabled, you should start seeing a line like this on HTTP start:

[01C0:0002-1ADC] 05/19/2022 03:37:17 PM  HTTP Server: JavaSapi Initialized

Now, there's a notable limitation here: the JavaSapi environment isn't intended to be arbitrarily extensible, and it's hard-coded to only know about one service by default. That service is interesting - it's an OAuth 2 provider of undetermined capability - but it's not the subject of this post. The good news is that Java is quite malleable, so it's not too difficult to shim in your own handlers by writing to the services instance variable of the shared JavaSapiEnvironment instance (which you might have to construct if it's not present).

Once you have that hook, it's just a matter of writing a JavaSapiService instance. This abstract class provides fairly-pleasant hooks for the triggers that DSAPI has, and nicely wraps requests and responses in Servlet-alike objects.

Unlike Servlet objects, though, you can set a bunch of stuff on these objects, subject to the same timing and pre-filtering rules you'd have in DSAPI. For example, in the #rawRequest method, you can add or overwrite headers from the incoming request before they get to any other code:

public int rawRequest(IJavaSapiHttpContextAdapter context) {
    context.getRequest().setRequestHeader("Host", "foo-bar.galaxia");
        
    return HTEXTENSION_EVENT_HANDLED;
}

If you want to, you can also handle the entire request outright:

public int rawRequest(IJavaSapiHttpContextAdapter context) {
    if(context.getRequest().getRequestURI().contains("foobar")) {
        context.getResponse().setStatus(299);
        context.getResponse().setHeader("Content-Type", "text/foobar");
        try {
            context.getResponse().getOutputStream().print("hi.");
        } catch (IOException e) {
            e.printStackTrace();
        }
        return HTEXTENSION_REQUEST_PROCESSED;
    }
    
    return HTEXTENSION_EVENT_HANDLED;
}

You probably won't want to, since we're not lacking for options when it comes to responding to web requests in Java, but it's nice to know you can.

You can even respond to tell http foo commands:

public int processConsoleCommand(String[] argv, int argc) {
    if(argc > 0) {
        if("foo".equals(argv[0])) { //$NON-NLS-1$
            System.out.println(getClass().getSimpleName() + " was told " + Arrays.toString(argv));
            return HTEXTENSION_SUCCESS;
        }
    }
    return HTEXTENSION_EVENT_DECLINED;
}

So that's neat.

The fun one, as it usually is, is the #authenticate method. One of the main reasons one might use DSAPI in the first place is to provide your own authentication mechanism. I did it years and years ago, Oracle did it for their middleware, and HCL themselves did it recently for the AppDev Pack's OAuth implementation.

So you can do the same here, like this super-secure implementation:

public int authenticate(IJavaSapiHttpContextAdapter context) {
    context.getRequest().setAuthenticatedUserName("CN=Hello From " + getClass().getName(), getClass().getSimpleName());
    return HTEXTENSION_REQUEST_AUTHENTICATED;
}

The cool thing is that this has the same characteristics as DSAPI: if you declare the request authenticated here, it will be fully trusted by the rest of HTTP. That means not just Java - all the classic stuff will trust it too:

Screenshot showing JavaSapi authentication in action

Conclusion

Again: this stuff is even further from supported than the usual components I muck around in, and you shouldn't trust any of it to work more than you can actively observe. The point here isn't that you should actually use this, but more that it's interesting what things you can find floating around the Domino stack.

Were this to be supported, though, it'd be phenomenally useful. One of Domino's stickiest limitations as an app server is the difficulty of extending its authentication schemes. It's always been possible to do so, but DSAPI is usually prohibitively difficult unless you either have a bunch of time on your hands or a strong financial incentive to use it. With something like this, you could toss Apache Shiro in there as a canonical source of user authentication, or maybe add in Soteria - the Jakarta Security implementation - to get per-app authentication.

There's also that OAuth 2 thing floating around in there, which does have a usable extension point, but I think it's fair to assume that it's unfinished.

This is all fun to tinker with, though, and sometimes that's good enough.

So Why Jakarta?

Apr 28, 2022, 4:10 PM

Tags: jakartaee java
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. Adding Concurrency to the XPages Jakarta EE Support Project
  11. Adding Transactions to the XPages Jakarta EE Support Project
  12. XPages Jakarta EE 2.9.0 and Next Steps

I've spent a lot of time over the last while tinkering with the XPages Jakarta EE Support project in particular and Jakarta technologies in general, and I figured it'd be worth discussing a bit why I like this stack and why I think it's worth putting work into.

There are a couple facets to this, I think. Why is it good on its own? Why is it good as a complement or replacement for XPages? And why is it good compared to the other roads offered for Domino developers?

Quick Aside: Spring and Others

Before I get much further, I should mention early on that this isn't so much about Jakarta as opposed to technologies like Spring. Spring is good! It's similar in concept, both because it started from a JEE-aligned mindset and now because Jakarta and MicroProfile have been adopting a lot of the best concepts. It's kind of a "D&D and Pathfinder" situation. While there are some philosophical differences, and Jakarta is (now) run by an open-source organization as opposed to an individual company, the distinction for our purposes isn't important.

This also goes for some other technologies that could potentially be slotted in for server-based app dev, like Vert.x. Vert.x, for its part, often serves different purposes, and so that discussion is also separate.

Technical Reasons

Going into all the specific things that I think are good about JEE technologies would be quite an ordeal, so I'll stick to summarizing some overarching themes that I appreciate.

Presumably as a sign of my own ever-increasing age, I appreciate the staid nature of many aspects of it. While some of that comes from the near-stagnation the stack suffered from towards the end of Oracle's sole stewardship of it, it's good that things like Servlet have remained consistent in important ways since the very beginning. Some aspects have come and some will soon go, but the main aspects have remained pleasantly consistent because they were designed to be simple and largely adaptable. Servlet has its limitations, but they're limitations that don't generally show up for normal use.

I also quite appreciate how annotation-based most of the specs are. This was a good way of moving away from the original "pile of XML" configuration process of the early versions of Java EE while still retaining introspection abilities. What I mean by that is the ability of programs (like a server or an IDE) to look at a Jakarta app and glean important information without having to actually execute the code. As a point of comparison, take this hypothetical version of a REST server, where you declare endpoints programmatically:

public void initServer(ServerConfig config) {
    config.addHandler("/foo", new FooHandler());
    config.addHandler("/foo/bar", new BarHandler());
}

...and then compare that to the annotation-based way of doing it:

@Path("/foo")
public class FooHandler {
	/* snip */

	@GET
	@Path("bar")
	public Object getBar() { /* ... */ }
}

Both could be functionally the same at runtime, but the latter allows tools to inspect the classes statically to provide summaries and capabilities in the UI in a way that would be technically possible but much more difficult otherwise. This is certainly not unique to Jakarta, but it's an important feature of it nonetheless.

Moreover, I think that the stack is morphing itself nicely into a cleaner, modern form. It's been a rocky process, but a lot of the individual specs are either adapting themselves onto CDI or using it as the baseline. As much as I sang the praises of Servlet in the earlier paragraph, you can write a thoroughly-capable app using CDI and JAX-RS without ever caring about much else beyond a data layer.

This adaptability is also paying off with newer-era work like Quarkus. Quarkus is an intriguing project that combines slices of Jakarta, MicroProfile, and others with the native-compilation capabilities of GraalVM to provide a toolchain that lets you write quite-efficient compiled apps, targeted primarily for Kubernetes deployments where the startup and response time of a single node is very important. This is really solving a lot of problems I don't have, but it's interesting to watch, and to see how these goals feed back into Jakarta with things like CDI Lite.

Jakarta As An XPages Extension Or Successor

XPages was (and is) a fork of a subset of Java EE, with the split happening somewhere just before 2006's Java EE 5. It's a small subset, but you can look down that list of technologies and see a few that remain to this day: Servlet 2.4/2.5, JSF 1.1/1.2 by way of the XPages fork, JavaMail, and a few other miscellaneous packages. DAS and the Extension Library brought in JAX-RS 1.1, so you can add a dash of 2009's Java EE 6 to the pot.

The XPages Jakarta EE Support project started as a mechanism for bringing in a newer JAX-RS version, followed by CDI to replace managed beans and EL 3 to replace XPages's primordial Expression Language support - essentially, as a slowly-growing platform update. In its current form, it brings in a wide slate of technologies, but the fact that it was starting as an extension of an existing ancient Java EE fork made it possible to do this gradually, piece by piece. Really, up until the move to the jakarta.* namespace, it was a process of just glomming compatible parts onto the existing Servlet baseline.

Even after that switch, the historical alignment with the older parts of the stack makes building on it comparatively straightforward. That applies both to me as the person adapting the spec implementations and (hopefully) to a developer actually using them. While XPages predated the annotation-heavy push in JEE as well as CDI entirely, a lot of the core concepts are in common, and I expect that it'd be an easier transition from XPages alone to Jakarta EE than, say, classic Domino web dev to XPages was. It certainly was for me, anyway.

Jakarta As A Cultural Match

This topic covers both my general appreciation for a thoroughly-open-source platform and why I specifically like it in relation to other roads open to Domino developers.

Java EE had for a long time been kind of open source: though Sun and then Oracle held the reins, the specs grew to be free to implement, and over time there flourished a slew of compatible servers, many of which are now or have always been fully open-source.

I like this for a lot of reasons. For one, it's just good as a programmer to have source access. Normally, you can just go by the spec, but having full access to the server's source lets you debug thorny problems when you hit an edge case. While closed-source software certainly has its place, there's just a layer of "all else being equal, source access is better".

Beyond being able to see and debug the source, it's valuable that the platform is open source and the implementations I use are as well, and the whole thing is guided by the extremely-established Eclipse Foundation. While a company handing something over to an open-source organization can sometimes just be a way to usher it to a plausibly-deniable death, the activity around Jakarta EE shows that isn't the case here. While Oracle and IBM still tend to naturally top the charts, it has a diverse pool of contributors, and its fate isn't tied to the interests of a single company. As with closed-source software, sometimes being shepherded by a single company has advantages, but it leaves you more exposed to the winds of their financial incentives.

This all contributes to a platform where I can be comfortable writing a bunch of code with the knowledge that, while it may not be good forever, the path will be at least clear. While there's something of an industry for modernizing old Java EE applications (one that our old friend Niklas Heidloff is involved in), it's a task shared by a lot of companies and, indeed, a lot of the work is "replace old vendor-specific code with standards-based code". While nothing can truly prevent you from having a pile of obsolete code other than not writing it in the first place, following a path like this that's shared with a broad slice of the industry is a good way to mitigate the trouble.

And I think that paragraph is what a lot of it comes down to for me. As much as I can be, I want to be out of the business of writing code that doesn't have a built-in "plan B". If Domino magically stopped existing tomorrow, code written in this way wouldn't necessarily directly work elsewhere, but it'd be a much shorter journey than for the Notes-client and classic Domino web code I wrote in my early career. And, really, some of it would directly work elsewhere. The stuff that's just describing a REST entrypoint or a page layout? The stuff that describes the interaction between those and an abstracted data layer? That code doesn't care what your server is, and there's a whole ecosystem of servers ready to do the job. That, there, is what makes this worthwhile for me.

Intercepting Class Loading in OSGi, A Travelogue

Jan 10, 2022, 10:36 AM

Tags: java osgi xpages

Yesterday, I had a problem. I was trying to get MicroProfile Config working inside an NSF to add to the XPages Jakarta EE project, and I was severely blocked by odd behavior.

To describe that, I'll lightly cover what MP Config is. It's a CDI extension that allows you to annotate properties on a bean to indicate that they're intended to come from an available configuration source - often a .properties file in the project, but it's a pluggable system. Your bean will look like this:

package example;

// ...

@ApplicationScoped
public class ConfigExample {
    @Inject
    @ConfigProperty(name="java.version", defaultValue="unknown")
    private String javaVersion;
    
    /* other methods go here */
}

The idea is that you'll then have a properties file or environment variable to fill in the value, allowing you to separate your configuration from the implementation in a consistent way. Here, I'm making use of the fact that a default provider looks up Java system properties, so I could just get it working before investigating adding providers.
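
For values that aren't system properties, the spec's default file-based source is a META-INF/microprofile-config.properties resource in the app - for example (the property names here are purely hypothetical):

# META-INF/microprofile-config.properties
app.greeting=Hello from MicroProfile Config
app.maxItems=50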

Since I'd already added CDI and a CDI-based extension in the form of MVC, I figured this would be easy.

The Problem

The problem I hit, though, was bizarre. CDI would identify the bean above, but would hit this problem:

org.jboss.weld.exceptions.DeploymentException: WELD-001408: Unsatisfied dependencies for type String with qualifiers @Default
  at injection point [UnbackedAnnotatedField] @Inject private example.ConfigExample.javaVersion
  at example.ConfigExample.javaVersion(ConfigExample.java:0)
WELD-001475: The following beans match by type, but none have matching qualifiers:
  - Producer Method [String] with qualifiers [@Any @ConfigProperty] declared as [[UnbackedAnnotatedMethod] @Dependent @Produces @ConfigProperty protected io.smallrye.config.inject.ConfigProducer.produceStringConfigProperty(InjectionPoint)]

The gist of this is that it noticed that the javaVersion property is supposed to be an injected property, but it had no idea what the source should be. It did know about the MicroProfile provider, which handles @ConfigProperty, but it couldn't put two and two together.

I banged my head against this for a while, and eventually determined that the class as loaded from the NSF is stripped of the @ConfigProperty annotation outright. Other annotations, such as @Inject and even custom annotations, would remain, but not @ConfigProperty. I wrestled with OSGi dependency chains for a while, to no avail.

The Enemy

Eventually, I found the core, and it was an old nemesis of mine. It's this method in com.ibm.xsp.util.ClassLoaderUtil:

ClassLoaderUtil.checkProhibitedClassNames

This method is called by the ClassLoader used in an NSF to ensure that certain classes, by name prefix, cannot be loaded by code coming from an NSF. The last three lines there make a sort of sense: Domino is supposed to be an app container for XPages apps, and ideally it's not a simple process for an app to break out of its container to muck about in the parent environment. Fair enough. The NAPI line is presumably there because IBM wanted to protect developers from themselves, even though Notes devs had been making unauthenticated calls to C APIs for freaking ever.

It's the first two, and specifically the first, that are the source of my trouble. Those prohibitions are presumably meant to isolate XPages apps from the fact that they live in an OSGi world, with the assumption that anything beginning with org.eclipse. refers to something like org.eclipse.core.runtime, the OSGi system bundle.

And this is the issue. MicroProfile is not in any way related to OSGi, but it sure is an Eclipse project. Accordingly, the class name of @ConfigProperty is org.eclipse.microprofile.config.inject.ConfigProperty, and thus cannot be loaded from an NSF.

Attempted Workarounds

So I considered my options.

One was to fork MP Config and rename the packages. That would work, but it would defeat the portability goals of the XPages JEE project, and would also just be a hassle - I've already had to fork a few specs, and each new one adds to the maintenance burden. That remained an option, but it would be a last resort.

My next idea was to wrap around the ModuleClassLoader class used by NSFComponentModule for class loading purposes. This class is blessedly non-final, and so in theory I could look at the instance in a module and swap it out with a replacement. I tinkered with this a bit, but the trouble became the way it's layered, with a DynamicClassLoader private class within - something harder to subclass. In theory, I could reproduce the behavior of it wholesale, but that would be both fragile (if the implementation ever changes) and verging on, if not outright, illegal (it's one thing to be API-compatible, and another to reproduce the internal functionality). After some wrangling, I decided to look elsewhere.

The True Workaround

I realized eventually that I don't really care about ModuleClassLoader as such: it does its job fine, and it's only the response that it gets from ClassLoaderUtil that is the problem. If I could change that, I would be set.

I've used the Javassist project here and there for a long time, ever since its inclusion in ODA for one reason or another. It's a handy toolkit, and notably includes the capability to alter a method implementation on the fly. There's my loophole.

The reason this kind of thing can work is related to how Java handles classes and calls between them. For all intents and purposes, you can consider a method call from one bit of Java to another to be a string-based lookup, saying "find a class named X and a method named Y, and then execute it". The "find" part there is much looser than you might think. It's easy to think of class references like C static linking, but they're really not. When code asks for a class, it asks the context ClassLoader, and that object can do basically whatever the heck it wants to find it, as long as it eventually emits a Class that the runtime can deal with.

Javassist's manipulation makes use of the fact that classes are generally eventually just a bunch of bytes, and you can do whatever you want with a bunch of bytes. Using Javassist, it's fairly simple to, once you have a handle on the class, alter the method. Truncated, that's:

ClassPool pool = /* build a ClassPool that can load the class */;
CtClass cc = /* get the class from the pool */;
cc.defrost();
CtMethod m = cc.getDeclaredMethod("checkProhibitedClassNames");
m.setBody("{ return false; }");
Class<?> result = cc.toClass();

And this works, as far as it goes: I now have a Class version of ClassLoaderUtil that skips the onerous check.

The trouble now was to get this to be actually used by other classes. Generally, once a ClassLoader loads a class, it's difficult to feed it another version unless it's designed to do so: most ClassLoader implementations, including those used here, are designed to read and emit classes by their own rules, not have new data fed into them.

I tried digging through the Eclipse OSGi ModuleClassLoader (distinct from the NSF ModuleClassLoader) for entrypoints and had some initially-promising work with Eclipse's internal ClassLoaderHook type, but eventually determined that this would require more patching than I'd want, if it was possible at all.

I also considered using Java's instrumentation capabilities to intercept class loading, but that would require setting up a special Java agent in the launch parameters, which would be too onerous.

But then I remembered something I had heard about when looking into getting ServiceLoader to work with OSGi: a concept in the OSGi spec called "weaving".

OSGi Weaving

I had noted that this concept existed, but set it aside in large part due to how esoteric it sounds: the term "weaving" makes it sound like it's a way to interact with the threads of fate or something, which is evocative but not something that seems immediately useful.

What it really is, though, is an OSGi-friendly version of the above: when the OSGi runtime goes to load a class from a bundle, it reads the data but then gives any such listeners an opportunity to manipulate the code before it's actually reified into a class. This is how the ServiceLoader mediator does its thing: it looks for ServiceLoader calls during loading and re-"weaves" them to run through OSGi instead.

This was perfect: it provides exactly the hook I want and it does it in a clean, spec-based way, without having to do weird reflection to reassign object properties or anything.

The Implementation

So I went about writing such an implementation. All the pieces are there on Domino, and the mechanism for registering a WeavingHook is something I'd done before in Open Liberty: it's a type of OSGi service that you can register and manage in an Activator class. It's also the sort of thing that would work well with Declarative Services, but Domino doesn't have a DS handler installed and I figured I didn't need to solve that quite yet.

So I wrote a WeavingHook implementation:

public class UtilWeavingHook implements WeavingHook {
    @Override
    public void weave(WovenClass c) {
        if("com.ibm.xsp.util.ClassLoaderUtil".equals(c.getClassName())) {
            ClassPool pool = new ClassPool();
            pool.appendClassPath(new LoaderClassPath(ClassLoader.getSystemClassLoader()));
            CtClass cc;
            try(InputStream is = new ByteArrayInputStream(c.getBytes())) {
                cc = pool.makeClass(is);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            cc.defrost();
            try {
                CtMethod m = cc.getDeclaredMethod("checkProhibitedClassNames");
                m.setBody("{ return false; }");
                c.setBytes(cc.toBytecode());
            } catch(NotFoundException | CannotCompileException | IOException e) {
                new RuntimeException("Encountered exception when weaving ClassLoaderUtil replacement", e).printStackTrace();
            }
        }
    }
}

This builds on the above Javassist usage to now load the class from the byte array provided by OSGi, transform it, and then write the new version back. Since this happens while OSGi is reading the class to begin with, there's never a time when there's an older, less-permissive version of the class running around, as long as I get my service in early enough.

This service is registered in the Activator without too much fuss:

public class JakartaActivator implements BundleActivator {
    private final List<ServiceRegistration<?>> regs = new ArrayList<>();

    @Override
    public void start(BundleContext context) throws Exception {
        regs.add(context.registerService(WeavingHook.class.getName(), new UtilWeavingHook(), null));
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        regs.forEach(ServiceRegistration::unregister);
        regs.clear();
    }
}

The final bit to get right was the "get the service in early enough" aspect. The main task was making sure that this bundle was activated before any XPages apps loaded, and that was a job for my old friend IServiceFactory, which is the extension point that's intended to add handlers for incoming URLs but has the desirable attribute of being initialized right at the very start of HTTP loading.
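
A do-nothing factory is enough for that purpose, since the goal is just to get the bundle (and thus the Activator above) loaded at HTTP start. A minimal sketch, assuming the usual com.ibm.xsp.adapter.serviceFactory extension point registration and the IServiceFactory interface from the XPages adapter API:

import com.ibm.designer.runtime.domino.adapter.HttpService;
import com.ibm.designer.runtime.domino.adapter.IServiceFactory;
import com.ibm.designer.runtime.domino.adapter.LCDEnvironment;

// Contributes no HTTP services; its only job is to be instantiated early,
// which activates the containing bundle and registers the WeavingHook
public class EarlyInitServiceFactory implements IServiceFactory {
    @Override
    public HttpService[] getServices(LCDEnvironment env) {
        return new HttpService[0];
    }
}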

With this in place, I now have a fix automatically applied to that fiendish class on load, and MicroProfile Config (and future MP specs) works like a charm.

Conclusion

This was an arduous one, and I think the FTL victory jingle actually physically played when I got it working. I've hated this restriction for a long time, and I'm glad to finally have a workaround.

It was also enlightening to properly learn about OSGi's weaving capability. As mentioned above, this is what the ServiceLoader bridge does, and I'd tinkered with that at one point, but never got it working. I suspect now that it should be entirely doable to make it work, most likely also involving bringing in an implementation of the Declarative Services OSGi capability. That should be a fun project in its own right.

Moreover, the fact that I now have a system in place to do this weaving on the fly means that I may be able to un-fork some of the specs I had to fork to get working previously, which specifically required altering ServiceLoader calls. Even if I don't get the official service bridge in, perhaps I can use this technique to just alter the parts I need to on the fly, and otherwise use stock implementations from Maven.

But, for now, the way is cleared for further progress, and a bizarre mystery is solved. I call that a good day.

JSP and MVC Support in the XPages JEE Project

Dec 20, 2021, 11:20 AM

Tags: jakartaee java
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. Adding Concurrency to the XPages Jakarta EE Support Project
  11. Adding Transactions to the XPages Jakarta EE Support Project
  12. XPages Jakarta EE 2.9.0 and Next Steps

Over the weekend, I wrapped up the transition to jakarta.* for my XPages JEE Support project and uploaded it to OpenNTF.

With that in the bag, I decided to investigate adding some other things that I had been itching to get working for a while now: JSP and MVC.

JSP? Isn't That, Like, A Billion Years Old?

Okay, first: shut up.

Expanding on that point, it is indeed pretty old - arriving in 1999 - and its early form was pretty bad. It was designed as an answer to things like PHP and ASP and bore all those scars: it used actual Java syntax on the page to control output, looping, conditionals, and the like. It even had special directives to import Java classes for the page! All that stuff is still in there, too, which isn't great.

However, JSP used judiciously - focusing on JSTL tags for control/looping and EL references to CDI beans for data access - is a splendid little thing, and it has the advantage that it remains part of the JEE spec.

Domino flirted with JSP for a long time. It's what Garnet was all about and was part of how OpenNTF got off the ground. IBM did eventually ship the custom tags, and they ship with Domino to this day, sitting in the data/domino/java directory, gathering dust. Domino also inherited JSP from WebSphere as part of XPages... kind of. It has hooks for using JSP files in Expeditor-container webapps, but the implementation is conspicuously missing - present only in Notes, presumably for some sort of Social nonsense reason.

For better or for worse, none of that matters now anyway: it's all crusty and old and, critically, uses javax.*. I had to go a different route.

JSP Implementation

From what I gather, there's basically only one real open-source JSP implementation: Jasper, which is a part of Tomcat. Basically everyone just uses that, and that works well enough. There are various re-bundlings of it to remove the Tomcat dependencies, and I went with the GlassFish one, since it was pretty clean.

Diving into it, there were a few things that were potential and actual problems.

First, JSP files aren't evaluated directly. Instead, they're compiled into Servlet class implementations, either on the fly or ahead of time. This process is basically the same as how XPages work: the JSP is translated into a Java file, which is then compiled into a class, which is then reused by the runtime for subsequent requests. Jasper has a dependency on Eclipse JDT, which worried me: when I looked into this in the past, I found that JDT (at least how it was used for JSP) makes a lot of assumptions about working with the actual filesystem. I lucked out here, though: Jasper actually uses the JavaCompiler API, which is more flexible. The JDT dependency seems like either a vestige of an older version or a fallback option.
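
As a point of reference, the JavaCompiler route looks roughly like the following condensed sketch. This is not Jasper's actual code - just the standard javax.tools API compiling a generated source string, with the class output going to a temp directory for brevity:

import java.net.URI;
import java.nio.file.Files;
import java.util.Arrays;
import javax.tools.*;

public class InMemorySourceCompile {
    // A JavaFileObject backed by a String rather than a file on disk
    static class StringSource extends SimpleJavaFileObject {
        private final String code;
        StringSource(String className, String code) {
            super(URI.create("string:///" + className.replace('.', '/') + Kind.SOURCE.extension), Kind.SOURCE);
            this.code = code;
        }
        @Override
        public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    public static void main(String[] args) throws Exception {
        String source = "public class GeneratedPage { public String render() { return \"hi\"; } }";
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); // null on a bare JRE
        DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<>();
        try(StandardJavaFileManager fileManager = compiler.getStandardFileManager(diagnostics, null, null)) {
            // A fully in-memory file manager could capture the bytes instead of writing to disk
            fileManager.setLocation(StandardLocation.CLASS_OUTPUT, Arrays.asList(Files.createTempDirectory("classes").toFile()));
            JavaCompiler.CompilationTask task = compiler.getTask(null, fileManager, diagnostics, null, null,
                    Arrays.asList(new StringSource("GeneratedPage", source)));
            System.out.println("Compiled: " + task.call());
        }
    }
}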

However, despite the fact that JavaCompiler can work purely in memory, Jasper does do a lot of filesystem-bound work when it comes to loading tag libraries, such as JSTL. I ended up having to deploy a bunch of stuff to the filesystem. Ideally, I'll find a better way around this.

Hooking It Up To Domino

Having a JSP interpreter is one thing, but having it respond to URLs like "http://example.com/foo.nsf/bar.jsp" is another, especially if that should also participate in the XPages class space of the NSF.

I originally considered an HttpService implementation that would accept incoming *.jsp URLs. This could work, but it would be less than ideal: the HttpService, while working in the XPages OSGi layer, wouldn't know about the internal layout of the NSF. I'd have to either reinvent it or wrangle my way over to the active NSFService (the one that runs XPages), find or load the NSF's module, and root around in there. Possible, but not ideal.

Fortunately, I lucked out tremendously: the NSFService class has an addHandledExtensions static method that I can just call to tell it that incoming ".jsp" requests should go to the XPages runtime. This looks like it was added for more Social-nonsense reasons, but I'm happy it's there regardless. Better still, when the runtime finds a URL it was told to handle, it queries IServletFactory implementations like those you can use in an NSF for custom servlets. I already had one in place for JAX-RS, so I made another one (refactored since that commit) for JSP.
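
In practice, that registration is about as small as it sounds - something along these lines, with the exact parameter format being an assumption on my part:

// Tell the XPages runtime's NSFService to route *.jsp requests into NSF modules
NSFService.addHandledExtensions(".jsp");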

Once that was in place (plus some other fiddly details), I got to what I wanted: writing JSPs inside an NSF and having them share the XPages class space:

Screenshot of Designer and a browser showing an in-NSF JSP

Next Up: MVC

Once I had JSP in place (and after some troublesome fiddling with JSF), I decided to take a swing at adding my beloved MVC to the stack.

This had its own complications, this time the inverse of the problem with JSP. While Jasper is a creature of the early 2000s and uses older, less-flexible Java APIs that I had to write around, MVC is the opposite. It's a pure child of the modern, CDI-based world and thus does everything through CDI and ServiceLoaders. However, though I've had CDI support in the project for a long time, actually tying together anything to do with CDI or ServiceLoaders in OSGi is eternally difficult, especially on Domino.

Service Loading

I had to wrangle for this for a while, but I eventually came up with a functional-but-odd workaround: I made use of my own custom ServiceParticipant extension capability that lets me have an object perform pre/post behavior around each JAX-RS request in order to futz with the ClassLoader. I had trouble where the NSF ClassLoader didn't find classes from the MVC implementation, though it should have, so I ended up overriding the ClassLoader to first look explicitly there. It's not pretty, but it works and at least it doesn't require filesystem stuff.
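
The override in question follows a common "look here first" delegation shape - a simplified sketch of the idea, not the project's exact class:

// Checks a preferred loader (e.g. the MVC implementation's) before falling
// back to the normal parent delegation (the NSF's own ClassLoader)
public class PreferredFirstClassLoader extends ClassLoader {
    private final ClassLoader preferred;

    public PreferredFirstClassLoader(ClassLoader preferred, ClassLoader fallback) {
        super(fallback);
        this.preferred = preferred;
    }

    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {
        try {
            return preferred.loadClass(name);
        } catch(ClassNotFoundException e) {
            return super.loadClass(name);
        }
    }
}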

Servlets and Request Dispatchers

Another aspect of being a more-modern child than Jasper is that Krazo makes ready use of Servlet capabilities that have been there for a while but which don't exist on Domino.

For example, Krazo uses a ServletContainerInitializer instance to do pre-research in the app to find classes that should get MVC behavior. Without this scan, MVC won't be applied. This is a Servlet 3.0 feature dating to 2009 and Domino doesn't support it - or any kind of annotation-based classpath scanning, for that matter.

Fortunately, I didn't really need to fully support this concept - I really just needed to make sure this ran whenever the JAX-RS support was being loaded for an NSF. So I made it possible to contribute these via an extension point and added my own scanning implementation to gather the applicable types. Essentially, a backport of this feature to apply in an NSF. With that in place, I was able to register the initializer and have it do its work.

My next hurdle was to do with the way Krazo delegates to JSPs. Specifically, it queries the ServletContext (essentially, the app container) for Servlet registrations that can handle the desired extensions (".jsp" and ".jspx" here) and routes to that using a RequestDispatcher. Well, Domino supports none of this. Trying to get a RequestDispatcher is hard-coded to throw an exception saying "Domino doesn't support this" and the bit about getting ServletRegistrations was new in 3.0. Originally, I stubbed these out, but I decided to give a swing at backporting this as well.

While an NSF doesn't have "Servlet registrations" as such, it does have a list of the aforementioned IServletFactory instances, so I decided to write my own. I wrote a getRequestDispatcher implementation that queries the current module's Servlet factories for a match and, when found, returns a basic implementation. Then, I wrote a custom subtype of IServletFactory to provide additional information and made use of that to emulate the Servlet 3+ behavior, at least well enough to let Krazo do what it needs.

Seeing It Together

Once I figured out all these hurdles, I got to what I wanted: I can make a JAX-RS service in an NSF that acts as an MVC controller:

Screenshot of Designer and a terminal showing an MVC controller in an NSF

Neat! There are still some rough edges to clean, but it's great to see in action.

Conclusion and Next Steps

So why is this good? Well, there's a certain amount of box-checking going on: the more JEE specs I can get going, the better.

But beyond that, this is helping to crystallize some of my thinking about what Domino (web) developers are even supposed to freaking do nowadays. This remains an extremely-vexing problem, but I know the answer isn't XPages as it exists now. Maybe the answer is to move XPages to a better container or maybe it's to add a better container to Domino (or both of those, I guess). This is another option, one that preserves the "just fire up Designer and edit some code" niceties of the XPages experience while gaining better, more modern capabilities. I could see writing an app with this, doing all my work in CDI beans and using JSP as the front end - pure open-source solutions with active developers - all inside the NSF. Is it the real best answer? I don't know. Maybe. It's something, though, and specifically something worth trying.

Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue

Dec 14, 2021, 4:41 PM

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?
  10. Adding Concurrency to the XPages Jakarta EE Support Project
  11. Adding Transactions to the XPages Jakarta EE Support Project
  12. XPages Jakarta EE 2.9.0 and Next Steps

I think it's been a little while since I talked about the XPages Jakarta EE Support project of mine. The goal of that is sort of the inverse of the XPages Runtime project: rather than bringing XPages to a proper modern app server, the JEE Support project brings a handful of current Jakarta EE specs to XPages. It started out a few years ago as a sort of proof-of-concept, but I've since been using it for client work to do things like use newer Jakarta REST (née JAX-RS), CDI, and newer EL in XPages and OSGi bundles.

The Specification Move

Originally, I targeted a set of specifications from Java/Jakarta EE 8. Some of these were new to Domino outright, while some (such as JAX-RS) existed in the XPages stack already but in very old forms. I implemented those and for a good while just used the project as a source of parts for client work, tweaking it here and there as needed.

However, the long-prophesied package-name switch from javax.* to jakarta.* came to fruition in Jakarta EE 9, released a bit over a year ago. In the intervening year, most implementations of the specs made the switch, and the versions I was using started to show their age (for example, I was using RESTEasy 3, which was already old when I adopted it, and it's going to 6 now). Beyond just the philosophical sadness of my project being behind, I started to grow specific needs to upgrade components: we switched to JSON-B a while ago, but then some new bug fixes in Yasson were coming only to post-jakarta.* builds.

The Initial Work

I first gave a shot to this in August, initially planning to move only JSON-P and JSON-B over to the new namespace. However, I quickly hit the limits of that, since a lot of these specs are interdependent. JAX-RS uses JSON-P and JSON-B to emit JSON content, Yasson has some ties to CDI, and so forth. I realized that it was going to have to be all-or-nothing.

So I rolled up my sleeves and assessed the task ahead of me. At a basic level, there was the job of updating my dependencies, which immediately had some good aspects and bad aspects:

  • Previously, I was using a hodgepodge of spec packages like the JBoss bundling of JAX-RS in order to get something that would work and be license-friendly. Now that it was all over at Eclipse, I could switch to the nice, clean official versions and have no license worries.
  • I also used to have all sorts of OSGi rule overrides to account for Domino-specific oddities like ancient versions of various specs being supplied by the default classpath or other, conflicting bundles, all with no versioning. Once I was looking for e.g. jakarta.annotation instead of javax.annotation, I was no longer bound to that particular nightmare.
  • Not all of my dependencies were ready. When I first started, RESTEasy (my JAX-RS provider of choice) had not yet uploaded a JEE-9-compatible version. My main choices were to try using Eclipse Transformer, which would add a whole new layer to the task, or to switch to another provider.

Then there's the elephant in the room: the freaking Servlet API, which much of this depends on. Since the Servlet API is the job of the web container, I can't realistically upgrade it. Fortunately, that's only half true: I can't give it new capabilities (like Web Sockets), but I can wrap the old stuff with the new. And, like the other specs, the switch of the package name was a tremendous blessing, allowing me to deploy the official Servlet 5 API unchanged. Then, I did the tedious work of writing a slew of adapter classes, half wrapping a javax.servlet component and pretending it's jakarta.servlet and half going the other direction. Since the methods are either direct analogs, optional features, or can be emulated, this was actually much easier than I thought it would be. And there: Servlet 5 on Domino! Kind of!
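
For the simpler value types, that adaptation is mostly mechanical copying. As a small illustrative sketch (not the project's actual adapter code), converting a javax-era Cookie to its jakarta equivalent might look like:

// Illustrative only: copy a javax.servlet Cookie into a jakarta.servlet one
public static jakarta.servlet.http.Cookie toJakartaCookie(javax.servlet.http.Cookie cookie) {
    jakarta.servlet.http.Cookie result = new jakarta.servlet.http.Cookie(cookie.getName(), cookie.getValue());
    if(cookie.getDomain() != null) {
        // setDomain doesn't tolerate null, so only copy it when present
        result.setDomain(cookie.getDomain());
    }
    result.setPath(cookie.getPath());
    result.setMaxAge(cookie.getMaxAge());
    result.setSecure(cookie.getSecure());
    result.setComment(cookie.getComment());
    result.setVersion(cookie.getVersion());
    return result;
}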

The Showstopper

However, I soon hit what seemed to be a show-stopper: a LinkageError problem when using CDI that didn't show up previously. My search for the topic found only one hit: an issue in Open Liberty referencing almost exactly the same problem. My heart sank when I read that their fix was to upgrade the Equinox runtime - something that's outside my powers on Domino (probably).

So, disheartened, I set it aside for a couple months. I figured there was a small chance that Weld (the CDI implementation at the heart of the trouble) would put out an update that fixed it - after all, an older version worked.

Resuming Work

After setting it aside, it kept eating away at the back of my mind, and two things kept pushing me to go back to it:

  • I'll need to do it eventually. I (and my client projects) can't just be stuck at the old style forever.
  • I really didn't want to admit defeat and switch back to Gson for JSON processing.

So I went back to it. My initial hope - that a new version of Weld would magically fix the problem - proved to not have come to fruition. Still, though, I wasn't sure that it was the exact same problem Liberty encountered. For one, my use of CDI studiously avoids actually telling it about OSGi, since I've had little luck making use of that with Domino's OSGi stack. That was enough cause to make me think I could work around it.

And work around it I did! The trouble turned out to be, unsurprisingly, a bit esoteric, but boiled down to the runtime re-registering proxy classes for the same core components. My guess is that, somewhere along the line, Weld changed some sort of internal cache in a way that would break when using a bunch of ephemeral per-NSF containers as I do. Since the component in question is an intended extension point, I implemented my own variant of it with a bit of a cache, and I was off to the races.

As a convenient blessing, RESTEasy released 6.0.0.Beta1 just days before I got back to it, a major release targeted at JEE 9. That meant that I could save a ton of work by not having to re-work everything for another JAX-RS implementation. I had been looking into Jersey, which I'm sure would have done the job, but it's fiddly work trying to put all these pieces together on Domino, and I was all the happier to not have to re-do it all.

JavaMail

But then I hit a new problem: the javax.mail API, now jakarta.mail. The first part of this is easy enough: bring in the new spec bundle and everything will point to it. Great! Except I then hit an immediate problem, one I had been dreading dealing with. Though the spec changed package names, the implementation didn't. That brought me face-to-face again with an old nemesis of mine, sitting there in Domino's classpath, corrupting it:

A screenshot of Domino's ndext directory

The way the Mail API works is that there's a file, called "mailcap", that lists implementations for common data types, like:

text/plain;;		x-java-content-handler=com.sun.mail.handlers.text_plain
text/html;;		x-java-content-handler=com.sun.mail.handlers.text_html
text/xml;;		x-java-content-handler=com.sun.mail.handlers.text_xml
multipart/*;;		x-java-content-handler=com.sun.mail.handlers.multipart_mixed; x-java-fallback-entry=true

So, while all the entrypoint classes are jakarta.mail.* now, the implementations remain com.sun.mail.*, all with the same class names. And, since this little jerk of a JAR is sitting in the system classpath, it has a way of showing up all the time, complaining that com.sun.mail.handlers.text_plain is incompatible with jakarta.activation.DataContentHandler.

This is extremely fiddly to deal with, potentially involving writing a special ClassLoader implementation that blocks calls down to the lower-level JAR. While maybe possible, I'm not sure it'd be possible in a way that would be practical for normal use in an app.

And so, with a heavy heart, I forked the thing and added an "org.openntf" in front of all the package names. And that... works! It works just fine. It means that I'm on the hook for manually integrating any upstream changes, but at least it works without having to worry about any conflicts.

That wasn't the end of my trouble with this spec, though. The spec package itself, in jakarta.mail.Session uses ServiceLoader to look for services, and it uses it in the form that looks them up with the current thread's ClassLoader. Because I'm working in OSGi, that ClassLoader - the XPage app's loader - won't know about the implementation classes directly, and this call fails. And, while there's a whole sub-spec in OSGi for dealing with this, I've never had success actually getting it working in Domino.

So I forked that freaking thing too and modified the calls to use its own ClassLoader, which could find the implementation by way of it being a fragment bundle attached to it.
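
In ServiceLoader terms, the change amounts to something like this sketch (using jakarta.mail.Provider as the looked-up service type purely for illustration):

// What the spec code effectively did: ask the current thread's ClassLoader, which in
// an XPages app is the NSF's loader and can't see the implementation fragment
ServiceLoader<Provider> fromApp = ServiceLoader.load(Provider.class,
    Thread.currentThread().getContextClassLoader());

// The forked behavior: ask the spec bundle's own ClassLoader, which can see classes
// contributed by a fragment bundle attached to it
ServiceLoader<Provider> fromSpecBundle = ServiceLoader.load(Provider.class,
    Session.class.getClassLoader());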

And, with that, finally, I had Jakarta Mail properly hooked up and working without having to jump through too many hoops. I'd still prefer to not have forked the source, but it was the best of a bad lot of choices.

The Final Tally

That brings the specs updated/added in this project to:

  • Expression Language 4.0
  • Contexts and Dependency Injection 3.0
    • Annotations 2.0
    • Interceptors 2.0
    • Dependency Injection 2.0
  • RESTful Web Services (JAX-RS) 3.0
  • Bean Validation 3.0
  • JSON Processing 2.0
  • JSON Binding 2.0
  • XML Binding 3.0
  • Mail 2.1
    • Activation 2.1

Not too shabby, if I say so myself. Technically, Servlet 5.0 is in there, but it doesn't actually bring any newer-than-2.4 powers to the Servlet container, so it's really just infrastructural details.

Now I'll just have the work of updating my client project and finally getting to use whatever that Yasson bug fix was that prompted this in the first place.

Java's Shakier Old APIs

Dec 10, 2021, 11:24 AM

Tags: java

In my last post, I sang the praises of InputStream and OutputStream: two classes from Java 1 that, while not perfect, remain tremendously useful and used everywhere.

Then, a tweet by John Curtis got me thinking about the opposite cases: APIs from the early Java days that are still with us, are still used relatively frequently, but which are best avoided or used very sparingly.

There are a handful of APIs from the early days that may or may not still exist, but which aren't regularly encountered in most of our work: the Applet API, for example, was only recently actually removed, and it was clear for a long time that it wasn't something to use. Some other APIs are more insidious, though. They're right there alongside newer counterparts, and they're not marked as @Deprecated, so you just have to kind of magically know why you shouldn't use them.

Old Collections

One of these troublesome holdovers is a "freebie" for Domino developers: java.util.Vector. This is paired with other "first revision" collection classes like Hashtable, classes that predate the Collections Framework in 1.2 and which were retrofitted into it.

These classes aren't incorrect as such: they do what they're supposed to do and function as working implementations of List and Map. The trouble comes in that they're sub-optimal compared to other options. In particular, they're heavily synchronized in a way that hurts performance in the normal case and isn't even really ideal in the complex multi-threaded case.
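
As a quick sketch, the usual replacements look like this (the concurrent variants are for when you genuinely need shared mutable state):

// Unsynchronized modern equivalents for the common single-threaded case
List<String> names = new ArrayList<>();      // instead of new Vector<>()
Map<String, String> cache = new HashMap<>(); // instead of new Hashtable<>()

// When real thread safety is needed, the java.util.concurrent types are the better fit
Map<String, String> sharedCache = new ConcurrentHashMap<>();
List<String> sharedNames = Collections.synchronizedList(new ArrayList<>());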

Unfortunately, since these classes aren't deprecated, an IDE would only warn you about it if it's using some stylistic validation above normal compilation. Such classes are identified best by looking for a warning paragraph like this at the bottom of their Javadoc:

Javadoc 'old class' warning for Hashtable

java.util.Date

The java.util.Date class has a simple concept: represent a point in time. However, it's a neverending font of limitations and caveats:

  • It's essentially a wrapper for a Unix timestamp in milliseconds precision, and doesn't get more precise
  • It's not immutable even though it'd make sense to be. Effective Java includes repeated examples of why this is bad
  • Though it's called "Date", it's always a single timestamp, and can't represent a day in the abstract
  • In Java 1, it also was responsible for parsing date strings, and this functionality remains (though at least deprecated)
  • As mentioned in the prompting tweet, the DateFormat classes that go with this are not thread-safe, even though one could reasonably assume they would be based on their job
  • There's no concept of time zone, though the string representation would lead one to think there might be
  • The related Calendar class is a little more structured, but in a weird way and having a lot of the same limitations

Nonetheless, Date is the obvious go-to for date/time-related operations due to its age and alluring name. And, in fact, it wasn't even until Java 8 that there was a first-party better option. That's when Java basically adopted Joda Time outright and brought it into Java as the java.time package. This system has what's required: the notion of dates and times as separate entities, time zones both as named entities (like "America/New_York") and as plain offsets (like UTC-5:00), full immutability and thread-safety, and tons else.

Unfortunately, it will be a long time for old habits to die and longer for older code to fade away, so we're stuck with Date for a while, even if only to always call #toInstant on it.
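
As a concrete sketch of that bridge and of the java.time types themselves (all of these classes ship with the JDK):

Date legacy = new Date();

// Bridge into the new API and work there instead
Instant instant = legacy.toInstant();
ZonedDateTime local = instant.atZone(ZoneId.of("America/New_York"));

// Dates and times as separate, immutable concepts
LocalDate day = LocalDate.of(2021, 12, 10);
LocalTime time = LocalTime.of(11, 24);
OffsetDateTime offset = day.atTime(time).atOffset(ZoneOffset.ofHours(-5));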

java.io.File

The java.io.File class is kind of similar to Date: it was created in Java 1 as a basic way to work with files on the filesystem. It still does that, and (as far as I know) it's not as outright bad as the above, but it's limited and non-optimal.

In Java 7, the NIO Path API was added, which replaces File in a more-generic and -adaptable way. Whereas File refers specifically to the filesystem, the Path API is adaptable to whatever you'd like while sharing the same semantics. It can also participate in the NIO ecosystem properly.

Much like how Date has a #toInstant method, File has a #toPath method to work with the transition. I make a habit of doing this almost all the time when I'm working with existing code that still uses File. And there is... a lot of this code. Even APIs that can take Path arguments will potentially turn them into Files internally to keep working with their older implementation.
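
That habit is a small one, but it looks like this in practice (a minimal sketch; the paths are just illustrative):

File legacyFile = new File("/foo/bar/baz.txt");  // what the older API hands you
Path path = legacyFile.toPath();                 // bridge into the NIO world

// From here, the Files utility class and Path methods take over
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
Path sibling = path.resolveSibling("other.txt"); // path math without touching the filesystem
File backToFile = sibling.toFile();              // and back again for File-only APIs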

There are also a bunch of related APIs where the replacements exist but aren't quite as straightforward. ZipFile is a perfect example of this: it (and its child class JarFile) has constructors that take either a File or a String representing a file path, and that's alarming. However, the ZIP File System Provider that works with Paths is neat, but it's not as clear of a replacement for ZipFile as Path is for File. That's actually one of the reasons I use ZipInputStream even in a case where ZipFile would also work.
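
For what it's worth, the ZIP File System Provider route looks something like this sketch (the archive path and entry name are just illustrative):

Path zipPath = Paths.get("/foo/bar/baz.zip");
// The JDK's ZIP provider mounts the archive as a normal FileSystem
try(FileSystem zipFs = FileSystems.newFileSystem(zipPath, (ClassLoader) null)) {
    Path entry = zipFs.getPath("META-INF/MANIFEST.MF");
    if(Files.exists(entry)) {
        List<String> manifestLines = Files.readAllLines(entry, StandardCharsets.UTF_8);
    }
}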

Conclusion

I'm sure there are other similar traps around, but those are the main ones I can think of off the top of my head. It's a bit of a shame that Sun/Oracle have been so historically reticent to mark classes wholesale as deprecated. While IDEs and toolchains have gotten better at providing "stylistic" recommendations like this, it's been slow going, and it's not universal. The best thing you can do for now is to just know about the newer alternatives and use them enough that the old kinds immediately read as "code smell" when you come across them.

Generating Archive Files On The Fly In Java

Dec 9, 2021, 10:30 AM

Tags: java liberty

When working on version 3.0 of the Domino Open Liberty Runtime, I had occasion to do something I've done in other situations, but it occurred to me that it'd make a good post on its own. Specifically, part of one of the new features involved creating archive data on the fly, purely in-memory, and that's something that comes in handy quite a bit.

Background: The Task

The task at hand in that project involved the way the runtime will deploy custom extension features for Liberty when creating the server. There are a few of these, all centered around adding integration with Domino in one way or another. For the previously-existing Liberty features, this was done in three parts:

  • The actual Liberty extension code, which is a Java project that produces a Liberty-compatible OSGi bundle.
  • A "subsystem" module, which is a code-less Maven project that uses esa-maven-plugin to embed the above bundle and generate a "SUBSYSTEM.MF" file to describe it. This ESA/subsystem bit is a mechanism for distributing packaged features from the OSGi spec.
  • A "deployment" module, which is a small Java project that provides an extension for the Domino-side runtime to house and deploy the above ESA file.

For 3.0, I wanted to make a feature that would provide Notes.jar and the NAPI to applications. Since those files are proprietary and non-distributable, I couldn't include them in the actual runtime distribution and would instead have to look them up from Domino's environment at runtime. Additionally, since all I wanted to do was provide the existing API and not add any new code, there was no particular need to make a code project like the first one above.

More Background: Java Streams

The way these extensions are registered to Domino is by classes that provide some metadata about the feature and then a method called getEsaData() that returns an InputStream. Though InputStreams aren't the only way to represent arbitrary binary blobs like this, they're used everywhere by virtue of them arriving with Java 1, and they're extremely adaptable.

Basically, the idea of an InputStream is that it's just a mechanism to read a sequence of bytes from somewhere. In Domino terms, they're like NotesStream, but good.

Their utility comes from their simplicity and adaptability. Because the abstract class only deals with reading bytes and a few operations for skipping around, they can be used for all sorts of things. The prototypical use is for reading a file. For example:

Path someFile = Paths.get("/foo/bar/baz.txt");
try(InputStream is = Files.newInputStream(someFile)) {
  // read file data from the stream
}

They're not limited to that, though: the JDK comes with all sorts of InputStream variants like ByteArrayInputStream, which lets you read from a byte[] in memory.

In addition to being arbitrary as to where the bytes are coming from, streams are also very composable. Many types of streams either must or may wrap an existing stream to alter it in some way. One of the more-common cases where you'd do this is when reading ZIP file data. Taking something similar to above:

Path someZipFile = Paths.get("/foo/bar/baz.zip");
try(
  InputStream is = Files.newInputStream(someZipFile);
  ZipInputStream zis = new ZipInputStream(is, StandardCharsets.UTF_8)
) {
  ZipEntry entry = zis.getNextEntry();
  // work with the ZIP entries
}

The thing to note here is that, while this happens to be coming from a ZIP file on disk, it doesn't have to be: that first is could just as easily be a stream coming from HttpURLConnection or a ByteArrayInputStream.

Along with InputStream, Java also has OutputStream. Luckily, OutputStream is similarly simply designed, and has uses that are a direct mirror for everything above: there exist ByteArrayOutputStream, ZipOutputStream, and all sorts of others.

Putting It Together

Back to the original goal, my task was to create a class that would provide an InputStream containing ESA data - that is to say, a ZIP file - to the runtime, which could then deploy it as a Liberty feature. The previous extensions did this by embedding the ESA in their JAR and then returning an InputStream to that. Now, though, I wanted to do it all dynamically.
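
For reference, that pre-existing style amounts to little more than this sketch (the resource path is illustrative):

// Stream a pre-built ESA that's embedded in the extension's own JAR
@Override
public InputStream getEsaData() {
    return getClass().getResourceAsStream("/ext/someFeature.esa"); //$NON-NLS-1$
}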

Now, I talked a big game above about how streams didn't have to have anything to do with files, and it could all be done in memory. That's still all true, but technically here I ended up using files for caching purposes. The above is still good to know, though!

So anyway, my goal was to deliver an InputStream to the runtime that represented an ESA that looks like this:

Contents of a generated ESA file

Of those entries, "corba.jar" is the CORBA API from Maven Central to make Notes.jar work on Java 9+, while "Notes.jar" comes from jvm/lib/ext and "lwpd.commons.jar" and "lwpd.domino.napi.jar" come from the OSGi framework in the running Domino server. The remaining entries - the two "MF" files and the embedded JAR - are composed on the fly.

The starting point here is that I identify a cache location within my working directory based on the current Domino build number, and I assign that path to a variable named out. Then, I open it up as a ZIP to fill with contents, like above:

try(
  OutputStream os = Files.newOutputStream(out, StandardOpenOption.CREATE);
  ZipOutputStream zos = new ZipOutputStream(os, StandardCharsets.UTF_8)
) {
  // work happens here
}

SUBSYSTEM.MF

The next part is to build the "SUBSYSTEM.MF" file. As implied by the extension, this file has the same syntax as "MANIFEST.MF" files, and so I can use the java.util.jar.Manifest class to handle encoding and formatting. I start out by loading a template from the current bundle's resources:

Manifest subsystem;
try(InputStream is = getClass().getResourceAsStream("/subsystem-template.mf")) { //$NON-NLS-1$
  subsystem = new Manifest(is);
}

There, I'm using the constructor from Manifest that reads from an existing stream. Often, that would be reading a "MANIFEST.MF" from an existing JAR, but it'll work with any stream.

Then, I fill it in with some details, with lines like:

Attributes attrs = subsystem.getMainAttributes();
attrs.putValue("Subsystem-Name", getShortName()); //$NON-NLS-1$
String featureName = getShortName() + "-" + getFeatureVersion(); //$NON-NLS-1$
attrs.putValue("IBM-ShortName", featureName); //$NON-NLS-1$
// etc.

Finally, I create an entry in the ZipOutputStream and write the contents. The way ZipOutputStream works is that its "stream-iness" counts towards whatever the most-recently-added entry is.

zos.putNextEntry(new ZipEntry("OSGI-INF/SUBSYSTEM.MF")); //$NON-NLS-1$
subsystem.write(zos);

Embedded Bundle

Alright, so far, so good. Up until now, this is the "normal" case for working with ZIP files, where you make a new entry and pour in some text data. What's neat, though, is that the encapsulation capabilities of these streams can be stacked, which is what comes up next.

Specifically, I wanted to put a ZIP file (the .jar) within this surrounding ZIP (the .esa). The way this is done is by just composing the same tools we've been working with again. Here, esa is what zos was above: the outermost package ZIP contents. I just renamed it in this method for clarity inside the code itself.

// This will be a shell bundle that in turn embeds the API JARs
esa.putNextEntry(new ZipEntry(BUNDLE_NAME + "_" + dominoVersion + ".0.0.jar")); //$NON-NLS-1$ //$NON-NLS-2$
		
// Build the embedded JAR contents
try(ZipOutputStream zos = new ZipOutputStream(esa, StandardCharsets.UTF_8)) {
  Manifest manifest;
  try(InputStream is = getClass().getResourceAsStream("/manifest-template.mf")) { //$NON-NLS-1$
    manifest = new Manifest(is);
  }
  
  // Finish manifest
  // More work here
}

So there, I'm doing basically the same thing as I did originally to make a ZipOutputStream. Since the ZipOutputStream really doesn't care what kind of stream it's writing to, it works just as well when writing to another ZIP stream as when writing to a file - the cascading streams handle their own encoding and it works out in the end.

Once I write the manifest, I can make use of the Files utility class to embed each of the JARs from the filesystem:

for(Path jar : embeds) {
  zos.putNextEntry(new ZipEntry(jar.getFileName().toString()));
  Files.copy(jar, zos);
}

Finally, I download the CORBA JAR on the fly, so for that one I use a utility function to download from the remote URL:

OpenLibertyUtil.download(new URL(URL_CORBA), is -> {
  zos.putNextEntry(new ZipEntry("corba.jar")); //$NON-NLS-1$
  IOUtils.copy(is, zos);
  return null;
});

Here, I use IOUtils from Apache Commons IO because it's not copying from a filesystem path, but the idea is basically the same, and exactly the same as far as the destination ZIP is concerned.

The Final Result

Once this is all written to the cached file on the filesystem, the final result is just to return a stream from it:

return Files.newInputStream(out);

Since the job of this extension class is only to return an InputStream, the consuming code doesn't care that the extension did all this work, as opposed to the other ones that just return a stream of an embedded resource: everything else is the same.

So, all in all, this isn't a groundbreaking new technique, but that's the point: the way these lower-level JDK components work, you get a tremendous amount of flexibility from just a few common parts.

Journeys Debugging Open Liberty and MVC

Nov 30, 2021, 4:40 PM

I mentioned in my last post that I've been tinkering with a modern structure for OpenNTF's web site as a side project. In that, I talked about how I've been going with Jakarta MVC for the front end, but ran into an odd problem with the latest versions in Open Liberty, and that was the impetus to tinkering with ERB.

Well, I decided to go back and take a swing at trying to make JSP work in this case, since it's (still) a good engine for this purpose, and it could be a fun experiment. I was indeed able to do it, and I think the path I took is worth chronicling here.

Context

In previous projects, such as this blog, I've used older versions of the software stack involved - basically, Jakarta 8, which is before the "big bang" switch from the javax.* to jakarta.* package namespace. Since this is a clean new app, I really want to lean to the newest versions across the board, so I pegged my plans to that.

Though Jakarta EE 9 and 9.1 (the Java 11 official version) have been out for a bit, the switchover comes with the sort of turbulence one would expect. Until just this past week, Open Liberty supported JEE 9 only in beta releases - these have historically been plenty stable for me, but it's always asking for trouble. Even with that non-beta version out, I found myself still on the beta track: I'm addicted to using MicroProfile Config, and MicroProfile's move to JEE 9 support is still itself in the RC stage.

So, okay, betas it is.

The Problem

Once I set everything up on JEE 9 and MP 5, I hit this exception when trying to render a JSP via an MVC Controller object:

java.lang.RuntimeException: SRV.8.2: RequestWrapper objects must extend ServletRequestWrapper or HttpServletRequestWrapper
  at com.ibm.wsspi.webcontainer.util.ServletUtil.unwrapRequest(ServletUtil.java:89)
  at [internal classes]
  at org.eclipse.krazo.engine.ServletViewEngine.forwardRequest(ServletViewEngine.java:135)
  at org.eclipse.krazo.engine.JspViewEngine.processView(JspViewEngine.java:58)
  at org.eclipse.krazo.core.ViewableWriter.writeTo(ViewableWriter.java:159)
  at org.eclipse.krazo.core.ViewableWriter.writeTo(ViewableWriter.java:1)
  at org.jboss.resteasy.core.interception.jaxrs.ServerWriterInterceptorContext.lambda$writeTo$1(ServerWriterInterceptorContext.java:79)
  ... 4 more

The short of it is that Krazo (the MVC implementation) passes JSP rendering along to the app container, rather than doing its own JSP work, which makes perfect sense. This, however, hits trouble within Liberty's ServletUtil class, which attempts to "unwrap" the incoming HttpServletRequest object to find the core Liberty-specific object to use extended methods on.

Normally, this sort of thing would work fine: every app server has its own variant of HttpServletRequest for its own uses, and it's perfectly reasonable to do this kind of unwrapping. However, for some reason, this was going awry.

The specific code from Krazo that calls down into Liberty code does do new HttpServletRequestWrapper(request), but that's also legal: the unwrapRequest method is intended specifically to unwrap spec-standard HttpServletRequestWrapper objects like that. So that's not our culprit, and I had to dig deeper.

Investigation

To start my investigation, I knew I'd need to work with the Krazo layer. Fortunately, though many of the moving parts here are baked into the Liberty server, Krazo is not - I include it as a Maven dependency in my app. So I cloned the Krazo source, added it to my workspace, and set my dependency on the SNAPSHOT version, allowing me to do my work inside Krazo's classes.

Context

So what was going on? At first, I thought that maybe something had snuck in a javax.* class somewhere - old code that wasn't fully migrated to JEE 9. That would certainly cause the trouble: javax.servlet.ServletRequestWrapper and jakarta.servlet.ServletRequestWrapper are, to the JVM's eyes, entirely-unrelated classes with no compatibility whatsoever. And, indeed, looking at the source of ServletUtil could give one that impression right away, since the code uses javax.*.

That's not the trouble, however. Though the class is written to javax.*, I gather that it's run through Eclipse Transformer during packaging of the app server, and the actual class that's involved uses jakarta.*. Okay, that's good to know and makes sense, but it also doesn't get us any closer to the root problem.

For my next step, I wanted to figure out what, specifically, it was looking for. The unwrapRequest method takes a Class<?> parameter to find the needed request type, but the stack trace above hid the path it took to get there. By attaching a debugger to the server, I gleaned that it was being called by the unwrapRequest variant above it that looks specifically for a com.ibm.wsspi.webcontainer.servlet.IExtendedRequest.

Okay, so I have the name of the interface it's looking for - I can work with this. My next step was to try to get a programmatic handle on it. The basic approach, when you don't have the class as part of your project, is to look it up via:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest");

That doesn't work here, though: though the app server definitely has that class, it (properly) shields the running application from accessing the class directly, so that the app runtime isn't contaminated by the surrounding server code.

What I needed to do next was find a ClassLoader that does know about it, and the best way to do that is to find a class provided from outside the running app and ask that. Fortunately, the incoming request is exactly that. So:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest", false, request.getClass().getClassLoader());

What that does is ask whatever ClassLoader the request object comes from - that is to say, the container's loader - to find the class. And it worked! Now I could test to verify whether the core request matches the type needed:

Class<?> requestClass = Class.forName("com.ibm.wsspi.webcontainer.servlet.IExtendedRequest", false, request.getClass().getClassLoader());
System.out.println("does it match? " + requestClass.isInstance(request));

As expected, that resolved to false. In this case, that's good, since it'd have been a much-worse problem if it hadn't. But what is the request object, anyway? Well:

System.out.println(request.getClass());
// output: jdk.proxy15.$Proxy68

Right, okay, that makes sense: all sorts of stuff uses proxy objects in Java, not the least of which being the CDI environment running the whole show.

Cracking Open Proxies

So I had some good information at this point: the HttpServletRequest that Krazo is handed is a proxy object, but Liberty has a hard requirement that its dispatcher is given an instance of IExtendedRequest, which this is not. That means that something in the stack is taking the original Liberty request object and making a proxy for it - fair enough, but inconvenient for me.

My next thought was that maybe I could track down the type of proxy object it is and, with that knowledge, get the underlying delegate request. That's a common-enough pattern: have an instance property in your proxy class that contains the delegate, and (if I'm lucky) have it accessible via a getter. Java's java.lang.reflect.Proxy class has a static method for determining the object that actually handles called methods:

System.out.println(Proxy.getInvocationHandler(request));
// output: org.jboss.resteasy.core.ContextParameterInjector$GenericDelegatingProxy@fc04422d

This was starting to come together all the more. Liberty recently switched from Apache CXF to RESTEasy for its JAX-RS implementation, and that could explain why this is trouble now when it wasn't before. More importantly for my immediate needs, that also gave me a lead to track down the proxy class.

However, though it was easy enough to find, my heart sank a bit at what I found: rather than having an easy instance property to get the real request, it uses an object from its container class and in turn asks that for the request. Maybe I'd be able to get to that via reflection, but the prospect of figuring out how to work with nested class contexts caused me to try to look around elsewhere instead.

CDI

Another potential answer came to me in a flash: CDI! Access to the CDI environment is standardized, and maybe I could fetch the original request from there. It'd be extremely likely that it'd just hand me back a similar proxy, but it'd be worth a shot. So here we go:

HttpServletRequest cdiReq = CDI.current().select(HttpServletRequest.class).get();
System.out.println(cdiReq);
// output: com.ibm.ws.webcontainer40.srt.SRTServletRequest40@1a22712c

Oh! Good! That's one of Liberty's internal types! Is it the object I need, though? Well... no. Crap. requestClass.isInstance(cdiReq) is false, so this didn't really get me very far. That's a shame, too, since that solution could have involved no implementation-specific code at all.

Internal Liberty Classes

My next thought was that I should try to find another way to get around to finding the true request object. I looked back through the debug stack to find where it was originally calling unwrapRequest to get a bit more context:

WebContainerRequestState reqState = WebContainerRequestState.getInstance(false);

wasReq = (IExtendedRequest) ServletUtil.unwrapRequest(request);

Okay, so what's this with WebContainerRequestState? That sure smells like an object that's meant to be a request-wide object to get to all sorts of state. If I were to write such a class, I'd use it to stash the incoming request as well as any other incidental data that I wouldn't want to ferret away in a way that might leak into the app. I was a little wary that WebAppRequestDispatcher didn't use it to get the IExtendedRequest, but maybe I'd luck out.

And boy, did I! Looking down the source of the file, I found my mark: public IExtendedRequest getCurrentThreadsIExtendedRequest().

The (Provisional) Solution

Now, I had all the tools I needed. Towards the bottom of Krazo's ServletViewEngine class, I conjured up this reflective incantation:

try {
  Class<?> requestStateClass = Class.forName("com.ibm.wsspi.webcontainer.WebContainerRequestState", false, request.getClass().getClassLoader());
  Method getInstance = requestStateClass.getDeclaredMethod("getInstance", boolean.class);
  Object requestState = getInstance.invoke(null, false);
  Method getCurrentThreadsIExtendedRequest = requestStateClass.getDeclaredMethod("getCurrentThreadsIExtendedRequest");
  request = (HttpServletRequest)getCurrentThreadsIExtendedRequest.invoke(requestState);
} catch (Throwable e1) {
  // Not on WAS
}
rd.forward(new HttpServletRequestWrapper(request), new HttpServletResponseWrapper(response));

So what I'm doing here is breaking into the container's ClassLoader in order to get a handle on WebContainerRequestState. From there, I'm able to call the method to get the current instance, and then in turn call the method to get the IExtendedRequest. I overwrite the request variable we're working with, and then pass that along to the dispatcher. If any of that were to fail, I just throw up my hands, assume I'm not on Liberty, and continue on as before.

And... it works! It actually works! The pages now render properly, with all the niceness of modern JSP at my fingertips! It was fun to toy with the idea of ERB, but I like this better for an otherwise-pure-Java app for sure.

Next Steps

So I have a solution that works for me, but it's so ugly and implementation-specific that I can't exactly be comfortable with it. Still, the trouble comes from an implementation-specific source, so that may be required. Maybe I'll have to leave it like this.

More responsibly, though, what I should do is narrow this down into a reproducible case without all the other moving parts in the app to make sure that it's actually a bug/incompatibility, and thus something that I can report. This is all open-source software, after all, and it'd do nobody any good for me to let a potential actual problem linger. I'll just have to properly identify where the true culprit is first. Is it because Krazo uses RequestDispatcher in a somewhat-unusual way? Is it because RESTEasy is too aggressive about wrapping requests with no proper way to get to the delegate? Is it that Liberty should handle this case better internally? Or maybe it's just some side effect of the other things I have going on. Research is warranted.

In the mean time, that was a fun one. I don't know that I'll have a need for this specific solution again, but it was good to find, and it's always good to get some troubleshooting practice like this for sure.

Writing A Custom ViewEngine For Jakarta MVC

Nov 2, 2021, 2:31 PM

One of the very-long-term side projects I have going on is a rewrite of OpenNTF's site. While the one we have has served us well, we have a lot of ideas about how we want to present projects differently and similar changes to make, so this is as good a time as any to go whole-hog.

The specific hog in question involves an opportunity to use modern Jakarta EE technologies by way of my Domino Open Liberty Runtime project, as I do with my blog here. And that means I can, also like this blog, use the delightful Jakarta MVC Spec.

However, when moving to JEE 9.1, I ran into some trouble with the current Open Liberty beta and its handling of JSP as the view template engine. At some point, I plan to investigate to see if the bug is on my side or in Liberty's (it is beta, in this case), but in the mean time it gave my brain an opportunity to wander: in theory, I could use ERB (a Ruby-based templating engine) for this purpose. I started to look around, and it turns out the basics of such a thing aren't difficult at all, and I figure it's worth documenting this revelation.

MVC ViewEngines

The way the MVC spec works, you have a JAX-RS endpoint that returns a string or is annotated with a view-template name, and that's how the framework determines what page to use to render the request. For example:

@Path("home")
@GET
@Produces(MediaType.TEXT_HTML)
public String get() {
  models.put("recentReleases", projectReleases.getRecentReleases(30)); //$NON-NLS-1$
  models.put("blogEntries", blogEntries.getEntries(5)); //$NON-NLS-1$

  return "home.jsp"; //$NON-NLS-1$
}

Here, the controller method does a little work to load required model data for the page and then hands it off to the view engine, identified here by returning "home.jsp", which is in turn loaded from WEB-INF/views/home.jsp in the app.

In the framework, it looks through instances of ViewEngine to find one that can handle the named page. The default spec implementation ships with a few of these, and JspViewEngine is the one that handles view names ending with .jsp or .jspx. The contract for ViewEngine is pretty simple:

public interface ViewEngine {
  boolean supports(String view);
  void processView(ViewEngineContext context) throws ViewEngineException;
}

So basically, one method to check whether the engine can handle a given view name and another one to actually handle it if it returned true earlier.

Writing A Custom Engine

With this in mind, I set about writing a basic ErbViewEngine to see how practical it'd be. I added JRuby to my dependencies and then made my basic class:

@ApplicationScoped
@Priority(ViewEngine.PRIORITY_APPLICATION)
public class ErbViewEngine extends ViewEngineBase {

  @Inject
  private ServletContext servletContext;

  @Override
  public boolean supports(String view) {
    return String.valueOf(view).endsWith(".erb"); //$NON-NLS-1$
  }

  @Override
  public void processView(ViewEngineContext context) throws ViewEngineException {
    // ...
  }
}

At the top, you see how a custom ViewEngine is registered: it's done by way of making your class a CDI bean in the application scope, and then it's good form to mark it with a @Priority of the application level stored in the interface. Extending ViewEngineBase gets you a handful of utility methods, so you don't have to, for example, hard-code WEB-INF/views into your lookup code. The bit with ServletContext is there because it becomes useful in the implementation below - it's not part of the contractual requirement.

And that's basically the gist of hooking up your custom engine. The devil is in the implementation details, for sure, but that processView is an empty canvas for your work, and you're not responsible for the other fiddly details that may be involved.

First-Pass ERB Implementation

Though the above covers the main concept of this post, I figure it won't hurt to discuss the provisional implementation I have a bit more. There are a couple ways to use JRuby in a Java app, but the way I'm most familiar with is using JSR 223, which is a generic way to access scripting languages in Java. With it, you can populate contextual objects and settings and then execute a script in the target language. The Krazo MVC implementation actually comes with a generic Jsr223ViewEngine that lets you use any such language by extension.

In my case, the task at hand is to read in the ERB template, load up the Ruby interpreter, and then pass it a small script that uses the in-Ruby ERB class to render the final page. This basic implementation looks like this:

@Override
public void processView(ViewEngineContext context) throws ViewEngineException {
  Charset charset = resolveCharsetAndSetContentType(context);

  String res = resolveView(context);
  String template;
  try {
    // From Apache Commons IO
    template = IOUtils.resourceToString(res, StandardCharsets.UTF_8);
  } catch (IOException e) {
    throw new ViewEngineException("Unable to read view", e);
  }

  ScriptEngineManager scriptEngineManager = new ScriptEngineManager();
  ScriptEngine scriptEngine = scriptEngineManager.getEngineByExtension("rb"); //$NON-NLS-1$
  Object responseObject;
  try {
    Bindings bindings = scriptEngine.createBindings();
    bindings.put("models", context.getModels().asMap());
    bindings.put("__template", template);
    responseObject = scriptEngine.eval("require 'erb'\nERB.new(__template).result(binding)", bindings);
  } catch (ScriptException e) {
    throw new ViewEngineException("Unable to execute script", e);
  }

  try (Writer writer = new OutputStreamWriter(context.getOutputStream(), charset)) {
    writer.write(String.valueOf(responseObject));
  } catch (IOException e) {
    throw new ViewEngineException("Unable to write response", e);
  }
}

The resolveCharsetAndSetContentType and resolveView methods come from ViewEngineBase and do basically what their names imply. The rest of the code here reads in the ERB template file and passes it to the script engine. This is extremely similar to the generic JSR 223 implementation, but diverges in that the actual Ruby code is always the same, since it exists just to evaluate the template.

If I continue down this path, I'll put in some time to make this more streamable and to provide access to CDI beans, but it did the job to prove that it's quite doable.

All in all, I found this exactly as pleasant and straightforward as it should be.

Adding Selenium Browser Tests to My Testcontainers Setup

Jul 20, 2021, 11:20 AM

  1. Tinkering With Testcontainers for Domino-based Web Apps
  2. Adding Selenium Browser Tests to My Testcontainers Setup
  3. Building a Full Domino Image for JUnit Tests

Yesterday, I talked about how I dove into Testcontainers for my app-testing needs. Today, I decided to use this to close another bit of long-open business: automated browser testing. I've been very much a dilettante when it comes to that, but we have a handful of browser-ish tests just to make sure the login page, the main page, and some utility pages load up and include expected content, and those can serve as a foundation for much more.

Background

In general, when you think "automated browser testing", that means Selenium. As a toolkit, Selenium has hooks for the browsers you want and has essentially universal support, working smoothly in Java with JUnit. However, the actual act of loading a real browser is miserable, mostly on account of needing you to install the browser and point to it programmatically, which is doable but is another potential system-specific configuration that I'd much, much rather avoid in my automated builds.

Accordingly, and because my needs have been simple, I've used HtmlUnit, which is a portable Java browser-like library that does the yeoman's work of letting you perform basic Selenium tests without having to configure actual native OS installations. It's neat, imposes basically no strictures on your workflow, and I recommend it for lots of uses. Still, it's not the same as real browsers, and I had to do things like disable JavaScript processing to avoid it tripping up on some funky JS that full-grown browsers can deal with.
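
For reference, that HtmlUnit-based setup amounts to something like this sketch (the URL is just illustrative, and the boolean constructor argument is what toggles JavaScript processing):

WebDriver driver = new HtmlUnitDriver(BrowserVersion.FIREFOX, false);
driver.get("http://localhost:8080/app");
assertEquals("Expected App Title", driver.getTitle());
driver.quit();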

Enter Webdriver Containers

So, now that I had Testcontainers configured to run the web app, my eye turned to Webdriver Containers, an ancillary capability of Testcontainers that lets you run these full-fledged browsers via their Docker images, and even has cool abilities like letting you record the screen interactions over VNC. Portability and full production representation? Sign me up.

The initial setup was pretty easy, just adding some dependencies for the Selenium remote driver (replacing my HtmlUnit driver) and the Testcontainers Selenium module:

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-remote-driver</artifactId>
    <version>3.141.59</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>selenium</artifactId>
    <version>1.15.3</version>
    <scope>test</scope>
</dependency>

Programmatic Container Setup

After that, my next task was to configure the containers. I'll skip over some of my troubleshooting and just describe where I ended up. Basically, since both the webapp and browsers are in Docker containers, I had to coordinate how they communicate with each other. There seem to be a few ways to do this, but the route I went was to build a Docker network in my container orchestration class, bind all of the containers to it, and then reference the app via a network alias.

With that addition and some containers for Chrome and Firefox, the class looks more like this:

public enum AppTestContainers {
    instance;
    
    public final Network network = Network.builder()
        .driver("bridge") //$NON-NLS-1$
        .build();
    public final GenericContainer<?> webapp;
    public final BrowserWebDriverContainer<?> chrome;
    public final BrowserWebDriverContainer<?> firefox;
    
    @SuppressWarnings("resource")
    private AppTestContainers() {
        webapp = new GenericContainer<>(DockerImageName.parse("client-webapp-test:1.0.0-SNAPSHOT")) //$NON-NLS-1$
                .withExposedPorts(8080)
                .withNetwork(network)
                .withNetworkAliases("client-webapp-test"); //$NON-NLS-1$
        
        chrome = new BrowserWebDriverContainer<>()
            .withCapabilities(new ChromeOptions())
            .withNetwork(network);
        firefox = new BrowserWebDriverContainer<>()
            .withCapabilities(new FirefoxOptions())
            .withNetwork(network);

        webapp.start();
        chrome.start();
        firefox.start();
        
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            webapp.close();
            chrome.close();
            firefox.close();
            network.close();
        }));
    }
}

Now that they're all on the same Docker network, the browser containers are able to refer to the webapp like "http://client-webapp-test:8080".
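
In test code, that amounts to something like this sketch, grabbing the RemoteWebDriver from one of the browser containers:

// The browser runs inside Docker, so it resolves the webapp by its network alias
RemoteWebDriver driver = AppTestContainers.instance.chrome.getWebDriver();
driver.get("http://client-webapp-test:8080/");
// assertions against driver.getTitle(), page elements, etc. follow from here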

Adding Parameterized Tests

The handful of UI tests I'd set up previously had lines like WebDriver driver = new HtmlUnitDriver(BrowserVersion.FIREFOX, true) to create their WebDriver instance, but now I want to run the tests with both real Firefox and real Chrome. Since I want to test that the app works consistently, I'll want the same tests across browsers - and that's a call for parameterized tests in JUnit.

The way parameterized tests work in JUnit is that you declare a test as being parameterized, and then feed it your parameters via one of a number of mechanisms - "all values of an enum", "this array of strings", and a handful of others. The one to use here is to make a class implementing ArgumentsProvider and configure that:

import java.util.stream.Stream;

import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.ArgumentsProvider;
import org.testcontainers.containers.BrowserWebDriverContainer;

public class BrowserArgumentsProvider implements ArgumentsProvider {
    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context) throws Exception {
        return Stream.of(
            AppTestContainers.instance.chrome,
            AppTestContainers.instance.firefox
        )
        .map(BrowserWebDriverContainer::getWebDriver)
        .map(Arguments::of);
    }
}

This class will take my configured browser containers, get the WebDriver instance for each, and provide that as parameters to a test method. In turn, the test method looks like this:

@ParameterizedTest
@ArgumentsSource(BrowserArgumentsProvider.class)
public void testDefaultLoginPage(WebDriver driver) {
    driver.get(getContainerRootUrl());
    assertEquals("Expected App Title", driver.getTitle());

    // Other tests follow
}

Now, JUnit will run the test twice, once for each browser, and I can add any other configurations I want smoothly.

Minor Gotcha: Container vs. Non-Container URLs

Though some of my tests were using Selenium already, most of them just use the JAX-RS REST client from the testing JVM directly, which is not containerized in this setup. That meant that I had to start worrying about the distinction between the URLs - the containers can't access "localhost:(some random port)", while the JUnit JVM can't access "client-webapp-test:8080".

For the most part, that's not too tough: I added some more utility methods named to suit and changed the UI tests to use those. However, there was one tricky bit: one of the UI tests uses Selenium to fetch the page and process the HTML, but then uses the JAX-RS client to make sure that a bunch of references on the page resolve to non-404 resources properly. Stuff like this:

driver.findElements(By.xpath("//link[@rel='stylesheet']"))
    .stream()
    .map(link -> link.getAttribute("href"))
    .map(href -> rootUri.resolve(href))
    .forEach(uri -> checkUrlWorks(uri, jaxRsClient));

(It's highly likely that there's a better way to do this in Selenium, but hey, it's still a useful example.)

The trouble with the above was that the URLs coming out of Selenium included the full container URL, not the host-accessible one.

Fortunately, that's not too tricky - it's really just string substitution, since the host and container URLs are known at runtime and won't conflict with anything. So I added a "decontainerize" method and run my URLs through it in the stream:

public URI decontainerize(URI uri) {
    String url = uri.toString();
    if(url.startsWith(getContainerRootUrl())) {
        return URI.create(getRootUrl() + url.substring(getContainerRootUrl().length()));
    } else {
        return uri;
    }
}

// later

driver.findElements(By.xpath("//link[@rel='stylesheet']"))
    .stream()
    .map(link -> link.getAttribute("href"))
    .map(href -> rootUri.resolve(href))
    .map(this::decontainerize)
    .forEach(uri -> checkUrlWorks(uri, jaxRsClient));

With that, all the results came back green again.

Overall, this was a little fiddly, but mostly in a way that helped me learn a little bit more about how this sort of thing works, and now I'm prepped to do real, portable full test suites. Neat!

Java Object Proxies (Not the Networking Kind)

Jul 9, 2021, 2:39 PM

Tags: java

Though my previous post was also about proxies, I've had this topic percolating for a while, and it's related mostly by name and very-loose concept. Specifically, I'd like to talk about dynamic proxy classes in Java, which is a mechanism to allow you to create "fake" classes that programmatically intercept method calls.

This is something that I didn't properly realize was possible in Java until diving into CDI (though its mechanism is slightly different). In retrospect (and by "@since" annotation), it's obvious that this has been present for a long time, since Java 1.3, but outside of my realm of experience.

Definition, the Roundabout Way

So, to begin with, I'll have to define what I even mean by this, or at least an example of how it works in practice.

Normally, you just have classes and objects based on them - class Foo gets instantiated via new Foo() and there's a pretty clear direct relationship, "Object-Oriented Programming 101" sort of stuff. Let's add a little indirection by way of our old friend Interfaces. Say you have this setup:

interface Person {
    String getName();
}

class PersonImpl implements Person {
    @Override
    public String getName() {
        return "Joe Schmoe";
    }
}

// ...

Person foo = new PersonImpl();
System.out.println("Hello from " + foo.getName());

This is essentially the same kind of thing that you're doing in the original concept - instantiating an object that you can call methods on - but you're taking a step back. You know here that you're just calling new PersonImpl(), but that's less of a hard requirement: you could instead do Person foo = lookupPerson("Joe Schmoe") and that method could return any implementation of the interface it likes.

So let's do just that, and here's where we see proxies:

class PersonProxy implements InvocationHandler {
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if("getName".equals(method.getName())) {
            return "Proxy-Eyed Joe";
        }
        throw new UnsupportedOperationException("I don't know how to handle " + method);
    }
}

// ...

public Person lookupPerson(String name) {
    return (Person)Proxy.newProxyInstance(Thread.currentThread().getContextClassLoader(), new Class<?>[] { Person.class }, new PersonProxy());
}

For the code calling lookupPerson, this will function largely identically to if you used a normal class, but it's got this weirdness going on beneath. When code calls getName (or any other method), rather than calling a directly-implemented method like in the normal class, it instead calls invoke on this InvocationHandler, which can then handle it any way it'd like. If you've used more-dynamic languages, this may look similar to method_missing in Ruby or invocation forwarding in Objective-C.

Non-Thorough Mention of Other Proxy Types

Before continuing into why you'd want to use this, I think it's important to note that the Proxy object in question above is java.lang.reflect.Proxy, which is a built-in mechanism that ships with the JVM. However, it's both limited and not the only game in town. The main way it's limited is that you can only proxy to interfaces with it, not normal classes. If Person above were class Person instead of interface Person, then the stock Proxy class would be out of luck.

There are other implementations, though - the ones that spring to mind are cglib and Javassist, but I believe there are others. These differ in their implementation (often doing things like bytecode manipulation), capabilities (these generally allow you to proxy to full classes), and performance characteristics. The concepts are largely the same, though, so from now on I'll use "proxying" to refer to the concept generally and not solely to the specific capability that comes with Java.
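
As a taste of those alternatives, here's a minimal sketch using Javassist's ProxyFactory to proxy the concrete PersonImpl class from above (the returned name is just for illustration):

ProxyFactory factory = new ProxyFactory();
factory.setSuperclass(PersonImpl.class);
Person person = (Person) factory.create(new Class<?>[0], new Object[0],
    (self, thisMethod, proceed, args) -> {
        if("getName".equals(thisMethod.getName())) {
            return "Javassist-Eyed Joe";
        }
        // "proceed" is the superclass's original implementation of the called method
        return proceed.invoke(self, args);
    });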

So What's the Use?

Okay, so you can make a proxy object that allows for programmatic handling of method calls. How would this actually be useful in practice?

In my work, I've had a couple cases where I implement proxies for specific behaviors, and they're also extremely common in Jakarta EE and Spring development. Some of these uses can get pretty arcane in concept or implementation, but others will hopefully be simpler to demonstrate.

Example 1: Counting Method Calls

For this case, say you want to count how many times a method is called in practice (and also say that YourKit doesn't exist for some reason). Taking our example InvocationHandler above, we can expand it to keep a running total of calls:

class PersonProxy implements InvocationHandler {
    private final Map<Method, AtomicLong> counter = new HashMap<>();

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        this.counter.computeIfAbsent(method, key -> new AtomicLong()).incrementAndGet();

        if("getName".equals(method.getName())) {
            return "Proxy-Eyed Joe";
        }
        throw new UnsupportedOperationException("I don't know how to handle " + method);
    }
}

(I use AtomicLong here because it's a convenient number holder, but it's also incidentally a step in the direction of making this thread-safe)

Now, as any method is called on your object, you'll get a count of invocations. You can imagine elsewhere having an admin console that lists totals for each object, giving you an idea of where the performance-sensitive parts of your code likely are.

This is, in fact, what MicroProfile Metrics does, albeit in a more-flexible and -complete way than this example. That spec uses annotations to define what you want tracked and how, and then the CDI-based proxy objects can keep track of counts and execution times.
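
In annotation form, that ends up looking something like this sketch on a CDI bean (the metric names here are illustrative):

import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

public class PersonService {
    @Counted(name = "getNameCalls")
    @Timed(name = "getNameTimer")
    public String getName() {
        return "Joe Schmoe";
    }
}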

Example 2: Performance Improvements

For this case, imagine you have a model framework in Domino where you have classes representing back-end documents. The loading of these documents can get pretty expensive, especially if you implemented it in a traditional way where your code loads up all values for the front-end Java class from the document at once. However, say you also have a mechanism to look up these documents in bulk via views, where values from the document might be already indexed and readable faster without having to crack the whole thing open. You might end up with a proxy class like this:

class PersonModelProxy implements InvocationHandler {
    private final ViewEntry viewEntry;
    private PersonModel realDoc;

    public PersonModelProxy(ViewEntry viewEntry) {
        this.viewEntry = viewEntry;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if("getId".equals(method.getName())) {
            // The UNID is already in the view entry, so skip the document load entirely
            return viewEntry.getUniversalID();
        }

        // Anything else falls back to the expensive full-document load, done at most once
        if(this.realDoc == null) {
            this.realDoc = someExpensiveObjectLoad();
        }
        return method.invoke(this.realDoc, args);
    }
}

PersonModel person = fetchModelWithProxy();
String unid = person.getId(); // Fast!
String firstName = person.getFirstName(); // Slow, but could be made fast

Here, because the UNID will already be present in the view entry, we can just return that immediately rather than loading the full backend document. If you expand this to apply to other properties that can come from view columns, you can do efficient batch lookups and tables while not having to have a separate "read from view instead of doc" mechanism on the front end.

This is something I'm doing (using Javassist's proxies) with a client project to put view entries over existing model objects that had accrued over the years. This allows us to keep the same logic while massively speeding up operations that don't actually need the document to be loaded.

Example 3: Repetitive but Predictable Code

If you have behavior that can be reliably derived from some non-algorithmic source (say, from annotations, app configuration, or so forth), proxies can help you write adaptive code once that avoids the need to write tons of boilerplate methods or classes.

As an example, I'll expand a bit on the first notion above, which is profiling. A long time ago, the erstwhile XPages team released the XPages Toolbox, which provides a reasonably-fine-grained view into what takes up the time during an XPages request. It also provides a way for your code to opt into this profiling, but the idiom is extremely repetitive. You can see it in the Extension Library, and it tends to look something like this:

public class SomeBean {
    private static final ProfilerType profilerType = new ProfilerType("SomeBean");

    public void doFoo() {
        if(Profiler.isEnabled()) {
            ProfilerAggregator agg = Profiler.startProfileBlock(profilerType, "doFoo");
            long ts = Profiler.getCurrentTime();
            try {
                _realDoFoo();
            } finally {
                Profiler.endProfileBlock(agg, ts);
            }
        } else {
            _realDoFoo();
        }
    }

    private void _realDoFoo() {
        // Actual code goes here
    }
}

That's certainly explicable, but imagine writing that for all methods, or even for a large chunk of methods you want to optimize. That could be done a little more cleanly now with Java 8+ features, but it'd still be a drag.
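
For illustration, the kind of Java 8 cleanup I mean would be a shared helper that wraps the idiom around a lambda, reusing the same Profiler calls as above - less noisy, but still something you have to remember at every call site:

public static void profiled(ProfilerType type, String blockName, Runnable body) {
    if(Profiler.isEnabled()) {
        ProfilerAggregator agg = Profiler.startProfileBlock(type, blockName);
        long ts = Profiler.getCurrentTime();
        try {
            body.run();
        } finally {
            Profiler.endProfileBlock(agg, ts);
        }
    } else {
        body.run();
    }
}

// At each call site:
profiled(profilerType, "doFoo", this::_realDoFoo);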

If we go back to the idea handled by MicroProfile Metrics, it would instead look more like:

@Profiler(name="SomeBean")
public class SomeBean {
    @Timed(name="doFoo")
    public void doFoo() {
        // Actual code goes here
    }
}

That's a little nicer! In practice, this is really done with CDI, which provides its own nice layer around proxies, but you could see implementing it with IBM's profiler and a proxy like so (forgive the fragility of the code):

class ProfilerProxy implements InvocationHandler {
    private final Map<Class<?>, ProfilerType> profilers = new HashMap<>();
    private final Object obj;

    public ProfilerProxy(Object obj) {
        this.obj = obj;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // The @Profiler annotation lives on the wrapped object's class, not the generated proxy class
        ProfilerType profilerType = profilers.computeIfAbsent(obj.getClass(), c -> new ProfilerType(c.getAnnotation(Profiler.class).name()));

        if(Profiler.isEnabled()) {
            String metricName = method.getAnnotation(Timed.class).name();
            ProfilerAggregator agg = Profiler.startProfileBlock(profilerType, metricName);
            long ts = Profiler.getCurrentTime();
            try {
                return method.invoke(obj, args);
            } finally {
                Profiler.endProfileBlock(agg, ts);
            }
        } else {
            return method.invoke(obj, args);
        }
    }
}

Now you only write the boilerplate once and it'll work on anything, or at least anything you're proxying.

Summary

In general, proxies are the sort of thing you'll only rarely have occasion to write "directly" yourself, but they're immediately useful in the few cases where that matters. They're also vital to know about for our brave new CDI world, since they explain so much of how newer Java standards do what they do - I didn't even go into things like how JNoSQL determines desired behavior just from method names, for example. This one's a deep rabbit hole, but it's extremely useful to know that it's even possible, even before you get around to putting it to use.

Codicil: Performance

Using these proxy objects quickly brings a curious thought to one's mind: what kind of overhead is there? Do I have to worry about performance degradation if there's this extra layer of indirection?

The long answer is that it's complicated. The short answer, though, is that you will almost definitely not have to worry. Certainly, there's going to be some inherent overhead just because more stuff is happening, but in general that extra work is dwarfed by the actual business of your business logic. If you have a method that computes a complicated value, or fetches something from a database, or (lord help your profiler) makes a remote network call, the overhead of the proxy is going to be several orders of magnitude smaller than the action being performed.

That said, it doesn't necessarily hurt to keep the idea of this impact in mind. While very little in a web application will be harmed in any perceptible way by proxying, there's always a chance that you'll run into a situation where a simple method is called thousands and thousands of times more often than more-complex ones, and that's where you may want to either move it to a non-proxied object or comb through the latest performance metrics for different proxy libraries. As with most optimization, though, that's something to do after you've found that there's a performance problem, not (usually) before.

My Tortured Relationship With libnotes

May 22, 2021, 12:31 PM

Tags: c java

A tremendous amount of my work lately involves wrangling the core Notes library, variously "libnotes.dylib", "libnotes.so", or (for some reason) "nnotes.dll" (edit: see the comments!). I do almost all of my daily work in Liberty servers on the Mac loading this library, my first major use of the Domino Docker image was to use it as an overgrown carrier for this native piece, and in general I spend a lot of time putting this to use.

It ain't easy, though! Unlike a lot of native libraries that you can just kind of load from wherever, libnotes is extremely picky about its loading environment. There are a few main ways that this manifests.

Required-Library References

libnotes isn't standalone, and it doesn't just rely on standard OS stuff like libc. It also relies on other Notes-specific libraries like libxmlproc and libgsk8iccs, and some of those refer back to each other. This all shakes out in normal practice, since they're all next to the Notes/Domino executable and each other, but it makes things finicky when you're running from outside there.

This seems to be the most finicky on macOS, where the references to each library are marked with @executable_path relative paths, meaning relative to the running executable.

I've wrangled with this quite a bit over the years, but the upshot is that, on macOS and Linux, you really want to set some environment variables before you ever load your program. Naturally, since your running program can't do anything about what happened before it was loaded, this means you have to balance the teacups in your environment beforehand.
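
As a rough sketch of what I mean - the variable names are real, but the paths are illustrative and the exact set you need varies by platform and install - a tiny launcher process can set up the environment and then spawn the real program:

import java.util.Map;

public class NotesLauncher {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("java", "-jar", "myapp.jar");
        Map<String, String> env = pb.environment();
        // Point the dynamic linker at the Notes/Domino program directory so libnotes's
        // sibling libraries resolve; on Linux you'd set LD_LIBRARY_PATH instead
        env.put("DYLD_LIBRARY_PATH", "/Applications/HCL Notes.app/Contents/MacOS");
        pb.inheritIO();
        System.exit(pb.start().waitFor());
    }
}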

Non-Library Files

Beyond the dynamic libraries, libnotes also needs some program-support files alongside the executables, things like string resource files and whatnot. And, very importantly, it needs an active data directory with an ID, notes.ini, and names.nsf at least. The data-directory contents are at least a little more forgiving: though there's still a hard requirement that they be present on the filesystem (since libnotes loads them by string path, not as configurable binary streams), you could at least bundle them or copy them around. For example, for my Dockerized app runners, I tend to have some basic versions of those in the repo that get copied into the Docker container's notesdata directory during build.

Working Around It

As I mentioned, the main way I go about dealing with this is by telling whatever is running my program to use applicable environment variables, ideally in a reproducible config-file-based way. This works perfectly with Docker and with tycho-surefire-plugin in Maven, but doesn't work so well for maven-surefire-plugin (the normal unit test runner) or Eclipse's JUnit tools. In Eclipse's case, I can at least fill in the environment variables in the Run Configuration. It hampers me a bit (I can't just right-click a test case and run it individually without first running the whole suite or setting up a configuration specially), but it works.

I gave a shot recently to copying the Mac dylibs and programmatically fiddling with otool and install_name_tool to adjust their dependency paths, and I got somewhere with that, but that somewhere was an in-libnotes fatal panic, so I'm clearly missing something. And besides, even if I got that working, it'd be a bit of a drag.

What Would Be Nice

What would be really nice would be a variant of this that's just more portable - something where I can just System.load a library, call NotesInitExtended to point to my INI and ID, and be good to go. I'm not really sure what this would entail, especially since I'd also want libinotes, libjnotes, and liblsxbe. I do know that I don't have the tools to do it, which fortunately frees me up to idly speculate about things I'd like to have delivered to me.

As long as I'm wishing for stuff, I'll say what would be even cooler would be a WebAssembly library usable with Wasmer, which is a multi-language WebAssembly runtime. The promise there is that you'd have a single compiled library that would run on any OS on any processor that supports WebAssembly, from now until forever. I'm not sure that this would actually be doable at the moment - for one, I don't know if callback parameters work, which would be fairly critical. Still, my birthday is coming up, and that would be a nice present.

More Notes on Filesystem and Charset Portability

May 18, 2021, 3:39 PM

Tags: java
  1. Java Hiccups
  2. Bitwise Operators
  3. Java Grab Bag 2
  4. Java Travelogue: The Care and Feeding of Locales
  5. More Notes on Filesystem and Charset Portability

A few months back, I talked about some localization troubles in the NSF ODP Tooling and how it's important to be explicit in your handling of this sort of thing to make sure your code will work in an environment that isn't specifically "Linux or macOS in an en-US environment".

Well, after making a bunch of little tweaks over the last few days, I have two additional tips in this arena! Specifically, my foes this round came from three sources: Windows, my use of a ZIP file filesystem, and the old reliable charset.

Path Separators

The first bit of trouble had to do with how those two things interact. For a long time, I've been in the (commonly-held) habit of using File.separator and File.separatorChar to get the default path separator for the system - that is, \ on Windows and / on most other platforms. Those work well enough - no real trouble there.

However, my problem came from using the Java NIO ZIP filesystem on Windows. Take this bit of code:

public static String toJavaClassName(Path path) {
	String name = path.toString();
	if(name.endsWith(".java")) {
		return name.substring(0, name.length()-".java".length()).replace(File.separatorChar, '.');
	}
	/* Other conditions here */
}	

When Path is a path on the local filesystem, that works just fine, taking a path like "com/example/Foo.java" and turning it into "com.example.Foo". It also works splendidly on macOS and Linux in all cases, the two systems I actually use. However, when path represents a path within a ZIP file and you're working on Windows, it fails, returning a "class name" like "com/example/Foo".

This is exactly what happens when compiling an ODP using a remote Domino server running on Windows. For the portability reasons mentioned in my previous post, the client sends a ZIP of the ODP to the server and then the compilation pulls directly out of that ZIP instead of writing it out to the filesystem. The way the ZIP filesystem driver in Java is written, it uses / for its path separator on all platforms, which is consistent with dealing with ZIP files generally. But, when mixed with the native filesystem separator, that line resolved to:

return "com/example/Foo".replace('\\', '.');

...and there's the problem. The fix is to change the code to instead get the directory separator from the contextual filesystem in question:

public static String toJavaClassName(Path path) {
	String name = path.toString();
	if(name.endsWith(".java")) {
		return name.substring(0, name.length()-".java".length()).replace(path.getFileSystem().getSeparator(), ".");
	}
	/* Other conditions here */
}

A little more verbose, sure, but it has the advantage of functioning consistently in all environments.

This also has significant implications if you use static properties to store filesystem-dependent elements. This came into play in my OnDiskProject class, which contains a bunch of path matchers to find design elements to import from the ODP. Originally, I kept these in a static property that was generated by writing them Unix-style, then running them through a generator to use the platform-native separator character. This had to change, since the actual ODP store may or may not be the platform-native filesystem. This sort of thing is pervasive, and it'll take me a bit to get over my long-standing habit.
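
For example, instead of a static matcher built around File.separator, the matcher can be derived from whichever filesystem the ODP actually lives on (the method name here is just illustrative):

import java.nio.file.FileSystem;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

public static PathMatcher xspMatcher(Path odpRoot) {
    // The glob syntax handles separators itself, and the matcher is bound to whatever
    // filesystem the ODP is on - a local directory and a ZIP filesystem alike
    FileSystem fs = odpRoot.getFileSystem();
    return fs.getPathMatcher("glob:**.xsp");
}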

Over-Interpreting Character Sets

This one is similar to the charset troubles in my previous post, but ran into subtle trouble in the ODP compiler. Here was the sequence of events:

  1. The ODP compiler reads the XSP source of a page or custom control using ODPUtil, which reads in the string as UTF-8
  2. It then passes that string to the Bazaar's DynamicXPageBean
  3. That method uses StringReader and an IBM Commons ReaderInputStream to read the content
  4. That content is then read in by FacesReader, which uses the default DOM parser to read the XML

In general, that flow worked just fine - but only because I generally write US-ASCII markup. When a page contains, say, Czech diacritics, it goes off the rails: somewhere in the interpretation and re-interpretation of the file, the UTF-8-iness of it breaks.

Fortunately, this one was a clean one: XML has its own mechanism for declaring its encoding (and it's almost always UTF-8 anyway), so my code doesn't actually need to be responsible for interpreting the bytes of the file before it gets to the DOM parser. So I added a version of the Bazaar method that takes an InputStream directly and modified NSF ODP to use it, with no extra interpretation in between.
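
In sketch form (the method and variable names here are hypothetical, not the actual Bazaar API), the shape of the fix is to hand the raw bytes to the XML parser and let it honor the encoding declared in the document itself:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

public static Document parseXsp(Path xspFile) throws Exception {
    try(InputStream is = Files.newInputStream(xspFile)) {
        // The parser reads the encoding from the XML prolog (almost always UTF-8),
        // so no intermediate String decoding can mangle the bytes along the way
        return DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(is);
    }
}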

Implementing Custom Token-Based Auth on Liberty With Domino

Apr 24, 2021, 12:31 PM

This weekend, I decided to embark on a small personal side project: implementing an RSS sync server I can use with NetNewsWire. It's the delightful sort of side project where the stakes are low and so I feel no pressure to actually complete it (I already have what I want with iCloud-based syncing), but it's a great learning exercise.

Fair warning: this post is essentially a travelogue of not-currently-public code for an incomplete side app of mine, and not necessarily useful as a tutorial. I may make a proper example project out of these ideas one day, but for the moment I'm just excited about how smoothly this process has gone.

The Idea

NetNewsWire syncs with a number of services, and one of them is FreshRSS, a self-hosted sync tool that uses PHP backed by an RDBMS. The implementation doesn't matter, though: what matters is that that means that NNW has the ability to point at any server at an arbitrary URL implementing the same protocol.

As for the protocol itself, it turns out it's just the old Google Reader protocol. Like Rome, Reader rose, transformed the entire RSS ecosystem, and then crumbled, leaving its monuments across the landscape like scars. Many RSS sync services have stuck with that language ever since - it's a bit gangly, but it does the job fine, and it lowers the implementation toll on the clients.

So I figured I could find some adequate documentation and make a little webapp implementing it.

Authentication

My starting point (and all I've done so far) was to get authentication working. These servers mimic the (I assume antiquated) Google ClientLogin endpoint, where you POST "Email" and "Passwd" and get back a token in a weird little properties-ish format:

POST /accounts/ClientLogin HTTP/1.1
Content-Type: application/x-www-form-urlencoded

Email=ffooson&Passwd=secretpassword

Followed by:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

SID=null
LSID=null
Auth=somename/8e6845e089457af25303abc6f53356eb60bdb5f8

The format of the "Auth" token doesn't matter, I gather. I originally saw it in that "name/token" pattern, but other cases are just a token. That makes sense, since there's no need for the client to parse it - it just needs to send it back. In practice, it shouldn't have any "=" in it, since NNW parses the format expecting only one "=", but otherwise it should be up to you. Specifically, it will send it along in future requests as the Authorization header:

GET /reader/api/0/stream/items/ids?n=1000&output=json&s=user/-/state/com.google/starred HTTP/1.1
Authorization: GoogleLogin auth=somename/8e6845e089457af25303abc6f53356eb60bdb5f8

This is pretty standard stuff for any number of authentication schemes: often it'll start with "Bearer" instead of "GoogleLogin", but the idea is the same.

Implementing This

So how would one go about implementing this? Well, fortunately, the Jakarta EE spec includes a Security API that allows you to abstract the specifics of how the container authenticates a user, providing custom user identity stores and authentication mechanisms instead of or in addition to the ones provided by the container itself. This is as distinct from a container like Domino, where the HTTP stack handles authentication for all apps, and the only way to extend how that works is by writing a native library with the C-based DSAPI. Possible, but cumbersome.

Identity Store

We'll start with the identity store. Often, a container will be configured with its own concept of what the pool of users is and how they can be authenticated. On Domino, that's generally the names.nsf plus anything configured in a Directory Assistance database. On Liberty or another JEE container, that might be a static user list, an LDAP server, or any number of other options. With the Security API, you can implement your own. I've been ferrying around classes that look like this for a couple of years now:

/* snip */

import javax.security.enterprise.credential.Credential;
import javax.security.enterprise.credential.UsernamePasswordCredential;
import javax.security.enterprise.identitystore.CredentialValidationResult;
import javax.security.enterprise.identitystore.IdentityStore;

@ApplicationScoped
public class NotesDirectoryIdentityStore implements IdentityStore {
    @Inject AppConfig appConfig;

    @Override public int priority() { return 70; }
    @Override public Set<ValidationType> validationTypes() { return DEFAULT_VALIDATION_TYPES; }

    public CredentialValidationResult validate(UsernamePasswordCredential credential) {
        try {
            try(DominoClient client = DominoClientBuilder.newDominoClient().build()) {
                String dn = client.validateCredentials(appConfig.getAuthServer(), credential.getCaller(), credential.getPasswordAsString());
                return new CredentialValidationResult(null, dn, dn, dn, getGroups(dn));
            }
        } catch (NameNotFoundException e) {
            return CredentialValidationResult.NOT_VALIDATED_RESULT;
        } catch (AuthenticationException | AuthenticationNotSupportedException e) {
            return CredentialValidationResult.INVALID_RESULT;
        }
    }

    @Override
    public Set<String> getCallerGroups(CredentialValidationResult validationResult) {
        String dn = validationResult.getCallerDn();
        return getGroups(dn);
    }

    /* snip */
}

There's a lot going on here. To start with, the Security API goes hand-in-hand with CDI. That @ApplicationScoped annotation on the class means that this IdentityStore is an app-wide bean - Liberty picks up on that and registers it as a provider for authentication. The AppConfig is another CDI bean, this one housing the Domino server I want to authenticate against if not the local runtime (handy for development).

The IdentityStore interface definition does a little magic for identifying how to authenticate. The way it works is that the system uses objects that implement Credential, an extremely-generic interface to represent any sort of credential. When the default implementation is called, it looks through your implementation class for any methods that can handle the specific credential class that came in. You can see above that validate(UsernamePasswordCredential credential) isn't tagged with @Override - that's because it's not implementing an existing method. Instead, the core validate looks for other methods named validate to take the incoming class. UsernamePasswordCredential is one of the few stock ones that comes with the API and is how the container will likely ask for authentication if using e.g. HTTP Basic auth.

Here, I use some Domino API to check the username+password combination against the Domino directory and inform the caller whether the credentials match and, if so, what the user's distinguished name and group memberships are (with some implementation removed for clarity).

Token Authentication

That's all well and good, and will allow a user to log into the app with HTTP Basic authentication with a Domino username and password, but I'd also like the aforementioned GoogleLogin tokens to count as "real" users in the system.

To start doing that, I created a JAX-RS resource for the expected login URL:

@Path("accounts")
public class AccountsResource {
    @Inject TokenBean tokens;
    @Inject IdentityStore identityStore;

    @PermitAll
    @Path("ClientLogin")
    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    @Produces(MediaType.TEXT_HTML)
    public String post(@FormParam("Email") @NotEmpty String email, @FormParam("Passwd") String password) {
        CredentialValidationResult result = identityStore.validate(new UsernamePasswordCredential(email, password));
        switch(result.getStatus()) {
        case VALID:
            Token token = tokens.createToken(result.getCallerDn());
            String mangledDn = result.getCallerDn().replace('=', '_').replace('/', '_');
            return MessageFormat.format("SID=null\nLSID=null\nAuth={0}\n", mangledDn + "/" + token.token()); //$NON-NLS-1$ //$NON-NLS-2$
        default:
            // TODO find a better exception
            throw new RuntimeException("Invalid credentials");
        }
    }

}

Here, I make use of the IdentityStore implementation above to check the incoming username/password pair. Since I can @Inject it based on just the interface, the fact that it's authenticating against Domino isn't relevant, and this class can remain blissfully unaware of the actual user directory. All it needs to know is whether the credentials are good. In any event, if they are, it returns the weird little format in the response and the RSS client can then use it in the future.

The TokenBean class there is another custom CDI bean, and its job is to create and look up tokens in the storage NSF. The pertinent part is:

@ApplicationScoped
public class TokenBean {
    @Inject @AdminUser
    Database adminDatabase;

    public Token createToken(String userName) {
        Token token = new Token(UUID.randomUUID().toString().replace("-", ""), userName); //$NON-NLS-1$ //$NON-NLS-2$
        adminDatabase.createDocument()
            .replaceItemValue("Form", "Token") //$NON-NLS-1$ //$NON-NLS-2$
            .replaceItemValue("Token", token.token()) //$NON-NLS-1$
            .replaceItemValue("User", token.user()) //$NON-NLS-1$
            .save();
        return token;
    }

    /* snip */
}

Nothing too special there: it just creates a random token string value and saves it in a document. The token could be anything; I could have easily gone with the document's UNID, since it's basically the same sort of value.

I'll save the @Inject @AdminUser bit for another day, since we're already far enough into the CDI weeds here. Suffice it to say, it injects a Database object for the backing data DB for the designated admin user - basically, like opening the current DB with sessionAsSigner in XPages. The @AdminUser is a custom annotation in the app to convey this meaning.

Okay, so great, now we have a way for a client to log in with a username and password and get a token to then use in the future. That leaves the next step: having the app accept the token as an equivalent authentication for the user.

Intercepting the incoming request and analyzing the token is done via another Jakarta Security API interface: HttpAuthenticationMechanism. Creating a bean of this type allows you to look at an incoming request, see if it's part of your custom authentication, and handle it any way you want. In mine, I look for the "GoogleLogin" authorization header:

@ApplicationScoped
public class TokenAuthentication implements HttpAuthenticationMechanism {
    @Inject IdentityStore identityStore;
    
    @Override
    public AuthenticationStatus validateRequest(HttpServletRequest request, HttpServletResponse response,
            HttpMessageContext httpMessageContext) throws AuthenticationException {
        
        String authHeader = request.getHeader("Authorization"); //$NON-NLS-1$
        if(StringUtil.isNotEmpty(authHeader) && authHeader.startsWith(GoogleAccountTokenHandler.AUTH_PREFIX)) {
            CredentialValidationResult result = identityStore.validate(new GoogleAccountTokenHeaderCredential(authHeader));
            switch(result.getStatus()) {
            case VALID:
                httpMessageContext.notifyContainerAboutLogin(result);
                return AuthenticationStatus.SUCCESS;
            default:
                return AuthenticationStatus.SEND_FAILURE;
            }
        }
        
        return AuthenticationStatus.NOT_DONE;
    }

}

Here, I look for the "Authorization" header and, if it starts with "GoogleLogin auth=", then I parse it for the token, create an instance of an app-custom GoogleAccountTokenHeaderCredential object (implementing Credential) and ask the app's IdentityStore to authorize it.

Returning to the IdentityStore implementation, that meant adding another validate override:

@ApplicationScoped
public class NotesDirectoryIdentityStore implements IdentityStore {
    /* snip */

    public CredentialValidationResult validate(GoogleAccountTokenHeaderCredential credential) {
        try {
            try(DominoClient client = DominoClientBuilder.newDominoClient().build()) {
                String dn = client.validateCredentialsWithToken(appConfig.getAuthServer(), credential.headerValue());
                return new CredentialValidationResult(null, dn, dn, dn, getGroups(dn));
            }
        } catch (NameNotFoundException e) {
            return CredentialValidationResult.NOT_VALIDATED_RESULT;
        } catch (AuthenticationException | AuthenticationNotSupportedException e) {
            return CredentialValidationResult.INVALID_RESULT;
        }
    }
}

This one looks similar to the UsernamePasswordCredential one above, but takes instances of my custom Credential class - automatically picked up by the default implementation. I decided to be a little extra-fancy here: the particular Domino API in question supports custom token-based authentication to look up a distinguished name, and I made use of that here. That takes us one level deeper:

public class GoogleAccountTokenHandler implements CredentialValidationTokenHandler<String> {
    public static final String AUTH_PREFIX = "GoogleLogin auth="; //$NON-NLS-1$
    
    @Override
    public boolean canProcess(Object token) {
        if(token instanceof String authHeader) {
            return authHeader.startsWith(AUTH_PREFIX);
        }
        return false;
    }

    @Override
    public String getUserDn(String token, String serverName) throws NameNotFoundException, AuthenticationException, AuthenticationNotSupportedException {
        String userTokenPair = token.substring(AUTH_PREFIX.length());
        int slashIndex = userTokenPair.indexOf('/');
        if(slashIndex >= 0) {
            String tokenVal = userTokenPair.substring(slashIndex+1);
            Token authToken = CDI.current().select(TokenBean.class).get().getToken(tokenVal)
                .orElseThrow(() -> new AuthenticationException(MessageFormat.format("Unable to find token \"{0}\"", token)));
            return authToken.user();
        }
        throw new AuthenticationNotSupportedException("Malformed token");
    }

}

This is the Domino-specific one, inspired by the Jakarta Security API. I could also have done this lookup in the previous class, but this way allows me to reuse this same custom authentication in any API use.

Anyway, this class uses another method on TokenBean:

@ApplicationScoped
public class TokenBean {    
    @Inject @AdminUser
    Database adminDatabase;

    /* snip */

    public Optional<Token> getToken(String tokenValue) {
        return adminDatabase.openCollection("Tokens") //$NON-NLS-1$
            .orElseThrow(() -> new IllegalStateException("Unable to open view \"Tokens\""))
            .query()
            .readColumnValues()
            .selectByKey(tokenValue, true)
            .firstEntry()
            .map(entry -> new Token(entry.get("Token", String.class, ""), entry.get("User", String.class, ""))); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$ //$NON-NLS-4$
    }
}

There, it looks up the requested token in the "Tokens" view and, if present, returns a record indicating that token and the user it was created for. The latter is then returned by the above Domino-custom GoogleAccountTokenHandler as the authoritative validated user. In turn, the JEE NotesDirectoryIdentityStore considers the credential validation successful and returns it back to the auth mechanism. Finally, the TokenAuthentication up there sees the successful validation and notifies the container about the user that the token mapped to.

Summary

So that turned into something of a long walk at the end there, but the result is really neat: as far as my app is concerned, the "GoogleLogin" tokens - as looked up in an NSF - are just as good as username/password authentication. Anything that calls httpServletRequest.getUserPrincipal() will see the username from the token, and I also use this result to spawn the Domino session object for each request.

Once all these pieces are in place, none of the rest of the app has to have any knowledge of it at all. When I implement the API to return the actual RSS feed entries, I'll be able to just use the current user, knowing that it's guaranteed to be properly handled by the rest of the system beforehand.

Bonus: Java 16

This last bit isn't really related to the above, but I just want to gush a bit about newer techs. My plan is to deploy this app using my Open Liberty Runtime, which means I can use any Open Liberty and Java version I want. Java 16 came out recently, so I figured I'd give that a shot. Though I don't think Liberty is officially supported on it yet, it's worked out just fine for my needs so far.

This lets me use the features that have come into Java in the last few years, a couple of which moved from experimental/incubating into finalized forms in 16 specifically. For example, I can use records, a specialized type of Java class intended for immutable data. Token is a perfect case for this:

public record Token(String token, String user) {
}

That's the entirety of the class. Because it's a record, it gets a constructor with those two properties, plus accessor methods named after the properties (as used in the examples above). Neat!
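
A quick usage sketch, with made-up values:

Token token = new Token("8e6845e089457af25303abc6f53356eb60bdb5f8", "CN=Foo Fooson/O=SomeOrg");
String value = token.token(); // accessors are named after the components, not getToken()/getUser()
String user = token.user();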

Another handy new feature is pattern matching for instanceof. This allows you to simplify the common idiom where you check if an object is a particular type, then cast it to that type afterwards to do something. With this new syntax, you can compress that into the actual test, as seen above:

@Override
public boolean canProcess(Object token) {
    if(token instanceof String authHeader) {
        return authHeader.startsWith(AUTH_PREFIX);
    }
    return false;
}

Using this allows me to check the incoming value's type while also immediately creating a variable to treat it as such. It's essentially the same thing you could do before, but cleaner and more explicit now. There's more of this kind of thing on the way, and I'm looking forward to the future additions eagerly.

The Joyful Utility of Optionals in Java

Apr 23, 2021, 11:11 AM

Tags: java
  1. The Cleansing Flame of Null Analysis
  2. Quick Tip: JDK Null Annotations for Eclipse
  3. The Joyful Utility of Optionals in Java

A while back, I talked about how I had embraced nullness annotations in several of my projects. However, that post predated Domino's laggardly move to Java 8 and so didn't discuss one of the tools that came to Java core in that version: java.util.Optional.

The Concept

Java's Optional is a passable implementation of the Option type concept that's been floating around programming circles for a good long time. It's really come to the fore with the proliferation of large-platform languages like Swift and Kotlin that have the concept built in to the syntax.

Java's implementation doesn't go that far - there's no special syntax for them, at least not yet - but the concept remains the same. The idea is that, when you embrace Optional use, your code will no longer return null, with the goal of cutting down on the pernicious NullPointerException. While you may still return an empty value, you will be doing so in a way that allows (and forces) the downstream programmer to check for that case more cleanly and adapt it into their code.

In Practice

For an example of where this sort of thing is well-suited, take a look at this snippet of code, which is likely to be pretty universal in Domino code:

View users = db.getView("Users");
Document user = users.getDocumentByKey(userName, true);
if(user != null) {
	// ...
}

Most of the time, this will run fine. However, if, say, the "Users" view is unavailable (if it was replaced in a design refresh, or another developer removed it, or it's reader-inaccessible to the current user), you'll end up with a NullPointerException in the second line. When you have the code in front of you, the problem is obvious quickly, but that will require you to crack open the app and look into what's going on before you can even start actually fixing the trouble. That's also the "good" version of the case - if you're using code that separates the #getView from the call to #getDocumentByKey with a bunch of other code, it'll be harder to track down.

Imagine instead if the Domino API used Optional, and returned an Optional<View> in #getView and similar for #getDocumentByKey. That could look more like this:

View users = db.getView("Users")
	.orElseThrow(() -> new IllegalStateException("Unable to open view \"Users\""));
users.getDocumentByKey(userName, true).ifPresent(user -> {
	// ...
});

The idea is the same, and you'll still get an exception if "Users" is unavailable, but it will be immediately obvious in the error message what it is that you need to fix.

This also forces the programmer to conceptualize that case in a way that they wouldn't necessarily have without the need to "unwrap" the Optional. Maybe it's actually okay if "Users" doesn't exist - in that case, you could just return early and not even run the risk of an exception at all. Or maybe there's a way to recover from that - maybe look up the user another way, or create the view on the fly.

Implementing It

When spreading Optional across an existing codebase or writing a new one around it, I've found that there are some important things to keep in mind.

First, since Optional is implemented as just a class and not special syntax, I've found that the best way to implement it in your code is to go all or nothing: if you decide you want to use Optional, do it everywhere. The trouble if you mix-and-match is that you'll run into some cases where you still do if(foo != null) { ... }; since an empty Optional is non-null, that habit will bite you.

Usually, though, that's not too much trouble: when you start to change your code, you'll run into tons of type-related problems around code like that, so you'll be cued in to change it while you're working anyway. Just make sure to not leave yourself null-returning method traps elsewhere.

Another fun gotcha you'll hit early is that Optional.of(foo) will throw a NullPointerException if foo is null. That's the JDK being (reasonably) pedantic: if you want to wrap a potentially-null value, you have to instead do Optional.ofNullable(foo). While irksome at first, it drives home the point that one of the virtues of Optional is that it forces you, the programmer, to consider the null case much more than you did previously.
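
To illustrate the distinction (someLookup here is just any method that might return null):

String foo = someLookup();
Optional<String> safe = Optional.ofNullable(foo); // an empty Optional if foo is null
Optional<String> strict = Optional.of(foo);       // throws NullPointerException if foo is null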

Unwrapping

Optional also provides a number of ways to "unwrap" the value or deal with it, and it's useful to know about them for different situations.

The first one is just someOptional.get(), which will return the contained value or immediately throw a NoSuchElementException if the Optional is empty. I've found that this is best for when you're very confident that the value is present, either because you already checked with isPresent or because an empty value is a sign that the system is so fubar already that there's no virtue in even customizing the exception.

Somewhat safer than that, though, is what I had above: someOptional.orElseThrow(() -> ...), which will either return the wrapped value or throw a customized exception if it's empty. This is ideal for either halting execution with a message for how the developer/admin can fix it or for throwing a useful exception declared in your documentation for a downstream programmer to catch.

There's also someOptional.orElse(someOtherValue). For example, take this case where you have a configuration API that returns an Optional<String> for a given lookup key:

String emailTarget = getConfig("EmailTarget").orElse("admin@company.com");

That's essentially the "get-or-default" idiom. Now, you can actually do .orElse(null) if you want, though that's often not the best idea. Still, that can be handy if you're adapting existing code that does a null check immediately already, or if you're writing to another existing API that does use null.

Optional also has a #map method, which may seem a bit weird at first, but can be useful on its own, and is particularly well-suited to use with results from the Stream API like #findFirst. It lets you transform a non-null value into something else and thereby change your Optional to then be an Optional of whatever the transformed value type is. For example:

String userName = findLoggedInUser()
	.map(UserObj::getUserName)
	.orElse("Anonymous");

In this case, findLoggedInUser returns an Optional<UserObj>. Then, the #map call gets the username if present and returns an Optional<String>, which is thereby either unwrapped or turned into "Anonymous".

Optional As A Parameter

Up until this point, I've been talking about Optional as a method return value, but what about using it as a parameter? Well, you can, but the consensus is that you probably shouldn't.

At first, I chafed against this advice - after all, Optional is finally an in-JDK way to express an optional or nullable parameter, so why not use it? The arguments about how it's less efficient for the compiler, while true, didn't sway me much - after all, compilers improve, and it's generally better to do something correct than something microscopically faster.

However, the real thing that convinced me was realizing that, if you have Optional as a method parameter, you're still going to have to null-check it anyway, since you don't control who might be calling your code. So, not only will you have to check and unwrap the Optional, you'll still have to have a null guard for the Optional itself, defeating much of the point.

I do think that there's some utility in Optional parameters in your own in-implementation code, not exposed to the outside. It can be useful to indicate that a parameter value is intended to be ignored outright, for example. As one comment on that SO thread mentions, an Optional parameter allows for three states: null, an empty value, or a present value. You could use null to indicate that you don't want that value checked at all, and then have an empty value mean something particular in the code. But that kind of fiddly hair-splitting is exactly why it should be of limited use and even then very-clearly documented for yourself.

If Java ever gets syntax sugar for Optional (say, declaring the parameter as Object? and having calls passing null auto-wrap them into Optional or something), then this could change.

Interaction With Null Analysis

Finally, I'll mention how using Optional interacts with null annotation analysis.

To begin with, if you're using Eclipse, the immediate answer will be "poorly". Because Eclipse doesn't ship with nullness hints for the core JDK, it won't know that Optional.of returns a non-null value, defeating the entire point. For that, you'll want the lastNPE.org nullness annotations, which will provide such hints. I've found that they're still not perfect here, causing Eclipse to frequently complain about someOptional.orElse(null), but the experience becomes good enough.

IntelliJ, for the record, has such hints built-in, so you don't need to worry there.

Once you have that sorted out, though, they go together really well. For example, pairing the two can help you find cases where, in your Optional translation journey, you're checking whether the Optional itself is null: with null-annotated code, the compiler can see that it will never be null as such and will tell you to change it.
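
As a small illustration, reusing the hypothetical Optional-returning getView from earlier:

Optional<View> users = db.getView("Users");
if(users != null) {       // null analysis flags this check: the Optional itself is never null
	// ...
}
if(users.isPresent()) {   // the check you actually want
	// ...
}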

So I advise pairing the two: use Optional for return values everywhere, especially if you're making an API for downstream consumption, and then pair them with null annotations to make sure your own implementation code is correct and to provide hints for opting-in users.

Using Server-Sent Events on Domino

Mar 30, 2021, 8:57 AM

Tags: jakartaee java

Though Domino's HTTP stack infamously doesn't support WebSocket, WebSocket isn't the only game in town when it comes to getting push-type information to HTTP clients. HTML5 also brought with it the less-famous Server-Sent Events standard, which is basically half of WebSocket: it allows the server to push events to the client, but it's still a one-way communication channel.

The Standard

The technique that SSE uses is almost ludicrously simple: the client makes a request and the server replies that it will provide text/event-stream content and keeps the connection open. Then, it starts emitting events delimited by blank lines:

HTTP/1.1 200 OK
Content-Type: text/event-stream;charset=UTF-8



event: timeline
data: hello

event: timeline
data: hello

Unlike WebSocket, there's no Upgrade header, no two-way communication, and thereby no special requirements on the server. It's so simple that you don't even really need a server-side library to use it, though it still helps.

In Practice

I've found that, though SSE is intentionally far less capable than WebSocket, it actually provides what I want in almost all cases: the client can receive messages instantaneously from the server, while the server can receive messages from the client by traditional means like POST requests. Though this is less efficient and flexible than WebSocket, it suits perfectly the needs of apps like server monitors, chat rooms, and so forth.

Using SSE on Domino

JAX-RS, the Java REST service framework, provides a mechanism for working with server-sent events pretty nicely. Baeldung, as usual, has a splendid tutorial covering the API, and a chunk of what I say here will be essentially rehashing that.

However, though Domino ships with JAX-RS by way of the ExtLib, the library only implements JAX-RS 1.x, which predates SSE support. Fortunately, newer JAX-RS implementations work pretty well on Domino, as long as you bring them in in a compatible way. In my XPages Jakarta EE Support project, I did this by way of RESTEasy, and there did the legwork to make it work in Domino's OSGi environment. For our example today, though, I'm going to skip that and build a small webapp using the com.ibm.pvc.webcontainer.application extension point. In theory, this should also work XPages-side with my project, though I haven't tested that; it might require messing with the Servlet response cache.

The Example

I've uploaded my example to GitHub, so the code is available there. I've aimed to make it pretty simple, though there's always some extra scaffolding to get this stuff working on Domino. The bulk of the "pom.xml" file is devoted to two main things: packaging an app as an OSGi bundle (with RESTEasy embedded) and generating an update site with site.xml to import into Domino.

Server Side

The real work happens in TimeStreamResource, the JAX-RS resource that manages client connections and also, in this case, happens to emit the messages as well.

This resource, when constructed, spawns two threads. The first one monitors a BlockingQueue for new messages and passes them along to the SseBroadcaster:

try {
    String message;
    while((message = messageQueue.take()) != null) {
        // The producer below may send a message before setSse is called the first time
        if(this.sseBroadcaster != null) {
            this.sseBroadcaster.broadcast(this.sse.newEvent("timeline", message)); //$NON-NLS-1$
        }
    }
} catch(InterruptedException e) {
    // Then we're shutting down
} finally {
    this.sseBroadcaster.close();
}

Here, I'm using the Sse#newEvent convenience method to send a basic text message. In practice, you'll likely want to use the builder you get from Sse#newEventBuilder to construct more-complicated events with IDs and structured data types (usually JSON).
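
For example, a richer event with an ID and a JSON payload might look something like this (TimeMessage is a hypothetical JSON-serializable class, not part of the example project):

OutboundSseEvent event = this.sse.newEventBuilder()
    .name("timeline")                               // the event name the client listens for
    .id(String.valueOf(System.currentTimeMillis())) // lets clients resume via Last-Event-ID
    .mediaType(MediaType.APPLICATION_JSON_TYPE)
    .data(TimeMessage.class, new TimeMessage(message))
    .build();
this.sseBroadcaster.broadcast(event);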

A BlockingQueue implementation (such as LinkedBlockingDeque) is ideal for this task, as it provides a simple API to add objects to the queue and then wait for new ones to arrive.

The second one emits a new message every 10 seconds. This is just for the example's sake, and would normally be actually looking something up or would itself be a listener for events it would like to broadcast.

try {
    while(true) {
        String eventContent = "- At the tone, the Domino time will be " + OffsetDateTime.now();
        messageQueue.offer(eventContent);

        // Note: any sleeping should be short enough that it doesn't block HTTP restart
        TimeUnit.SECONDS.sleep(10);
    }
} catch(InterruptedException e) {
    // Then we're shutting down
}

Browsers can register as listeners just by issuing a GET request to the API endpoint:

@GET
@Produces(MediaType.SERVER_SENT_EVENTS)
public void get(@Context SseEventSink sseEventSink) {
    this.sseBroadcaster.register(sseEventSink);
}

That will register them as an available listener when broadcast events are sent out.

Additionally, to simulate something like a chat room, I added a POST endpoint to send new messages beyond the periodic ten-second broadcast:

@POST
@Produces(MediaType.TEXT_PLAIN)
public String sendMessage(String message) throws InterruptedException {
    messageQueue.offer(message);
    return "Received message";
}

That's really what there is to it as far as "business logic" goes. There's some scaffolding in the Servlet implementation to get RestEasy working nicely and manage the ExecutorService and the obligatory "plugin.xml" to register the app with Domino and "web.xml" to account for Domino's old Servlet spec, but that's about it.

Client Side

On the client side, everything you need is built into every modern browser. In fact, the bulk of "index.html" is CSS and basic HTML. The JavaScript involved is blessedly slight:

function sendMessage() {
    const cmd = document.getElementById("message").value;
    document.getElementById("message").value = "";
    fetch("api/time", {
        method: "POST",
        body: cmd
    });
    return false;
}
function appendLogLine(line) {
    const output = document.getElementById("output");
    output.innerText += line + "\n";
    output.scrollTop = output.scrollHeight;
}
function subscribe() {
    const eventSource = new EventSource("api/time");
    eventSource.addEventListener("timeline",  (event) => {
        appendLogLine(event.data);
    });
    eventSource.onerror = function (err) {
        console.error("EventSource failed:", err);
    };
}

window.addEventListener("load", () => subscribe());

The EventSource object is the core of it and is a standard browser component. You give it a path to watch and then listen for events and errors. fetch is also standard and is a much-nicer API for dealing with HTTP requests. In a real app, things might get a bit more complicated if you want to pass along credentials and the like, but this is really it.

Gotchas

The biggest thing to keep in mind when working with this is that you have to be very careful to not block Domino's HTTP task from restarting. If you don't keep everything in an ExecutorService and account for InterruptedExceptions as I do here, you're highly likely to run into a situation where a thread will keep chugging along indefinitely, leading to the dreaded "waiting for session to finish" loop. The ExecutorService's shutdownNow method helps you manage this - as long as your threads have escape hatches for the InterruptedException they'll receive, you should be good.

I also, admittedly, have not yet tested this at scale. I've tried it out here and there for clients, but haven't pulled the trigger on actually shipping anything with it. It should work fine, since it's using standard JAX-RS stuff, but there's always the chance that, say, the broadcaster registry will fill up with never-ending requests and will eventually bloat up. The stack should handle that properly, but you never know.

Beyond any worries about the web container, it's also just a minefield of potential threading and duplicated-work trouble. For example, when I first wrote the example, I found that messages weren't shared, and then that the time messages could get doubled up. That's because JAX-RS, by default, creates a new instance of the resource class for each request. Moving the declaration from the Application class's getClasses() method (which creates new objects) to getSingletons() (which reuses single objects) fixed the first problem. After that, I found that the setSse method was called multiple times even for the singleton, and so I moved the thread spawning to the constructor to ensure that they're only launched once.
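
For reference, the singleton registration fix looks roughly like this (the Application subclass name is whatever your app already uses):

import java.util.Collections;
import java.util.Set;

import javax.ws.rs.core.Application;

public class TimeApplication extends Application {
    private final TimeStreamResource timeStream = new TimeStreamResource();

    @Override
    public Set<Object> getSingletons() {
        // One shared resource instance means one broadcaster and one set of threads
        return Collections.<Object>singleton(timeStream);
    }
}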

Once you have the threading sorted out, though, this ends up being a pretty-practical path to accomplishing the bulk of what you would normally do with WebSocket, even with an aging HTTP stack like Domino's.

Domino HttpService and the NSF Router Project

Mar 18, 2021, 3:27 PM

Tags: domino java

In my last post and its predecessor, I talked about my tinkering at the XspCmdManager level of Domino's HTTP stack and then more specifically about the com.ibm.designer.runtime.domino.adapter.HttpService class.

The Stack

Now, HttpService is about as generic a name as you can get for this sort of thing, and it doesn't really tell you what it represents. You can think of Domino's HTTP stack since at least the 8.5 era as having two cooperating parts: the core native portion that handles HTTP requests in basically the same way as Domino always did, plus the Java layer as organized by XspCmdManager. The Java layer gets "right of first refusal" for any incoming request that wasn't handled by a DSAPI plugin: before routing the request to the legacy HTTP code, Domino asks XspCmdManager if it'd like to handle it, and only takes care of it at the native layer if Java says no.

XspCmdManager on its own doesn't do much. It accepts the JNI calls from the native side, but otherwise quickly passes the buck to LCDEnvironment (I assume the "LCD" here stands for "Lotus Component Designer"). LCDEnvironment, in turn, really just aggregates registered handlers and dispatches requests. It does a little work to handle exception cases more cleanly than XspCmdManager would, but it's mostly just a dispatcher.

The things that it dispatches to, though, are the HttpServices. These are registered by using the com.ibm.xsp.adapter.serviceFactory IBM Commons extension point, such as here in the plugin.xml form:

<extension point="com.ibm.commons.Extension">
  <service type="com.ibm.xsp.adapter.serviceFactory" class="org.openntf.nsfrouter.NSFRouterServiceFactory" />
</extension>

The class you register there is an implementation of IServiceFactory, which supplies zero or more HttpService implementations on request.

As a side note, I've been using this extension point for years and years, but never before to actually handle HTTP requests. It's extremely convenient in that it's something you can register that is loaded up immediately when the HTTP task starts and is notified as it's terminating, giving you a useful lifecycle without having to wait for a request to come in. I learned about it from the OpenNTF Domino API team and it's been a regular part of my toolkit since.

The HttpService

So that brings us to the HttpService implementation classes themselves. Once LCDEnvironment has gathered them all together, it asks each one in turn (via #isXspUrl) if it can handle a given URL. If any of them say that they can, then it calls the #doService method on each in turn (based on the #getPriority method's return value) until one says that it handled it.

There are a few main HttpService implementations in action on Domino:

  • com.ibm.domino.xsp.module.nsf.NSFService, which handles in-NSF XPages and resources
  • com.ibm.domino.xsp.adapter.osgi.OSGIService, which handles OSGi-registered servlets and webapps
  • com.ibm.domino.xsp.module.nsf.StaticResourcesService, which helps serve static resources

These services also tend to go another layer deeper, passing actual requests off to ComponentModule implementations like NSFComponentModule. That's beyond the scope of what I'm talking about today, but it's interesting to see just how much the Domino stack is basically one giant webapp that contains progressively smaller bounded webapps, like a Matryoshka doll.

For those keeping track, we're about here on a typical XPages call stack:

     at com.ibm.domino.xsp.module.nsf.NSFComponentModule.doService(NSFComponentModule.java:1336)
     at com.ibm.domino.xsp.module.nsf.NSFService.doServiceInternal(NSFService.java:662)
     at com.ibm.domino.xsp.module.nsf.NSFService.doService(NSFService.java:482)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:357)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:313)
     at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)

For our purposes this week, the #isXspUrl and #doService methods on HttpService are our stopping points.

NSF Router Service

In a Twitter conversation yesterday, Per Lausten gave me the idea of using this low level of access to implement improved in-NSF routing. That is to say, if you want "foo.nsf/some/nice/url/here" to actually load up "index.xsp?path=nice/url/here" or the like. Generally, if you want to do this, you either have to set up Web Site rules in names.nsf or settle for next-best options like "index.xsp/nice/url/here".

Since an HttpService comes in at a low-enough level to tackle this, though, it's entirely doable to improve this situation there. So, this morning, I did just that. This new project is a pretty simple one, with all of the action going on in one class.

The way it works is that it looks for a ".nsf" URL and, when it finds one, attempts to load a file or classpath resource named "nsfrouter.properties". The contents of this is a Java Properties file enumerating regex-based routing you'd like. For example:

foo/(\\w+)=somepage.xsp?bit=bar
baz=somepage.xsp

When found, the class loads up the rules and then uses them to check incoming URLs.

The #doService method then picks up that URL, does a String#replaceAll call to map it to the target, and then redirects the browser over:

NSF Router in action

The user still ends up at the "uglier" URL, but that's the safest way to do it without breaking on-page references.
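In code, the heart of that mapping boils down to something like the following - a standalone sketch with made-up names, not the actual class from the project:

import java.util.Map;
import java.util.Properties;

public class NsfRouterSketch {
    // Given the loaded nsfrouter.properties rules, map an in-NSF path to its
    // target URL. Returns null when no rule matches, so the request can fall
    // through to the other HttpServices.
    public static String resolve(Properties rules, String nsfPath) {
        for(Map.Entry<Object, Object> rule : rules.entrySet()) {
            String pattern = (String)rule.getKey();
            String target = (String)rule.getValue();
            if(nsfPath.matches(pattern)) {
                // replaceAll carries any capture groups through to the target
                return nsfPath.replaceAll(pattern, target);
            }
        }
        return null;
    }
}

With the example rules above, resolve(rules, "foo/123") would come back as "somepage.xsp?bit=bar", and the service can then issue the redirect.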

I felt like that was a neat little exercise, and one that's not only potentially useful on its own but also serves as a good way to play around with these somewhat-lower-level Domino components.

Carving Out A Workspace On Apple Silicon

Feb 17, 2021, 11:24 AM

Last month, I mentioned my particular computer trouble, in that my trusty iMac Pro has been afflicted by an ever-worsening fan noise problem. I'd just been toughing it out, since there's never a good time to lose your main machine for a week or two, and my traveler MacBook Escape wasn't up to the task of being a full replacement.

After about a month's delay, my fresh new M1 MacBook Air arrived a few weeks ago and I've been putting it through its paces.

The Basics

As pretty much anyone who has one of these computers has said, the performance is outstanding. Even with emulation, most of the tasks I do during the day feel the same as they did on my wildly-more-expensive iMac Pro. On top of that, the fact that this thing doesn't even have a fan is both a technical marvel and a godsend as far as ambient room noise is concerned.

For continuity's sake, I used Migration Assistant to bring over my iMac's environment, and everything there went swimmingly. The good-citizen apps I use like MarsEdit and Tower were already ported to ARM, while the laggards (unsurprisingly, the ones made by larger companies with more resources) remain Intel-only but run just fine in emulation.

Hardware

For a good while now, I've had the iMac screen flanked by a pair of similarly-sized but far-inferior Asus screens. With the iMac's lovely screen out of the setup for now, I've switched to using those two Asus screens as my primary ones, with the pretty-but-tiny laptop screen sitting beneath them. It works well enough, though I do miss the retina resolution and general brightness of the iMac.

The second external screen itself was a bit of an issue. Of themselves, these M1 Macs, either for good reason or to mark them as low end, support only two screens total, the laptop screen included. So I ended up ordering one of the StarTech DisplayLink adapters. I expected it to be a crappy experience overall, with noticeable lag, but it actually works much more smoothly than I'd have expected. Other than the fact that it doesn't support Night Shift and some wake-from-sleep slowness that I attribute to it, it actually feels just like a normally-attached monitor.

Also, in order to regain my precious Ethernet connection and (sort of) clean up the dongle situation, I got one of these Anker USB-C docks. I've only had it for a day, but it seems to be working like you'd want so far. So that's nice.

Eclipse and Java

Here's where I've hit my first bout of jankiness, though it's not too surprising. In general, Eclipse and Java work just fine through emulation, and I can even keep running tests and web servers using the libnotes.dylib from the Notes client as I want.

I've found times where tests lag or fail now when they didn't before, though, and that's a little ominous. Compiling locally with NSF ODP, which spawns a sub-process that loads the Notes libraries, usually works, though now I've set up another Domino server on my network to handle that reliably.

I've also noticed some trouble in one of my Eclipse workspaces where it periodically spends a long time (10+ minutes) "Building" without explaining what exactly it's doing, and this is new behavior since the switch. I can't say what the core trouble is there. It's my largest active workspace, so it could be that file polling or other system-call-intensive work is just slower, or it could be an artifact of moving it from machine to machine. I'll probably scrap it and make a new workspace with the same projects to see if it alleviates it.

This all should improve in time, though, when Eclipse, AdoptOpenJDK, and HCL all release macOS ARM ports. IntelliJ has an experimental ARM port out, and I'm curious how that does its thing. I'll probably spend some time kicking the tires on that, though I still find Eclipse's UI much more conducive to the "lots of semi-related projects" working style I have. Visual Studio Code is in a similar boat, so that'll be good for the JavaScript development I do (under protest).

In the mean time, I've done some tinkering with how I could get a fully-native Eclipse environment running and showing up on my Mac, including firing up the venerable XQuartz to run Eclipse as an X client from a Linux VM in the basement. While that technically works, the experience is... well, I'll charitably call it "not Mac-like". Still, it's kind of neat and would in theory push aside any number of concerns.

Docker

Here's the real trouble I'm butting my head against. I've taken to using Docker more and more for various reasons: running app servers with a Domino runtime, running Domino outright, and (where my trouble is now) performing cross-compilation and other native-specific compilation tasks. For example, for one of my clients, I have a script that mounts the project directory to a Docker container to perform a full Maven build with NSF compilation and compile-time tests, without having to worry about the user's particular Notes or Domino installation.

However, while Docker is doing Herculean work to smooth the process, most of the work I do ends up hitting one of the crashing snags in poor qemu, which crop up particularly with Java compilation tasks. Since compiling Java is basically all I do all day, that leaves me hoping either for improvements in future versions or a Linux/aarch64 port of Domino (or at least libnotes.so).

In the mean time, I'm making use of Docker's network transparency to run Docker on an x64 VM and set DOCKER_HOST locally to point to it. For about half of what I need, this works great: I can run Domino servers and Notes-enabled webapps this way, and I just change which address I'm pointing to to interact with them. However, it naturally removes the possibility of connecting with the local filesystem, at least without pairing it with some file-share jankiness, so it's not a replacement all around. It also topples quickly into the bizarre inner Docker world: for example, I wanted to set up Codewind to work remotely, but the instructions I found for getting started with your own server were not helpful.

Future Use

Still, despite the warts, I'd say this laptop is performing admirably, and better than one would normally expect. Plus, it's a useful exercise in finding more ways to make my workflow less machine-specific. Though I still bristle at the thought of going full Eclipse Che and working out of a web browser, at least moving some more aspects of my workspace to float above the rough seas is just good practice.

I'll probably go back to using the iMac Pro as my main machine once I get it fixed, even if only for the display, but this humble, low-end M1 has planted its flag more firmly than a MacBook Air normally has any right to.

Java Travelogue: The Care and Feeding of Locales

Feb 14, 2021, 1:37 PM

Tags: java
  1. Java Hiccups
  2. Bitwise Operators
  3. Java Grab Bag 2
  4. Java Travelogue: The Care and Feeding of Locales
  5. More Notes on Filesystem and Charset Portability

Over time, people using the NSF ODP Tooling project have periodically hit troubles with files using non-ASCII filenames, as well as some related encoding issues.

Now, I know what you're thinking: why don't people hitting this trouble just be Americans and not use languages with accents? And yes, obviously, that's the optimal solution. However, given that, apparently, most people on the planet are not American, it's for the best to not write software that completely falls apart when encountering an umlaut.

When working to fix this, I found some areas where the fix was pretty obvious, and others where the trouble was a bit more insidious. I figure it'll be potentially useful to write these down, either for others running into similar trouble or my own future self next time I write overly-American code.

Early Encounters: ZIP Files

The earliest place people encountered trouble was with the handling of ZIP files when transferring packages around. When compiling remotely, the local Maven plugin ZIPs up the ODP and related support files (OSGi sites, etc.) for transfer to the server, which then unzips them. This led to a problem wherein the handling of file names in ZIP files is wildly inconsistent over platforms and locales.

Fortunately, this one has a clean fix: when using ZipOutputStream and ZipInputStream (which were my preferred mechanisms), you can specify your encoding:

try(OutputStream fos = Files.newOutputStream(packageZip, StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)) {
    try(ZipOutputStream zos = new ZipOutputStream(fos, StandardCharsets.UTF_8)) {
        // Add entries to the ZIP here
    }
}

// And to read:
try(InputStream is = Files.newInputStream(zipFilePath)) {
    try(ZipInputStream zis = new ZipInputStream(is, StandardCharsets.UTF_8)) {
        // Iterate over entries here
    }
}

Since I control both sides of the operation in this case, I can then be confident that it will use UTF-8 across the board.

Next Problem: Filesystem Restrictions

The next problem I ran into actually happened when I was setting up a compiler server in a Docker container. One of the design elements in the example projects is an agent containing umlauts, based on a reported problem. When I tried compiling this project in a Docker-housed Domino server, I ran into this trouble:

java.nio.file.InvalidPathException: Malformed input or input contains unmappable characters: Code/Agents/Example Agent with ref?r?ns.fa
    at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
    at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
    at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
    at sun.nio.fs.AbstractPath.resolve(AbstractPath.java:53)
    at org.openntf.nsfodp.compiler.servlet.ODPCompilerServlet.expandZip(ODPCompilerServlet.java:241)

Basically, it was trying to write out what it considered an illegal filename and choked on it.

I first spent some time double-checking my ZIP handling, since I was assuming that the trouble was that the name it got out of the ZIP file was corrupted, hence the "?" instead of "ë". This search brought me to this Stack Overflow question, which is asking about the same exception and which talks about the locale of the underlying system. The gist of it is that Java uses a semi-standard property (sun.jnu.encoding) to interpret a lot of things, filename mapping included, and it derives this from the system locale.

I hopped into the Domino container to see what locale it uses (by way of echo $LANG) and saw that it's "C.utf8". I like the sound of that "utf8" part, but the "C" part is different from the comfy "en_US" that I'm used to, and likely causes Java to be more restrictive. Uncharacteristically, the typical "en_US" setup actually avoids this trouble, causing Java NIO to allow all sorts of characters in filenames.
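If you want to see what a given JVM has decided on without spelunking through the internals, a quick check of the relevant properties does the trick:

import java.nio.charset.Charset;

public class LocaleCheck {
    public static void main(String[] args) {
        // sun.jnu.encoding is what drives filename mapping; file.encoding drives default I/O
        System.out.println("sun.jnu.encoding: " + System.getProperty("sun.jnu.encoding"));
        System.out.println("file.encoding:    " + System.getProperty("file.encoding"));
        System.out.println("defaultCharset:   " + Charset.defaultCharset());
    }
}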

So I started seeing what I could do by way of setting ENV variables as part of the Dockerfile, but then realized that it'd be better to fix this in a way that doesn't depend on external configuration like that.

Java NIO

Here I realized that I didn't actually need to write these files out to the filesystem at all. Over a year ago, I wrote part 1 of an unfinished series talking about the Java NIO filesystem API from Java 7. That API exists for a number of reasons, and the best way to dive into it is to replace your uses of java.io.File, java.io.FileInputStream, etc. with it, which I did in the NSF ODP Tooling a while ago.

What struck me, then, was that this earlier work also separated out the specifics of filesystem access. And, critically, Java ships with a ZIP file system provider that lets you point at a ZIP or JAR file and treat it like any old filesystem. The on-disk project representation I wrote for the compiler uses this NIO API as its entrypoint. By skipping the step of extracting the ODP from the ZIP to the filesystem, I could remove that entire problem from my view.

The Fiddly Parts

This process was mostly smooth, but there are a few fiddly parts that I had to account for:

  1. You have to use newFileSystem when you crack open a ZIP this way, rather than trying to open it by "jar:file" URL directly. Additionally, you have to pass a Map of options including "create":"true" to make it work (there's a small sketch of this after the list).
  2. Paths.get, which is a common mechanism for creating either a full or relative path, is a bit insidious. Since those paths are created using the default system filesystem, you can't just pass them to methods like resolve for paths created from another filesystem type. Accordingly, I replaced uses of that with methods based on a context filesystem.
  3. Nested ZIPs aren't supported. That is, they exist like other files in there, but you can't reach further inside of them with a "jar:jar:file" URL. So, when building the classpath for compilation, I have to extract them. I suppose this part is technically a bug if those files have non-ASCII names, but that's rare enough to hopefully not be an issue.
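For the first two of those points, a minimal sketch of working with the ZIP filesystem provider looks like this (the file names are placeholders; adjust to taste):

import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Collections;

public class ZipFsExample {
    public static void copyIntoZip(Path sourceFile, Path zipFile) throws Exception {
        // Open (or create, thanks to the "create" option) the ZIP as its own FileSystem
        URI uri = URI.create("jar:" + zipFile.toUri());
        try(FileSystem zipFs = FileSystems.newFileSystem(uri, Collections.singletonMap("create", "true"))) {
            // Paths must come from the ZIP filesystem itself, not Paths.get
            Path target = zipFs.getPath("/").resolve(sourceFile.getFileName().toString());
            Files.copy(sourceFile, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}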

Once I dealt with those, though, things went surprisingly smoothly. I even refactored earlier code to use this, replacing more-complicated streaming logic with conceptually-simpler file-copying logic. My guess is that this new route is slower, but the difference is negligible for my needs, so I'll take the higher abstraction here.

Stream Locales

Unfortunately, while that helped a bit and is definitely conceptually neat, it didn't solve all my trouble. If I recall correctly, at this point, I was able to get the file imported, but the agent name itself was mangled in Notes, something that didn't happen when I compiled it locally.

This brought me to looking into locales used when reading and writing XML from the ZIP or filesystem. Hypothetically, I had done this cleanly. My file-reading utility methods were very similar, just opening up an InputStream (which is too low-level to care about encoding) and passing it along to IBM Commons utilities to interpret it:

public static String readFile(Path path) {
    try(InputStream is = Files.newInputStream(path)) {
        return StreamUtil.readString(is);
    } catch(IOException e) {
        throw new RuntimeException(e);
    }
}

public static Document readXml(Path file) {
    try(InputStream is = Files.newInputStream(file)) {
        return DOMUtil.createDocument(is);
    } catch(IOException | XMLException e) {
        throw new RuntimeException(e);
    }
}

However, I realized that these were insidious traps, too. By not handling encoding on my side, I was leaving it up to the internals to pick a default encoding, which isn't guaranteed to be UTF-8 (even though it really should be for XML). StreamUtil.readString there has a variant that takes an encoding as the second argument, but I decided to instead handle this one step earlier. Rather than using InputStream, which deals with bytes directly, I decided to switch to Readers, which are more specialized for dealing with character sequences. The Files class provides methods to do this:

public static String readFile(Path path) {
    try(Reader r = Files.newBufferedReader(path, StandardCharsets.UTF_8)) {
        return StreamUtil.readString(r);
    } catch(IOException e) {
        throw new RuntimeException(e);
    }
}

public static Document readXml(Path file) {
    try(Reader r = Files.newBufferedReader(file, StandardCharsets.UTF_8)) {
        return DOMUtil.createDocument(r);
    } catch(IOException | XMLException e) {
        throw new RuntimeException(e);
    }
}

This way, it's explicit what I'm doing, and it allows for extra optimization at the NIO level if possible.

Writing Back Out

These rules also applied to writing back out. For the most part, Files.newBufferedWriter(..., StandardCharsets.UTF_8) was the way to go, though I did find one extra insidious bit:

try(PrintWriter writer = new PrintWriter(os)) {
    // ...
}

Here, PrintWriter doesn't have a character-set argument at all, and so one could be forgiven (hopefully) for just kind of assuming it'll use Unicode. However, delving into the implementation, it uses OutputStreamWriter's no-charset constructor, which in turn calls Charset.defaultCharset(), and there's your potential bug. Since I didn't actually need PrintWriter as such, I replaced this with a charset-specific call and all was well:

try(Writer writer = new OutputStreamWriter(os, StandardCharsets.UTF_8)) {
    // ...
}

Overall

I felt that this was a pretty good exercise to perform, not just because it'll be immediately useful for NSF ODP, but also because it's a good reminder to be more diligent about character encoding. And it's also just a good lesson for two critical parts of programming: take the higher abstraction when you can and be as explicit as possible in your intent.

By switching to using the ZIP filesystem implementation, I was able to remove an entire step and problem domain from my plate. Now, the code that reads and writes filenames server-side should be able to run on basically any locale setting, without concern for the restrictions of the filesystem (within reason). The code is simpler, the operations are the same whether it's working with the filesystem directly or not, and reading the ZIP'ed ODP should actually be slightly more efficient.

And for the rest, explicitly picking your character set is just good practice. Even in a case where the documentation says that it will default to UTF-8, I think it's better to do it this way, so anyone reading your code can see what you're doing without resting on implied behavior. Certainly, you can be too explicit in places where relying on natural behavior makes sense, but this highlighted that character sets aren't one of those cases.

fontconfig, Java, and Domino 11

Jan 29, 2021, 12:09 PM

Tags: java
  1. AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Domino 11's Java Switch Fallout
  3. fontconfig, Java, and Domino 11
  4. Notes/Domino 12.0.2 Fallout

In my last post, I quickly mentioned some trouble I had run into with fontconfig and Poi, in the context of configuring a Docker-based Domino server. However, I think it deserves its own post, so I have something to point to if others run into the same trouble down the line.

The Upshot

The upshot of the issue is that, if you're going to use Poi or other graphics-adjacent Java libraries in Domino 11 on Linux, you'll need fontconfig and potentially some other support files installed on your system. If you have any GUI stuff installed, they'll probably be there, but it's common for them to be missing on headless servers.

For the official Domino Docker image, which uses Red Hat's package system, I wrote this Dockerfile for my derivative version:

FROM domino-docker:V1101FP2_10202020prod
USER root
RUN yum install --assumeyes fontconfig urw-fonts
USER notes

On Debian-based systems, I believe you just need apt install fontconfig.

The Details

AdoptOpenJDK builds of Java apparently don't include the same font-related support files that the Oracle ones did, and that results in calls made to the AWT layer throwing NullPointerExceptions at various times related to getting font information. This has shown up in a couple issues over in the openjdk-support project on GitHub, with two representative ones being:

https://github.com/AdoptOpenJDK/openjdk-support/issues/80

java.lang.NullPointerException
  at sun.awt.FcFontManager.getDefaultPlatformFont(FcFontManager.java:76)
  at sun.font.SunFontManager$2.run(SunFontManager.java:433)
  ...

https://github.com/AdoptOpenJDK/openjdk-build/issues/693

java.lang.NullPointerException
  at sun.awt.FontConfiguration.getVersion(FontConfiguration.java:1264)
  at sun.awt.FontConfiguration.readFontConfigFile(FontConfiguration.java:219)
  at sun.awt.FontConfiguration.init(FontConfiguration.java:107)
  ...

Domino 11 switched from IBM's proprietary variant of J9 to OpenJ9, and this is another one of the little fiddly details that isn't quite the same between the two.

Most commonly, I've found this crop up when using Poi, specifically calling autosizeColumns when generating a spreadsheet, but in theory any number of things like that will trip across this. Unfortunately, the internal JVM classes aren't terribly helpful in their error reporting, since they get several method calls in just assuming that all is well with the world before bailing with the NPEs like above.

It's a mild annoyance to deal with, but fortunately one with a straightforward fix, at least once you know what the trouble is.

Getting Started with Hotwire in a Java Webapp

Jan 12, 2021, 5:19 PM

Whenever I have a great deal of discretion over how a web app is made these days, I like to push to see how simple I can make the front end portion. I spend some of my client time writing heavy client-JS front ends in React and Angular and what-have-you, and, though I get why they are good, I kind of hate them all.

One of the manifestations of my desires has been this very blog, where I not only set out to try some interesting current tools on the Java side, but also challenged myself heavily to use little to no JavaScript. On that front, I was tremendously successful - and, in fact, the only JavaScript on here is the Turbolinks library, which intercepts same-app links and updates the changed parts inline, without the server knowing about the "partial refresh" going on.

Since then, Turbolinks merged with its cousin Stimulus and apotheosized into Hotwire, which is somewhere in between a JavaScript framework and a manifesto. Specifically, it's a manifesto to my liking, so I've been champing at the bit to use it more.

Hotwire Overview

The "Hotwire" name is a cheeky truncation of HTML-over-the-wire, which itself is a neologism for how the web has historically worked: your server sends HTML, and then your browser does stuff with that. It "needs" a new name to set it apart from full-JS apps, which amount to basically sending an application to the browser, having it initialize the app, and then having the app do what would otherwise be the server's job by way of shuttling JSON around.

Turbo is the part that subsumed Turbolinks, and it focuses on enhancing existing HTML and providing a few web components to bring single-page-application niceties to server-rendered apps. The "Drive" part is Turbolinks, so that was familiar to me. What interested me next was Turbo Frames.

Turbo Frames

If you've ever used the XPages Dojo Tab Container's partialRefresh property before, Turbo Frames will be familiar. There are two main ways you can go about using it: making a "frame" that contains some navigable content (say, a form) that will then refresh in-place or making a lazy-loaded frame that pulls from another URL. The latter is what interested me now, and is what carries similar benefits to the Tab Container. It lets you serve the main page and then defer the expensive generation of an inner part without having to write your own JavaScript to do an API call or otherwise populate the section.

In my case, I wanted to do something very similar to the example. I have my main page, then a sidebar that can be potentially complicated to generate. So, I set up a Turbo Frame using this bit of JSP:

<turbo-frame id="links" src="${pageContext.request.contextPath}/links"></turbo-frame>

The only difference from the example, really, is the bit of EL in ${...}, which just makes sure that the final URL adapts to wherever the app is hosted.

The "links" resource there is another MVC controller that renders a different JSP page, truncated like:

<html>
    <head>
        <script type="text/javascript" src="${pageContext.request.contextPath}/webjars/hotwired__turbo/7.0.0-beta.2/dist/turbo.es5-umd.js"></script>
    </head>
    <body>
        <turbo-frame id="links">
            <!-- expensive content here -->
        </turbo-frame>
    </body>
</html>

The <turbo-frame id="links"> on the initiating page matches up with the one in the embedded page to figure out what to extract and render.

One little side note here is my use of WebJars to bring in Turbo. This isn't an NPM-based project, so there's no package.json bringing the dependency in, but I also didn't want to just paste the JS into my project. Fortunately, WebJars does yeoman's work: it makes various JS libraries available in Servlet-friendly Java JAR format, giving you a JAR with the JS from whatever the library is in META-INF/resources. In turn, an at-least-reasonably-modern servlet container will serve files up from there as if they're part of your main app. That way, you can just use a Maven dependency and not have to worry.

A Hitch: 406 Not Acceptable

Edit 2021-01-13: Thanks to a new release of Turbo, this workaround is no longer needed.

When I first put this together, I saw that Turbo was doing its job of fetching from the remote URL, but it was getting a 406 Not Acceptable response from the server. It took me a minute to figure out why - the URL was correct, it was just a normal GET request, and nothing immediately stood out as a problem in the headers.

It turned out that the trouble was in the Accept header. To work with other Turbo components, Frames makes a request with a header like Accept: text/html; turbo-stream, text/html, application/xhtml+xml. That first one - text/html; turbo-stream - is problematic. I'm not sure if it's the presence of a qualifier at all on text/html, the space, or the lack of an = (as in text/html;charset=UTF-8), but Liberty didn't like it.

Since I'm not (yet, at least) using Turbo Streams, I decided to filter this out on the server. Since MVC is built on JAX-RS, I wrote a JAX-RS request filter to find any Accept values of this type and strip them out:

@Provider
@PreMatching
public class TurboStreamAcceptFilter implements ContainerRequestFilter {
    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        MultivaluedMap<String, String> headers = requestContext.getHeaders();
        if(headers.containsKey(HttpHeaders.ACCEPT)) {
            List<String> cleaned = headers.get(HttpHeaders.ACCEPT).stream()
                .map(accept -> {
                    String[] vals = accept.split(",\\s*"); //$NON-NLS-1$
                    List<String> localClean = Arrays.stream(vals)
                        .filter(val -> val.indexOf(';') < 0)
                        .collect(Collectors.toList());
                    return String.join(", ", localClean); //$NON-NLS-1$
                })
                .collect(Collectors.toList());
            headers.put(HttpHeaders.ACCEPT, cleaned);
        }
    }
}

Since those filters happen before almost anything else, this cleared up the trouble.

Summary

Setting the Accept quirk aside, this was a pleasant success, and I look forward to using this more. I've found the modern Java stack of JAX-RS + CDI + MVC + simple JSP to be a delight, and Hotwire slots perfectly-smoothly into it. I still quite enjoy rendering HTML on the server and the associated perk of not having to duplicate business logic on both sides. Next time I have an app that requires a bit of actual JavaScript, I'll likely throw Stimulus into the mix here.

The Difficulties of Domino Project Dependencies

Dec 31, 2020, 9:38 AM

Tags: java maven

This post is a drum I've been banging for a long time, from nagging the dev team in the IBM days through to formally requesting it in HCL's Ideas Portal. That idea there has been "Likely to implement" for a little while now, which is heartening, and either way I figured it'd be useful to have a proper blog post explaining the trouble and what a useful better version would be.

The Core Trouble

The main thing I'm talking about here is the act of having a third-party or (particularly) open-source project that depends on Domino artifacts - namely, Notes.jar, the NAPI, and the XPages UI components. I have more than a few such projects, so it's something I deal with pretty much daily.

When you're dealing with an XPages app in an NSF, this isn't really an issue: all the parts you need are there and are part of the classpath. You just reference lotus.domino.Database or com.ibm.xsp.extlib.util.ExtLibUtil and don't even give it a second thought. When you have a project outside of an NSF or Designer, though, you start to have to worry about this.

OSGi Projects

For OSGi-based projects, this means that you need to have a Target Platform that points to the XPages artifacts and then either have a variant of that which includes a packaged Notes.jar or also include Notes.jar in your classpath another way. In Eclipse, this might be accomplished by adding Notes.jar to your active JVM and referencing a Notes or Domino installation's OSGi directories - this is something the XPages SDK helps with.

The immediate trouble this involves is if you want to build this project outside of Eclipse - most commonly now with Maven. This is where the IBM Domino Update Site for Build Management came in, which is a cleanly-packaged p2 update site of the XPages artifacts and Notes.jar, suitable for use with Maven+Tycho and any other tool (like Eclipse) that gets its dependencies out of a p2 repository. Unfortunately, it hasn't been updated since its initial release, and contains just the original 9.0.1 versions.

To aid with creating updated versions of that, I created the generate-domino-update-site tool a while back. Since no one outside HCL can legally share update sites themselves, the tool is the next-best thing: point it at Notes or Domino and it'll make one for you in a consistent way.

With either of those routes, though, there's still a gotcha: you still need to have each developer set up the update site for themselves, and it's only consistent across projects because the community settled on the notes-platform Maven property as a URI pointing to the update site. This is as opposed to something like Eclipse-the-IDE's repositories, which (as a virtue of being open-source) are publicly available and can be referenced freely.

Overall, it's a drag having to bring-your-own-site, but at least the use of notes-platform as a pseudo-standard smooths it out.

Non-OSGi Projects

Things get stickier with non-OSGi projects, though. With OSGi projects, the dependency mechanism lines up with the way the artifacts are delivered from the vendor: they all have OSGi metadata (or have a ready-made hook for it, like Notes.jar) and so just making a p2 site out of them makes them ready to go. They don't, though, have Maven metadata, and so referencing them that way takes extra processing.

I've gone about this two ways to date:

  • The aforementioned update site project also has a mechanism for "Mavenizing" update sites. You point the tool at an existing p2 site (like one created by the first step), pick a groupId for it, and it'll install the files into your local repository.
  • The P2 Maven Resolver plugin, which cuts out that middle step and uses a p2 repository as a source of Maven dependencies directly. This route is more "clever", but some tools get a little shaky with it.

Either way, the experience is okay but not perfect. There are some oddities to do with the different dependency mechanisms between OSGi and Maven, but overall it gets the job done.

The core trouble with it is that it's even less consistent across developers/projects than the Tycho notes-platform idiom. I've personally gone through a couple iterations of the Mavenized layout, with different inter-dependency schemes and groupIds. That leads to drift and incompatibility among projects. For example, I use the xpages-runtime project for client work to do my lingering XPages development, and there's some friction in keeping the dependency schemes between that and the client project in line, even though I'm the only developer.

What I'd Like

What I'd really like would be an official HCL-provided or -sanctioned repository for p2 and Maven use for these artifacts. I've pitched the idea of OpenNTF hosting this, since I already have the tools and servers on hand, though we'd have to come up with a way to agree about who is legally allowed to access it. All the better would be consistently-updated HCL-hosted repositories, where they could link access to one of the various HCL accounts we tend to have.

The best route would be to publish it on a repository that doesn't require authentication. While I'm making wishes, attaching Javadoc would be a classy touch too.

Anyway, that's the gist of it. It's one of the two main thorns in my side when doing Domino-targeted development (the other being initializing the runtime itself in the process), and it'd save me a whole lot of heartache if it had a proper solution.

Quick Tip: JDK Null Annotations for Eclipse

Dec 10, 2020, 3:17 PM

  1. The Cleansing Flame of Null Analysis
  2. Quick Tip: JDK Null Annotations for Eclipse
  3. The Joyful Utility of Optionals in Java

A few years back, I more-or-less found the religion of null analysis, and I've stuck with it with at least my larger projects.

One of the sticking points all along, though, has been Eclipse's lack of knowledge about what code not annotated with nullness annotations does, with the biggest blind spot being the JDK itself. For example, take this bit of code:

BigDecimal foo = BigDecimal.valueOf(10).add(BigDecimal.ONE);

That will never throw a NullPointerException, but, since BigDecimal#valueOf isn't annotated at all, Eclipse doesn't know that for sure, and so it may flag it as a potential problem. To deal with this, Eclipse has the concept of external annotations, where you can associate a specially-formatted file with a set of classes and Eclipse will act as if those classes had nullness annotations already.

Core JDK Annotations

Unfortunately - and as opposed to things like IntelliJ - Eclipse for some reason doesn't ship with this knowledge out of the box. For a while, I just dealt with it, throwing in technically-unnecessary checks around things like Optional#get that are guaranteed to return non-null. The other day, though, I decided to look into it more and found lastNPE.org, which is a community-driven project to provide such external annotations.

Better still, they also provide an Eclipse plugin (404 expected on that link - Eclipse knows what to do with it) to apply rules from your project's Maven configuration to the IDE. This not only covers applying external annotations, but also synchronizing compiler configurations.

Sidebar: The Eclipse Compiler

By default, a Java project is compiled with javac, the stock Java compiler. Eclipse maintains its own compiler, varyingly called ECJ or (as shorthand) JDT. Eclipse's compiler is, unsurprisingly, well-geared towards IDE use, and part of that is that it can flag and process a great deal of semantic and stylistic issues that the stock compiler doesn't care about. This includes null annotations.

Maven Configuration

With this information in hand, I went to configure my project's Maven build. The first step was to change it to use Eclipse's compiler, since I had recently switched the project away from being Tycho-based (which uses ECJ by default). This can be done by configuring maven-compiler-plugin:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.8.1</version>
      <configuration>
        <failOnWarning>false</failOnWarning>
        <compilerId>jdt</compilerId>
        <compilerArguments>
          <properties>${project.basedir}/../../config/org.eclipse.jdt.core.prefs</properties>
        </compilerArguments>
      </configuration>
      <dependencies>
        <dependency>
          <groupId>org.eclipse.tycho</groupId>
          <artifactId>tycho-compiler-jdt</artifactId>
          <version>1.7.0</version>
        </dependency>
      </dependencies>
    </plugin>
  </plugins>
</build>

I use an inline dependency on Tycho's tycho-compiler-jdt to provide the compiler. I stuck with version 1.7.0 for now because Tycho 2.0+ uses the newer core runtime that requires Java 11, which this project can't yet move to for platform-lag reasons. I also find it useful to set <failOnWarning>false</failOnWarning> here because ECJ throws many more (legitimate) warnings than javac. Long-term, it's cleanest to keep this enabled.

I also configured Eclipse's compiler settings like I wanted for one of the project's modules, then copied the settings file to a common location. That's where the compilerArguments bit comes from.

Then, I went through the available libraries from lastNPE.org, found the ones that match the libraries we use, and added them as dependencies in my root project:

<properties>
  <lastnpe-version>2.2.1</lastnpe-version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.lastnpe.eea</groupId>
    <artifactId>jdk-eea</artifactId>
    <version>${lastnpe-version}</version>
    <scope>provided</scope>
  </dependency>
  <!-- and so on -->
</dependencies>

Once I updated the project configurations, Eclipse churned for a while, and then I got to work cleaning up the giant pile of new errors and warnings it brought up. As usual with null checks, this was a mix of "oh, nice catch" and "okay, sure, technically, but come on". For example, it flags System.out.println as a potential NPE because System.out is assignable - this is true, but realistically my app's code is going to be the least of my concerns when System.out is set to null.

In any event, I was pleased as punch to find this. Now, I have a way to not only properly check nullness with core classes and common libraries, but it's a way that's shared among the whole project team automatically and enforced at compile time.

Java Discontinuities in Practice

Nov 23, 2020, 11:04 AM

Tags: java

Earlier this year, I wrote a post about the lay of the Java land, and in it I mentioned the oddities of post-8 Java releases as well as the then-oncoming namespace conversion in Jakarta EE. Those changes are a bit more "real" now, so I think it's worth taking the opportunity to expand on them and how they relate to Java with Domino.

Jakarta EE 9

With Jakarta EE 9 officially out now, I think it's all the more important to keep an eye on what these changes are. For Jakarta's part, there's a convenient post up on Eclipse's site detailing the specifics of what's going on, and most of what I say here is really just going to rehash that.

The "namespace conversion" in question is the switch from javax.* to jakarta.* for EE-related packages like Servlet, due to Oracle not granting rights to the "javax" term. This has involved a lot of fiddly work internally for the Jakarta project as a whole, and all of the included specs have received a major-version bump to reflect the break. In general, these new versions are functionally equivalent to the previous release, but use the different package names - so Servlet 5 has the same capabilities as 4, JSF 3 as 2.3, and on down the line.

A bit of a quirk in this is that not all classes in the javax.* namespace will be moving to jakarta.*, because not everything in there was part of Java EE. For example, Swing is in javax.swing, but it's not going anywhere. It gets fiddlier, too, especially when it comes to XML. The JDK traditionally (more on that in a bit) contained a couple distinct technologies wrapped up under the javax.xml package space, but some of those are actually part of Java EE and make the transition to Jakarta. For example, the javax.xml.transform package (covering XSLT) is part of what was originally termed "Java API for XML Processing", or "JAX-P", and is still part of the Java SE core. The javax.xml.bind package (covering mapping between XML and Java objects) was part of the "Java Architecture for XML Binding" API, or "JAX-B", and is not part of Java SE anymore. It's now "Jakarta XML Binding" and is receiving a package change to jakarta.xml.bind. I think it's cases like these that will be hairy for a lot of people not doing full Jakarta EE 9 work.

For the most part, this won't have an effect on Java development on Domino for a good while. Domino has never tracked changes in the EE world - XPages was a partial fork of Java EE 5 and that's been about it. I think that the ways it will affect Domino development (other than if you just outright do Jakarta EE development, which you should) is that code examples and third-party libraries are going to gradually transition over to the new namespaces, making them incompatible with code in the Domino stack. This will certainly affect things like my XPages Jakarta EE Support project, where future versions of the implementation components won't be usable directly if they use the Servlet spec, even if they don't require Servlet 3+ functionally.

So I think it's worth being aware of what's going on, even if there's not (yet) anything you need to do about it. The same applies to the changes in the core Java runtime itself.

Java 11 and Beyond

After 8, Java switched to a peculiar numbering system, where new major-version-numbered releases come out every six months, but only the ones that come out every three years are Long-Term-Service releases. As of right now, the current version of Java is 15, but 11 is the active LTS one, and so 11 is effectively the "real" current version for concerns like platform vendors. Java 8 is now in the same spot that Java 6 was for a while, where it's been the baseline expectation for a long time, and it's a slog of a process to move the full community past it.

Still, Java 11 is certainly hitting critical mass now. Eclipse-the-IDE started requiring it in the 2020-09 release, and the various app servers have either supported it for a while or are on the cusp of doing so.

There are a lot of nice things added to the language in the releases past 8, but they've also gotten more aggressive about removing things from the core Java SE runtime, and those changes are the things likely to be immediately noticeable Domino-wise. As I mentioned above, JAX-B was always technically an EE specification, but it was shipped with Java SE for a good long time. As of Java 11, though, it's gone, and instead must be either provided by the app server or brought in as an explicit dependency. The same goes for some less-important packages, such as org.omg - though that package sounds fun, it stands for "Object Management Group" and it just included some classes used for CORBA.

I imagine that few Domino developers use JAX-B or CORBA directly, but our old nemesis Notes.jar sure does! If you're doing any project builds outside of Domino that make use of the Notes.jar API, you likely already have or will soon run into this. For Tycho, I made a patch fragment that provides the required API to the com.ibm.notes.java.api bundle a good while back. For non-Tycho projects, your best bet is generally to include a dependency on the GlassFish-packaged variant and a pre-3.0 version of the Jakarta XML Bind API.

There will be some further removals down the line, like RMI Activation, but I don't think any currently on the horizon will be as pertinent as those.

Java With Domino Roundtable Recording

Nov 17, 2020, 4:37 PM

Tags: java

I hosted my "Java With Domino" roundtable earlier today, and I think it went pretty well! We ended up having just about the ideal number of participants, and it was not only great hearing how people feel on the topic, but also seeing and hearing from everyone.

I've put the video up on YouTube:

I'm thinking of doing more of these, and kind of making them a looser, more-casual companion to OpenNTF's webinar series. I don't know whether they'd all be on similar topics or what, but it seems like it'll be worth continuing.

Upcoming Event: Java With Domino Roundtable

Nov 12, 2020, 3:31 PM

Tags: java

The other day, I floated the idea of running an unstructured roundtable discussion of working with Java either on or accessing Domino, and I think it'll be worth giving a shot.

Since Java with Domino is in a weird place, the goal would be to discuss the various ways that people are or want to use it. So that can include XPages, OSGi, REST services generally, Jakarta EE, Spring, Vert.x, and so forth. I'd also like it to be open generally. I imagine I'll have some preliminary remarks, but otherwise the goal is to be less like a webinar and more like a free-flowing discussion, in the vein of the "happy hour" and "coffee break" rooms from CollabSphere and Digital Week.

My current plan is to run it on short notice, next week:

Tuesday, November 17th
2:00 PM US Eastern (19:00 UTC)
https://zoom.us/j/99514285138
Password: Computers!

I'll share the password I come up with on Twitter on the day of the event, so look for it there.

Weekend Domino-Apps-in-Docker Experimentation

Jun 28, 2020, 6:37 PM

  1. Weekend Domino-Apps-in-Docker Experimentation
  2. Executing a Complicated OSGi-NSF-Surefire-NPM Build With Docker
  3. Getting to Appreciate the Idioms of Docker

For a couple of years now, first IBM and then HCL have worked on and adapted community work to get Domino running in Docker. I've observed this for a while, but haven't had a particular need: while it's nice and all to be able to spin up a Domino server in Docker, it's primarily an "admin" thing. I have my suite of development Domino servers in VMs, and they're chugging along fine.

However, a thought has always gnawed at the back of my mind: a big pitch of Docker is that it makes not just deployment consistent, but also development, taking away a chunk of the hassle of setting up all sorts of associated tools around development. It's never been difficult, per se, to install a Postgres server, but it's all the better to be able to just say that your app expects to have one around and let the tooling handle the specifics for you. Domino isn't quite as Docker-friendly as Postgres or other tools, but the work done to get the official image going with 11.0.1 brought it closer to practicality. This weekend, I figured I'd give it a shot.

The Problem

It's worth taking a moment to explain why it'd be worth bothering with this sort of setup at all. The core trouble is that running an app with a Notes runtime is extremely annoying. You have to make sure that you're pointing at the right libraries, they're all in the right place to be available in their internal dependency tree, you have to set a bunch of environment variables, and you have to make sure that you provide specialized contextual info, like an ID file. You actually have the easiest time on Windows, though it's still a bit of a hurdle. Linux and macOS have their own impediments, though, some of which can be showstoppers for certain tasks. They're impediments worth overcoming to avoid having to use Windows, but they're impediments nonetheless.

The Setup

But back to Docker.

For a little while now, the Eclipse Marketplace has had a prominent spot for Codewind, an IBM-led Eclipse Foundation project to improve the experience of development with Docker containers. The project supplies plugins for Eclipse, IntelliJ, and VS Code / Eclipse Che, but I still spend most of my time in Eclipse, so I went with the former.

To begin with, I started with the default "Open Liberty" project you get when you create a new project with the tooling. As I looked at it, I realized with a bit of relief that there's not too much special about the project itself: it's a normal Maven project with war packaging that brings in some common dependencies. There's no Maven build step that expects Docker at all. The specialized behavior comes (unsurprisingly, if you use Docker already) in the Dockerfile, which goes through the process of building the app, extracting the important build results into a container based on the open-liberty runtime image, bringing in support files from the project, and launching Liberty. Nothing crazy, and the vast majority of the code more shows off MicroProfile features than anything about Docker specifically.

Bringing in Domino

The Docker image that HCL provides is a fully-fledged server, but I don't really care about that: all I really need is the sweet, sweet libnotes.so and associated support libraries. Still, the easiest way to accomplish that is to just copy in the whole /opt/hcl/domino/notes/11000100/linux directory. It's a little wasteful, and I plan to find just what's needed later, but it works to do that.

Once you have that, you need to do the "user side" of it: the ID file and configuration. With a fully-installed Domino server, the data directory balloons in size rapidly, but you don't actually need the vast majority of it if you just want to use the runtime. In fact, all you really need is an ID file, a notes.ini, and a names.nsf - and the latter two can even be massively trimmed down. They do need to be custom for your environment, unfortunately, but at least it's much easier to provide just a few files than spin up and maintain a whole server or run the Notes client locally.

Then, after you've extracted the juicy innards of the Domino image and provided your local resources, you can call NotesInitExtended pointing to your data directory (/local/notesdata in the HCL Docker image convention) and the notes.ini, and voila: you have a running app that can make local and remote Notes native API calls.

Example Project

I uploaded a tiny project to demonstrate this to GitHub: https://github.com/jesse-gallagher/domino-docker-war-example. All it does is provide one JAX-RS resource that emits the server ID, but that shows the Notes API working. In this case, I used the Darwino Domino NAPI (which I really need to refresh from upstream), but Domino JNA would also work. Notes.jar would too, but I think you'll need one of those projects to do the NotesInitExtended call with arguments.

The Dockerfile for the project goes through the steps enumerated above, based on how the original example image does it, and was tweaked to bring in the Domino runtime and support files. I stripped the Liberty-specific stuff out of the pom.xml - I think that the original route the example did of packaging up the whole server and then pulling it apart in Docker image creation has its uses, but isn't needed here.

Much like the pom.xml, the code itself is slim and doesn't explicitly refer to Docker at all. I have a ServletContextListener to init and term the Notes runtime, as well as a Filter implementation to init/term the request thread, but otherwise it just calls the Notes API with no fuss.
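The request-thread part of that is the kind of thing that looks roughly like this - a from-memory sketch rather than the code from the example project, assuming Notes.jar on the classpath and a Servlet 4 container (where Filter's init/destroy have default implementations):

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;

import lotus.domino.NotesThread;

@WebFilter("/*")
public class NotesThreadFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Attach the Notes runtime to this request thread for the duration of the call
        NotesThread.sinitThread();
        try {
            chain.doFilter(request, response);
        } finally {
            NotesThread.stermThread();
        }
    }
}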

Larger Projects

I haven't yet tried this with larger projects, but there's no reason it shouldn't work. The build-deploy-run cycle takes a bit more time with Docker than with just a Liberty server embedded in Eclipse normally, but the consistency may be worth it. I've gotten used to running a killall -KILL java whenever an errant process gloms on to my Notes ID file and causes the server to stop being able to init the runtime, but I'd be glad to be done with that forever. And, for my largest project - the one with the hundreds of XPages and CCs - I don't see why that wouldn't work here too.

Normal Domino Projects

Another route that I've considered for Domino in Docker is to use it to deploy NSFs and OSGi projects. This would involve using the Domino image for its intended purpose of running a full server, but configuring the INI to just serve HTTP, and having the Dockerfile place the built OSGi plugins and NSFs in their right places. This would certainly be much faster than the build-deploy-run cycle of replacing NSF designs and deploying the plugins to an Update Site NSF, though there would be a few hurdles to get over. Not impossible, though.


I figure I'll kick the tires on this some more this week - maybe try deploying the aforementioned giant XPages .war project to it - to see if it will fit into my workflow. There's a chance that the increased deployment times won't be worth it, and I won't really gain the "consistent with production" advantages of Docker when the way I'm developing the app is already a wildly-unsupported configuration. It might be worth it if I try the remote mode of Codewind, though: I have some Liberty servers that Jenkins deploys to, but it'd be even-better to be able to show my running app to co-developers to work on something immediately, instead of waiting for the full build. It's worth some investigation, anyway.

Managed Beans to CDI

Jun 19, 2020, 1:50 PM

  1. Java Services (Not the RESTful Kind)
  2. Java ClassLoaders
  3. Managed Beans to CDI
  4. The Myriad Idioms For Finding Implementations In Java

When I was getting familiar with modern Java server development, one of the biggest conceptual stumbling blocks by far was CDI. Part of the trouble was that I kind of jumped in the deep end, by way of JNoSQL's examples. JNoSQL is a CDI citizen through and through, and so the docs would just toss out things like how you "create a repository" by just making an interface with no implementation.

Moreover, CDI has a bit of the "Maven" problem, where, once you do the work of getting familiar with it, the parts that are completely baffling to newcomers become more and more difficult to remember as being unusual.

Fortunately, like how coming to Maven by way of Tycho OSGi projects is "hard mode", coming to CDI by way of a toolkit that uses auto-created proxy objects is a more difficult path than necessary. Even better, XPages developers have a clean segue into it: managed beans.

JSF Managed Beans

XPages inherited the original JSF concept of managed beans, where you put definitions for your beans in faces-config.xml like so:

<managed-bean>
	<managed-bean-name>someBean</managed-bean-name>
	<managed-bean-class>com.example.SomeBeanClass</managed-bean-class>
	<managed-bean-scope>application</managed-bean-scope>
	<managed-property>
		<property-name>database</property-name>
		<value>#{database}</value>
	</managed-property>
</managed-bean>

Though the syntax isn't Faces-specific, the fact that it is defined in faces-config.xml demonstrates what a JSF-ism it is. Newer versions of JSF (not XPages) let you declare your beans inline in the class, skipping the XML part:

package com.example;
// ...
@ManagedBean(name="someBean")
@ApplicationScoped
public class SomeBeanClass {
	@ManagedProperty(value="#{database}")
	private Database someProp;
}

These annotations were initially within the javax.faces package, highlighting that, while they're a new developer convenience, it's still basically the same JSF-specific thing.

While all this was going on (and before it, really), the Enterprise JavaBeans (EJB) spec was chugging along, serving some similar concepts but it really is kind of its own, all-consuming beast. I won't talk about it much here, in large part because I've never used it, but it has an important part in this history, especially when we get to the "dependency injection" parts.

Move to CDI

Since it turns out that managed beans are a terrifically-useful concept beyond just JSF, Java EE siphoned concepts from JSF and EJB to make the obtusely named Contexts and Dependency Injection spec, or CDI. CDI is paired with some associated specs like Common Annotations and Inject to make a new bean system. With a switch to CDI, the bean above can be tweaked to something like:

package com.example;
// ...
@Named("someBean")
@ApplicationScoped
public class SomeBeanClass {
	@Inject @Named("database")
	private Database someProp;
}

Not wildly different - some same-named annotations in a different package, and some semantic switches, but the same basic idea. The difference here is that this is entirely divorced from JSF, and indeed from web apps in general. CDI specifically has a mode that works outside of a JEE/Servlet container and could work in e.g. a command-line program.

Newer versions of JSF (and other UI engines) deprecated their own version of this to allow for CDI to be the consistent pool of variable resolution and creation for the UI and for the business logic.

The Conceptual Leap

One of the things blocking me from properly grasping CDI at first was that @Inject annotation on a property. If it's just some Java object, how would that property ever be set? Certainly, CDI couldn't be so magical that I could just do new SomeBeanClass() and have someProp populated, right? Well, yes, that's right. No matter how gussied up your class definition is with CDI annotations, constructing an instance with new will pay no attention to any of it.

What got me over the hurdle is realizing that, in a modern web app in particular, almost everything you do runs through CDI. JSP request? That can resolve CDI. JAX-RS resource? That's managed by CDI. Filters? CDI. And, because those objects are all being instantiated by CDI, the CDI runtime can do whatever the heck it wants with them. That's why the managed property in the original example is so critical: it's the same idea, just managed by the JSF runtime instead of CDI.

That's how you can get to a class like the controller that manages the posts in this blog. It's annotated with all sorts of stuff: the JAX-RS @Path, the MVC spec @Controller, the CDI @RequestScoped, and, importantly, the @Inject'ed properties. Because the JAX-RS environment instantiates its resource classes through CDI in a JEE container, those will be populated from various sources. HttpServletRequest comes from the servlet environment itself, CommentRepository comes from JNoSQL as based on an interface in my non-JEE project (more on that in a bit), and UserInfoBean is a by-the-numbers managed bean in the CDI style.
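
To make that concrete, here's a rough sketch - not the actual blog code, just an illustration with hypothetical names and with imports elided as above - of what a class wearing all of those annotations can look like:

@Path("/posts")
@Controller
@RequestScoped
public class PostsController {
	@Inject
	HttpServletRequest request;    // provided by the Servlet integration

	@Inject
	CommentRepository comments;    // a JNoSQL repository interface, implemented by a proxy

	@Inject
	UserInfoBean userInfo;         // an ordinary CDI managed bean

	@GET
	@Produces(MediaType.TEXT_HTML)
	public String get() {
		// With MVC, the returned string names the view (e.g. a JSP) to render
		return "posts.jsp";
	}
}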

There's certainly more indirect "magic" going on here than in the faces-config.xml starting point, but it's a clear line from there to here.

The Weird Stuff

CDI covers more ground, though, and this is the sort of thing that tripped me up when I saw the JNoSQL examples. Among CDI's toolset is the creation of "proxy" objects, which are dynamic objects that intercept normal method calls with new behavior. This is a language-level Java feature that I didn't even know existed in this form, but it's been there since 1.3.

Dynamic scripting languages do this sort of thing as their bread and butter. In Ruby, you can define method_missing to be called when code calls a method that wasn't already defined, and that can respond however you'd like. Years ago, I used this to let you do doc.foo to get a document item value, for example. In Java, you get a mildly-less-loosey-goosey version of this kind of behavior with a proxy's InvocationHandler.
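
Since the proxy mechanism itself is plain Java and not CDI-specific, a tiny self-contained example may help show what's going on under the hood (the Greeter interface here is hypothetical):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
	interface Greeter {
		String greet(String name);
	}

	public static void main(String[] args) {
		// The handler receives every method call made on the proxy object
		InvocationHandler handler = (proxy, method, methodArgs) -> {
			if ("greet".equals(method.getName())) {
				return "Hello, " + methodArgs[0] + "!";
			}
			throw new UnsupportedOperationException(method.getName());
		};

		// Create an instance of Greeter with no "real" implementing class at all
		Greeter greeter = (Greeter) Proxy.newProxyInstance(
			Greeter.class.getClassLoader(),
			new Class<?>[] { Greeter.class },
			handler);

		System.out.println(greeter.greet("world")); // prints "Hello, world!"
	}
}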

CDI does this extensively, even when you might think it's not. With CDI, injected instances - at least for normal-scoped beans - are dynamic proxy objects, which allows it to not only inject field values, but also add wrapper code around method calls. This allows tools like MicroProfile Metrics to do things like count invocations, measure timings, and so forth without requiring explicit code beyond the annotations.

And then there are the whole-cloth new objects, like the JNoSQL repositories. To take one of the examples from jnosql.org, here's a full definition of a JNoSQL repository as far as the app developer is concerned:

public interface PersonRepository extends Repository<Person, Long> {

  List<Person> findByName(String name);

  Stream<Person> findByPhones(String phone);
}

Without knowledge of CDI, this is absolute madness. How could it possibly work? There's no code! The trick to it is that CDI ends up creating a dynamic proxy implementation of the interface, which is in turn backed by an InvocationHandler instance. That instance receives the incoming method call as a Method object and an array of parameters, parses the method name to look for a concept it handles, and either generates a result or throws an exception. Once you see the capabilities the stack has, the process to get from a JAX-RS class using @Inject PersonRepository foo to having that actually work makes more sense:

  • The JAX-RS servlet receives a request for the resource
  • It asks the CDI environment to create a new instance of the resource class
  • CDI runs through the fields and methods of the class to look for annotations it can handle, where it finds @Inject
  • It looks through its contributed extensions and finds JNoSQL's ServiceLoader-provided extension
  • One of the beans from that extension can handle creating Repository instances
  • That bean creates a proxy object, which handles method calls via invoke

Still pretty weird, but at least there's a path to understanding.

The Overall Importance

The more I use modern JEE, the more I see CDI as the backbone of the whole development experience. It's even to the point where it feels unsafe not to have it present managing objects, like everything is otherwise held together with shoestring. And its importance is further driven home by just how many specs depend on it. In addition to many existing technologies either switching to or otherwise supporting it, like JSF above, pretty much any new Jakarta EE or MicroProfile technology at least has it as the primary mechanism of interaction. Its importance can't be overstated, and it's worth taking some time either building an app with it or at least seeing some tutorials of it in action.

The RuntimeEnvironment Idiom

Jun 18, 2020, 9:16 AM

Tags: java xpages
  1. XPages: The UI Toolkit and the App Framework
  2. The RuntimeEnvironment Idiom
  3. NSF ODP Tooling 3.1.0: Dynamically Including Web Resources

One of the specific problems we encountered with my aforementioned client app - first when expanding it to include REST services, and then later when making it portable outside an NSF entirely - is dealing with varying mechanisms for interacting with the surrounding environment.

The Problem to Solve

The immediate way this distinction comes up when adding JAX-RS services or other OSGi servlets is trying to get a handle on the current Domino user session or context database. In an XPages app (including in code called in a plugin-based library), you can just do:

Session s = ExtLibUtil.getCurrentSession();

However, this will return null if called while processing an OSGi servlet. Instead, servlet code should call:

Session s = ContextInfo.getUserSession();

Same idea - they both return a session based on the current authenticated user from the HTTP stack - but they have different backing implementations. So my first pass was to coordinate these inside an AppUtil class in a method like this:

public static Session getSession() {
	if(FacesContext.getCurrentInstance() != null) {
		return ExtLibUtil.getCurrentSession();
	} else {
		return ContextInfo.getUserSession();
	}
}

This worked pretty well, until I added Tycho-based compile-time unit tests, which run in an OSGi environment where neither of those paths would return a session. So I had to add a fallback that would just eventually spawn a new NotesFactory.createSession() if it couldn't find another one.
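
A sketch of roughly where that method ended up - the real code differs, but this is the shape of the idea:

public static Session getSession() {
	if(FacesContext.getCurrentInstance() != null) {
		// XPages request: the variable resolver has a session for the current user
		return ExtLibUtil.getCurrentSession();
	}
	Session session = ContextInfo.getUserSession();
	if(session != null) {
		// OSGi servlet request
		return session;
	}
	try {
		// Fallback for environments like Tycho test runs: spawn a fresh session
		return NotesFactory.createSession();
	} catch(NotesException e) {
		throw new RuntimeException(e);
	}
}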

It's one thing for a getSession() method to balloon in logic, but Notes runtime access isn't the only problem like this. Take the case of validating model objects as part of the "save" process. In an XPages environment, validation errors should be reported as FacesMessages on the view root or, ideally, attached directly to the form control that represents the invalid field. In a REST service, though, the ConstraintViolationException should bubble right up to the top and be returned as an appropriately-formatted JSON object with a corresponding HTTP status code. Originally, we handled this similarly: we moved the FacesMessage stuff out of the model objects and into the AppUtil class and handled it with an if tree.
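
As a simplified sketch of that branching (the real app attaches messages to the specific controls rather than just the view root):

public static void handleViolations(Set<ConstraintViolation<?>> violations) {
	if(violations.isEmpty()) {
		return;
	}
	FacesContext context = FacesContext.getCurrentInstance();
	if(context != null) {
		// XPages: surface each violation as a FacesMessage on the current view
		for(ConstraintViolation<?> violation : violations) {
			context.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR, violation.getMessage(), violation.getMessage()));
		}
	} else {
		// REST: let the exception bubble up to be mapped to an HTTP response
		throw new ConstraintViolationException(violations);
	}
}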

The RuntimeEnvironment Class

Eventually, though, there was enough customizable behavior that these branching methods in one class got out of hand, and that's even before getting into cases where a class (like FacesContext) may not even be available at runtime at all. So I implemented a RuntimeEnvironment class as a service. It started out like this:

public interface RuntimeEnvironment {
	static final List<RuntimeEnvironment> knownEnvironments = AppUtil.findExtensions(RuntimeEnvironment.class).stream()
			.sorted((a, b) -> Integer.compare(b.getWeight(), a.getWeight()))
			.collect(Collectors.toList());

	public static RuntimeEnvironment current() {
		return knownEnvironments.stream()
			.filter(RuntimeEnvironment::isCurrent)
			.findFirst()
			.orElseGet(UnknownEnvironment::new);
	}

	boolean isCurrent();
	int getWeight();
}

The AppUtil.findExtensions method is a simplified wrapper around the IBM Commons ExtensionManager call to find services in a type-safe way.
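
For reference, a wrapper like that can be sketched roughly like this - note that the ExtensionManager signature here is from memory, so treat it as an assumption to verify against your IBM Commons version:

@SuppressWarnings("unchecked")
public static <T> List<T> findExtensions(Class<T> extensionClass) {
	List<?> found = ExtensionManager.findServices(
		null,                                // no pre-existing list to add to
		extensionClass.getClassLoader(),     // where to look for service declarations
		extensionClass.getName());           // the service name, e.g. "com.example.RuntimeEnvironment"
	return (List<T>)found;
}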

This allows me to define a series of RuntimeEnvironment implementations that may or may not be included in a given packaging of the app, while the isCurrent() and getWeight() methods allow me to distinguish between multiple valid environments to find the most specific. To get an idea of what I mean, here is the current suite of environment implementations:

(Image: RuntimeEnvironment type hierarchy)

These run a wide gamut. XPagesEnvironment and OSGiServletEnvironment are the big ones that kicked it off, but TychoEnvironment is there to handle compile-time tests, while NotesEnvironment lets the same code work in some utilities we launch from within the Notes client - and SWTRuntimeEnvironment allows those same tools to run outside of OSGi.
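
As an illustration (not the actual app code), an XPages-detecting implementation can be as small as this:

public class XPagesEnvironment implements RuntimeEnvironment {
	@Override
	public boolean isCurrent() {
		// If there's an active FacesContext, we're servicing an XPages request
		return FacesContext.getCurrentInstance() != null;
	}

	@Override
	public int getWeight() {
		// Higher than generic environments, so it wins when several match
		return 100;
	}
}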

Once I broke ground on these classes, the number of situations where they're useful quickly became obvious. Take, for example, resolving a variable. The on-Domino XPages implementation looks like what you'd expect:

@Override
public <T> @Nullable T resolveVariable(final String varName) {
	FacesContext context = FacesContext.getCurrentInstance();
	return (T) context.getApplication().getVariableResolver().resolveVariable(context, varName);
}

In the OSGiServletEnvironment and JakartaRuntimeEnvironment cases, though, I can use CDI instead:

@Override
public <T> @Nullable T resolveVariable(final String varName) {
	Instance<Object> instance = CDI.current().select(NamedLiteral.of(varName));
	return instance.isResolvable() ? (T)instance.get() : null;
}

It gets down to little things, too, like how the POST destination for form-based login can be /names.nsf?Login on Domino but /j_security_check on other webapp servers.

Seeing It Elsewhere

This sort of idiom is by no means anything I came up with. You can see it pretty frequently - in fact, I highlighted the way the IBM Commons stack does a very similar thing when running XPages outside Domino:

(Image: IBM Commons Platform hierarchy)

This serves essentially the same purpose, being filled with mechanisms for getting output streams, finding resource locations, and retrieving named objects.

Regular Use

Should you implement something like this for most apps? Probably not, no - for most even-moderately-complex XPages applications, having some if tests in a central util to distinguish between XPages and OSGi servlets should be enough. I think it's a useful instructional example, though, and it sure was critical in getting this massive thing working outside Domino. As we make our apps more portable, this is the sort of technique we should keep in mind.

XPages: The UI Toolkit and the App Framework

Jun 17, 2020, 9:19 PM

Tags: java xpages
  1. XPages: The UI Toolkit and the App Framework
  2. The RuntimeEnvironment Idiom
  3. NSF ODP Tooling 3.1.0: Dynamically Including Web Resources

Lately, one of my client projects has been picking up the pace on the years-long effort of taking a giant XPages app, making the business logic portable, and incrementally cutting down on the "XPage-iness" of it all. I expect that this will be a recurring source of blog posts, and this one is about distinguishing between "XPages the UI toolkit" and "XPages the web app framework".

XPages in an NSF

Coming from a Domino perspective, there is no distinction between the two, and that's largely because of the path we took to get here. Other than existing inside the same NSF, the distinction between "classic" Notes apps (web or client) and XPages couldn't be more stark. Legacy design elements are only shared in ways that are completely divorced from their original UI presentation (for the better), and the runtimes - as much as legacy elements can be said to have a "runtime" - are entirely distinct.

An XPages app can be thought of as conceptually a "normal" Java WAR-based webapp housed inside an NSF, and it has a lot of the trappings: classes in WEB-INF/classes, libraries in WEB-INF/lib, and an OSGi-style WebContent folder for miscellaneous files. It's not technically a normal webapp - there's no "web.xml" and the XPages outer "LCD" runtime is actually more like one giant webapp that acts like many - but it's close.

Critically, though, the Domino HTTP router only routes requests for ".xsp" files or "/xsp/" folders to your app's XPages environment, and this is the biggest technical and conceptual impediment. You can't (within an NSF) intercept just any incoming request and process it as you would in a normal webapp. You can kind of shim your way into it with servletFactory, but it's a fiddly process and limited to "/xsp/..." URLs.

Additionally, a running XPages app only exists in a very constrained way between requests. While the JSF-level "Application" and the Servlet-level session exist, you don't work with a per-app ServletContext the way you do in a Servlet webapp. You can hook in with ApplicationListeners and similar constructs, but they're still based on the lifecycle of the XPages app, which comes into existence only on the first request and dies (usually) half an hour after the last.

These, plus the specifics of Domino data access, combine to make the "XAgent" - an abomination of a concept - the catchall replacement for specialized rendering, batch processing, and even scheduled tasks.

XPages the View Engine

These are all accidents of history, though. They stem from the firm requirement that existing Domino HTTP behavior remain intact even with its brain transplant, as well as the "soft" requirement that XPages in an NSF pretend to be "forms with repeats and partial refresh".

At its core, XPages is "just" a web view engine: its only job is to accept a request from an HTTP client and return some HTML. The concepts it uses to accomplish this - components, renderers, managed beans, themes - are all incidental to the main task. This is the "V" part of MVC. Admittedly, even without the NSF compromises, XPages bleeds beyond its assigned third of the triad, and it inherited this from JSF. JSF is also billed as MVC, but it completely subsumes the "Controller" part and partially eats the "Model" part with its bean management.

Still, though, even a domineering framework like JSF slots in as just one component of a normal webapp, rather than being the whole thing as XPages is in an NSF. For example, take the app behind this blog, which partially looks like this:

(Image: Java and JSP resources in the blog)

It uses JSP as its view template engine, but is it a "JSP app"? Not really. The fact that it uses MVC 1.0 is more important to understanding it, but that's really an extension to JAX-RS. You could make a strong case that it's a "JAX-RS app", especially when you expand the "Services" section in Eclipse:

(Image: REST services in the blog)

That covers more of it, but still leaves parts out. It has application-wide beans by way of CDI, entirely-UI-free scheduled tasks kicked off from a ServletContextListener, and core business logic and model objects that are kept in a module that doesn't even know about the Servlet API.

It's layered, but the layers are explicable and the distinctions create a tremendous amount of flexibility. I could, if I wanted, change to Thymeleaf for the front end with essentially no friction, JSF or Vaadin with only mildly more, or to a client JS REST UI by chopping off the top two layers outright.

Okay, So?

This description isn't a call to action - there's nothing inherently wrong about an XPages app in an NSF, especially a small-to-medium one - but this will be an important part of the conceptual groundwork in the months to come. To figure out what to do with all these piles of XSP markup and framework-specific business logic we have, we'll have to do a lot of deconstruction.

Java ClassLoaders

Jun 5, 2020, 10:47 AM

Tags: java osgi xpages
  1. Java Services (Not the RESTful Kind)
  2. Java ClassLoaders
  3. Managed Beans to CDI
  4. The Myriad Idioms For Finding Implementations In Java

In my last post, I casually mentioned the concept of ClassLoaders a couple times, and I think that they deserve their own post. ClassLoaders are exactly the kind of thing that, once you do Java long enough, you start to take for granted, but which isn't necessarily immediately obvious for people not as immersed.

The Basics

The core job of a ClassLoader is what it says on the tin: it loads classes. Say you have this bit of code using a class from the core Java library:

long now = System.currentTimeMillis();

This uses two types: long, which is a built-in primitive type and not a class at all, and java.lang.System. long doesn't have to come from anywhere, but java.lang.System does, and that's the job of a ClassLoader. In this case, the Java VM will ask the contextual ClassLoader for a class by that name, and the ClassLoader will (at least in Java 8 - things got weird later) look into the core library and find a file named "java/lang/System.class" within "rt.jar", parse its binary contents into an executable class, and hand it back to the VM.

ClassLoaders are also the source of two problem reports you've likely seen: ClassNotFoundException and NoClassDefFoundError. These two basically mean the same thing: the running app tried to load a class by name, but it wasn't found - they just differ in context (the former generally when a class is asked for dynamically, the latter when it's referenced as part of compiled code). This sort of thing can occur when you write code using a class that's present in your development environment but is not present when run later - among XPages developers, this happens quite a bit when people drop some JARs into jvm/lib/ext in their Designer installation but don't do the same on Domino.

Resource Loading

In addition to finding classes, ClassLoaders have a few other tasks, the main one of which of interest to us is loading resources. In my previous post, I talked about how ServiceLoader looks for service files by a given name, like META-INF/services/com.sprockets.data.FizzBuzzConverter. It does this by checking with the current ClassLoader and calling cl.getResources("META-INF/services/com.sprockets.data.FizzBuzzConverter"), which will return a listing of resources from JARs (and JAR-like sources, like an NSF) that it knows about matching that name. In that way, multiple JARs can declare services with the same name without conflicting.
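
For example, a lookup along those lines (using that same FizzBuzzConverter service name) looks roughly like this:

// getResources throws IOException, so call this from somewhere that can handle it
ClassLoader cl = Thread.currentThread().getContextClassLoader();
Enumeration<URL> resources = cl.getResources("META-INF/services/com.sprockets.data.FizzBuzzConverter");
while(resources.hasMoreElements()) {
	URL serviceFile = resources.nextElement();
	// Each URL points into a different JAR (or JAR-like source) that declares the service
	System.out.println(serviceFile);
}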

ClassLoader Trees

Though conceptually your running program has "a ClassLoader", in reality it's almost definitely a chained series of ClassLoaders, rooted in the core system ClassLoader and then drilling down more specifically to your app's code. For example, take an application running in Apache Tomcat. In that case, Tomcat's documentation describes four basic tiers:

  • The core JVM ("bootstrap") ClassLoader that comes with any running Java program. As Tomcat's docs note, this implementation may vary
  • The central ("system") ClassLoader that contains the "just above the metal" classes, such as those you may add in the "CLASSPATH" environment variable
  • The Tomcat-specific ("common") ClassLoader, containing classes shared among all running applications. For example, javax.servlet.Servlet would be found here
  • Your app's ClassLoader, containing classes you write as well as any third-party JARs you bundled into your WAR file in WEB-INF/lib

When your code executes and requests a new class, the runtime will check first with your app's local ClassLoader and return what it finds there if present - if the class isn't present there, then that ClassLoader will delegate up to its parent, and so forth until it either finds a class or hits the root and throws a NoClassDefFoundError.

The way that each app has its own ClassLoader is also how you can have multiple apps on the same server that can each know about common core classes, but don't step on each others' toes with their own custom classes. Though javax.servlet.Servlet is the same class for two running apps, one app could have an internal class named "com.example.SomeBusinessLogic" and it wouldn't be visible by other running apps.
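
To see that chain on a given server, you can walk it directly (using the hypothetical SomeBusinessLogic class from above as a starting point):

// Walk up the chain from an app class's loader toward the root
ClassLoader loader = SomeBusinessLogic.class.getClassLoader();
while(loader != null) {
	System.out.println(loader);
	loader = loader.getParent();
}
// A null parent conventionally represents the bootstrap ClassLoader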

Dynamic ClassLoaders

Though the normal case of ClassLoaders is that sort of "do I have this class? If not, ask my parent" chain, the fact that a ClassLoader is itself a custom Java class means that its behavior can be pretty arbitrary. This is present in a normal web app ClassLoader: it knows to look in the WEB-INF/classes path within the WAR file instead of the normal behavior of checking from the root of a JAR, and it knows how to look in WEB-INF/lib for additional JARs to search.

In an XPages application, the active ClassLoader is roughly similar to Tomcat's app ClassLoader example, but with a couple additional capabilities. The main one is that the NSF's ClassLoader - an instance of com.ibm.domino.xsp.module.nsf.ModuleClassLoader - has knowledge of how to treat an NSF as if it were a WAR file. In Designer's "Package Explorer" pane, you get a view of the NSF that makes it look basically like a normal WAR, where classes go in WEB-INF/classes and JARs go in WEB-INF/lib. However, it's still really a nebulous pool of notes floating around, and so the ModuleClassLoader does design-collection lookups for file resources of various types and loads the class bytecode or resource data from there.

It also, in a move presumably designed to inconvenience me personally, has explicit restrictions on what classes it can load: even though it knows about, for example, org.eclipse or com.ibm.domino.napi classes, it has a check to explicitly bar loading these. That's why, even if you configure Designer to see those classes and compile XPages code that references them, they won't be available at runtime.

OSGi ClassLoaders

OSGi ClassLoaders are a particular kind of dynamic ClassLoader. In addition to the normal hierarchical view of the world, they take on special responsibilities for ensuring that your OSGi module (which an XPages app kind of is) sees classes from other bundles based on its dependency rules, but not necessarily their resources. For example, take rules like this in an OSGi bundle's META-INF/MANIFEST.MF:

Require-Bundle: com.ibm.xsp.core
Import-Package: com.ibm.commons.util

These simple lines hide some beguiling complexity. With this definition, a running class in your bundle will be able to see:

  • All classes at the system level, such as java.lang.System
  • All classes contained within and exported by "com.ibm.xsp.core", such as com.ibm.xsp.FacesExceptionEx and com.ibm.xsp.url.UrlHandler
    • There's also special behavior going on here, because those classes are contained within an embedded JAR in the bundle, referenced as Bundle-ClassPath: lwpd.xsp.core.jar - this is an OSGi-ism
    • Though this bundle lists all of its packages in its Export-Package header, this is not a requirement: it's common for an OSGi bundle to have classes internally that are not accessible from outside
  • All classes exported by its bundle dependency that it marks as visibility:=reexport: "com.ibm.pvc.servlet" and "com.ibm.designer.lib.jsf"
    • This is why you can have a dependency on just "com.ibm.xsp.core" and access javax.faces.context.FacesContext even though it's not in the core XSP bundle
    • This is also transitive, though neither of those re-exported dependencies themselves re-export any dependencies
  • The classes from the "com.ibm.commons" bundle in the "com.ibm.commons.util" package. This means that com.ibm.commons.util.StringUtil is visible, but com.ibm.commons.extension.ExtensionManager is not, despite both being within the same bundle JAR

There are also tons of weird visibility and dependency details as well in OSGi, but that's the gist of it. Note that I specifically mentioned that the resources aren't visible. Though the Require-Bundle: com.ibm.xsp.core line makes all classes exported from the XSP core visible to your code, calling ServiceLoader.load(com.ibm.xsp.acf.HtmlFilteringFactory.class) will not find the DefaultHtmlFilteringFactory implementation declared in there, even though it's done in a ServiceLoader-compatible way. This is why IBM Commons papers over that difference with its "plugin.xml" extension declarations. OSGi actually contains a Service Loader Mediator specification to bridge this gap, but Domino doesn't include an implementation of that part.

Fragment Bundles

There's one special case with OSGi bundles that's worth highlighting: fragments. Normally, each bundle effectively has its own ClassLoader space, walled off from all others by OSGi's broker. However, if you declare your bundle as having a Fragment-Host of another active bundle, your code acts as if it's within the parent, gaining access to not just all of the parent bundle's classes, but also its non-class resources. Moreover, this works in the reverse: the parent also gains access to the fragment's classes and resources, though it generally won't "know" about them at the time of development.

This is a technique that's come in handy for me many times, in particular in cases like the XPages Jakarta EE Support project, where API bundles will use ServiceLoader to find their implementations. In those cases, one of the ways I get it to work in OSGi is to create a fragment bundle out of the implementation, meaning that the bundles remain distinct but now the API can find the META-INF/services files and classes it needs to operate.
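
A fragment's manifest can be as minimal as this (the bundle names here are hypothetical):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.api.impl
Bundle-Version: 1.0.0
Fragment-Host: com.example.api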

This has a good number of other uses, too, such as providing platform-specific native code to an otherwise-platform-independent core bundle. The Notes.jar wrapper used in XPages land uses this type of technique. Though, to my knowledge, Notes.jar doesn't contain any actual native code, it's still delivered in two pieces:

  1. The "com.ibm.notes.java.api" bundle, which lists all of the exported packages but holds no code itself
  2. The "com.ibm.notes.java.api.win32.linux" bundle, which contains the actual Notes.jar and declares Fragment-Host: com.ibm.notes.java.api
    • I'm not sure why this is the case, but maybe Notes.jar is different on System i or something

If you have a bundle that needs access to lotus.domino classes, you then can either do Require-Bundle: com.ibm.notes.java.api or Import-Package: lotus.domino and it'll be resolved out of the fragment. There's also an Eclipse-ism in here: the first bundle has Eclipse-ExtensibleAPI: true, which is a tip-off to the IDE that it should specifically allow fragments to contribute available classes to the development environment. This is generally required when developing with Eclipse's plug-in tooling (shared with Designer), but it's not actually enforced one way or the other by the runtime.

Wrapping It Up

This is all definitely in the category of "you don't normally need to worry about it, but it's very helpful to know", like the previous ServiceLoader topic. Until you're implementing some low-level stuff, you're not likely to interact with the ClassLoader directly, especially to a level beyond calling Thread.currentThread().getContextClassLoader() or Foo.class.getClassLoader(). Knowing about it can help make clear what's going on in situations where a class shows up in development but not at runtime, or when the XPages ClassLoader tries to get too fancy and throws up on itself.

Java Services (Not the RESTful Kind)

Jun 4, 2020, 4:42 PM

Tags: java
  1. Java Services (Not the RESTful Kind)
  2. Java ClassLoaders
  3. Managed Beans to CDI
  4. The Myriad Idioms For Finding Implementations In Java

The concept of "services" in Java is fairly critical, but, especially with the XPages stack we've grown used to, the term covers quite a few different technologies.

Definition

Before I continue on, I want to make clear what I mean by "service" in this context. It's unrelated to REST services or even remote access of any kind; instead, it's about how an app can find implementations of some kind of class or interface within its runtime.

A very-common type of this sort of thing is a data adapter or converter. Say you have your own object FizzBuzz that you use within your app, one that represents data storable in multiple ways. One way to handle converting from various types to FizzBuzz would be a giant if tree, like:

public FizzBuzz convert(Object input) {
	if(input instanceof String) {
		// ...
	} else if(input instanceof JsonObject) {
		// ...
	} else if(input instanceof org.w3c.dom.Document) {
		// ...
	} else {
		throw new IllegalArgumentException("Cannot convert to FizzBuzz: " + input);
	}
}

That'd work well enough, especially for a small app. You can imagine, though, how this might get out of hand in an even moderately-complicated case, with the if tree turning into a tangled mess. Moreover, this doesn't allow for any extensibility without directly modifying the convert method - any new type will have to go into this, making management of a large team more cumbersome and completely cutting off the possibility of third-party additions.

So, to keep things scalable, it'd make sense to create an interface that would specify a generic way to convert some type of object to a FizzBuzz:

package com.sprockets.data;

public interface FizzBuzzConverter {
	boolean canConvert(Object o);
	FizzBuzz convert(Object o);
}

Then the code that actually needs to convert would look more like this:

public FizzBuzz convert(Object input) {
	Stream<FizzBuzzConverter> converters = moreOnHowToFetchLater();
	return converters
		.filter(converter -> converter.canConvert(input))
		.findFirst()
		.map(converter -> converter.convert(input))
		.orElseThrow(() -> new IllegalArgumentException("Cannot convert to FizzBuzz: " + input));
}

In a small case like this, that's not necessarily going to be a big deal, but it doesn't take too long for it to become desirable to break it apart. Take the case of JAX-RS providers, which do exactly this kind of entity conversion when processing HTTP requests. Everything over HTTP comes in as plain text (more or less), but programmers want to be able to accept an int input parameter, or to automatically convert their custom business-logic object to JSON. Without a separation like this, the code to handle all known types would be impossible to manage all in one place, and there'd be no way to handle custom types that didn't exist when the code was written.

Types

There are quite a few distinct types of services that I've run across, and I'll list them here in roughly the likelihood that a programmer coming from an XPages background will encounter them.

ServiceLoader Services

This is the most-common kind of service you're likely to encounter in a Java application, and you can generally identify it by its use of the META-INF/services directory inside a JAR. java.util.ServiceLoader itself was added to Java in 1.6 but was designed to codify habits that had become common beforehand.

The way this works is designed to be simple: you create a plain-text file within META-INF/services named after the service class you're implementing, and then put the names of your implementing classes within it, one on each line. So, in our above example, you'd create a file named META-INF/services/com.sprockets.data.FizzBuzzConverter and fill it with something like:

com.sprockets.data.impl.StringFizzBuzzConverter
com.sprockets.data.impl.JsonObjectFizzBuzzConverter
com.sprockets.data.impl.DomDocumentFizzBuzzConverter

Code that calls ServiceLoader.load(FizzBuzzConverter.class) will find all of those files within the current ClassLoader space (more fun with that down the line) and instantiate the named classes, returning an Iterator to loop through them.
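
Tying it back to the earlier convert example, the moreOnHowToFetchLater() placeholder can be filled in with a ServiceLoader call along these lines:

public FizzBuzz convert(Object input) {
	// Finds and instantiates every implementation declared in META-INF/services
	ServiceLoader<FizzBuzzConverter> converters = ServiceLoader.load(FizzBuzzConverter.class);
	for(FizzBuzzConverter converter : converters) {
		if(converter.canConvert(input)) {
			return converter.convert(input);
		}
	}
	throw new IllegalArgumentException("Cannot convert to FizzBuzz: " + input);
}

In an OSGi environment like Domino's, though, that exact call won't see declarations from other bundles, which is where the next mechanism comes in.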

IBM Commons Services

Within the XPages stack, the bulk of service interactions are managed by the IBM Commons ExtensionManager class, which is a generic way to ask for a service type by String name.

In the normal case, this acts as a slightly-old-timey variant of the now-standard ServiceLoader mechanism, likely by dint of preceding the standard's introduction. Like ServiceLoader, it looks for files with the name you pass it in the META-INF/services directory in your app and adds instances of all the names it finds within.

What makes it important (and what gives it longevity in non-XPages OSGi apps on Domino) is that it also bridges into the Equinox OSGi service infrastructure when available and looks for services registered there by the com.ibm.commons.Extension name. The reason this is important is that, in an OSGi context, one bundle can't by default see the files in another bundle in its ClassLoader, which means that services registered via META-INF/services in one won't be picked up by a ServiceLoader call in another.

Since XPages's life spanned a pre-OSGi era and the 8.5.2 "Extensibility API" era, it bears the signifiers of both, smoothly papered over by IBM Commons:

(Image: com.ibm.xsp.core bundle services)

The Equinox loader looks in both places, in fact, which is why you can declare XPages services within an application using META-INF/services as well as within an OSGi bundle's plugin.xml file.

Equinox plugin.xml Extensions

I mentioned above that IBM Commons bridges the difference between ServiceLoader and Equinox, but now I'd better go into a little more detail about the latter.

"Equinox" refers to the particular OSGi implementation that underlies both Eclipse-the-IDE (and thus Notes) and Domino's web stack. While Equinox is the fully-fledged reference implementation of OSGi, plugin.xml is specific to it and I believe pre-dates Eclipse's migration to OSGi (which we still see reflected in 9.0.1FP10+'s plugin trouble).

plugin.xml used to house a lot of information that was moved over to META-INF/MANIFEST.MF, but its primary remaining function is to declare services for the Equinox environment. Eclipse itself uses this extensively, and it remains the primary way to extend the IDE's capabilities.

One important thing to note here is that plugin.xml's extensions aren't limited to just providing a service class implementation. While many do that, it's also used heavily to provide configuration information without executable classes at all.

Multi-type "FactoryFinder" style

This type of service locator is similar to the IBM Commons ExtensionManager, but is usually confined to an individual domain, like a specific Jakarta EE spec. The way this idiom works is that there's a central coordinating class, usually named FactoryFinder, whose job it is to locate implementations of services from one or more sources, and often using a known fallback implementation.

I encountered one of these when diving deep into the XPages stack. javax.faces.FactoryFinder is responsible for finding implementations of very-low-level entities, like the services that spit out JSF applications at the start of initialization, or those that create FacesContext objects.

These will often have specialized behavior. For example, the standard SOAP API looks through a system property, then an external "jaxm.properties" file, then ServiceLoader, then an older META-INF/services name, then OSGi, and finally falls back to a default class name.

Java 9 Modules

I have to admit that I haven't actually used this, but it's too important to skip. Java 9 and above include a module system whose modules are sort of like OSGi bundles in that they let you declare what your module exports and other characteristics about its interactions with the outside world.

Along with this support came a new way to declare services. Since this is also baked in to Java itself, it gets the advantage of also working with ServiceLoader. In this case, instead of writing a text file in META-INF/services, you declare the type of service you're providing and the class implementing it in the module definition. This not only unifies the service with other module information, but it also makes it more type-safe and programmatically clear. It's neat-looking.
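
As a sketch (reusing the hypothetical FizzBuzz names from above), a providing module's module-info.java might look like this, while a consuming module would add a matching uses directive:

// module-info.java in the implementation module
module com.sprockets.data.impl {
	requires com.sprockets.data;

	provides com.sprockets.data.FizzBuzzConverter
		with com.sprockets.data.impl.StringFizzBuzzConverter,
		     com.sprockets.data.impl.JsonObjectFizzBuzzConverter;
}

// ...and the consuming module declares its interest with:
//     uses com.sprockets.data.FizzBuzzConverter;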

OSGi

I already mentioned that Equinox can use the "plugin.xml" file to do cross-bundle services in OSGi, but I also mentioned that it's specific to that one implementation and not actually part of the OSGi spec.

Instead, OSGi has a couple (for some reason) standard mechanisms for providing and consuming services. I encountered these mechanisms in practice when I created a UserRegistry implementation for Open Liberty.

In my first version, I declared my services programmatically in the bundle's activator (which is a class that you can write to run when your bundle is loaded/unloaded). In that way, you can dynamically tell the runtime that your bundle provides any number of services.
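
That programmatic route looks roughly like this - DominoUserRegistry is a hypothetical implementation class here, but registerService itself is the standard BundleContext call:

public class Activator implements BundleActivator {
	private ServiceRegistration<UserRegistry> registration;

	@Override
	public void start(BundleContext context) {
		// Tell the OSGi runtime that this bundle provides a UserRegistry implementation
		registration = context.registerService(UserRegistry.class, new DominoUserRegistry(), null);
	}

	@Override
	public void stop(BundleContext context) {
		if(registration != null) {
			registration.unregister();
		}
	}
}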

In my second revision, I changed to using what's dubbed Declarative Services. These do basically the same thing, but are defined for the runtime in a combination of the META-INF/MANIFEST.MF file and some service-definition files in the bundle - essentially, like a re-thought version of plugin.xml.

Summary

Okay! So, what's the upshot? Well, in my work, I use the first two all the time: inside a non-Domino app, META-INF/services is king; when working with Domino, IBM Commons ExtensionManager handles everything I need.

As far as implementing your own services, it's definitely a critical concept to keep in your pocket (I can only assume it's somewhere in Design Patterns). You could certainly go crazy with it and make a real mess of incomprehensible indirection, but it's probably useful more often than you'd think at first. Give it a shot next time you find yourself writing a big if tree with complicated branches.

My Active Open-Source Projects

May 8, 2020, 11:01 AM

Over the years, I've spawned a number of open source projects, both in my personal GitHub account and in OpenNTF's, but it'd be fair to say that not all of them are actively updated or see common use.

Nowadays, I have a set of tools that I actively develop (either solo or with a team) and which make up critical parts of my development infrastructure, and I figured it'd be useful to give an overview of them.

NSF ODP Tooling

This is my current favorite project by virtue of how much time it saves me every day and for its future potential. I wrote a series on this project a while ago, so I won't go over all the details of it here. The gist of it, though, is that this project lets me have a Maven tree for one of my big client projects that includes an array of OSGi bundles and have a single Maven install build all of those, assemble an update site with them and a bevy of dependencies, compile over a dozen NSFs (most with complicated Java code), and end up with a distribution ZIP containing importable update sites and deployable NTFs, all from my Mac with no Designer involved.

I have visions of this project forming the central infrastructure for a post-Designer world, and that's shaping up in a couple ways so far. One of those ways is the DXL and XPages LSP contributor component that allows for pretty-solid editing of, uh, DXL and XPages in tools that use the XML Language Server, such as Eclipse and Visual Studio Code. And that plays in to the other project I use daily, the XPages JEE Runtime.

XPages JEE Runtime

This is the project that started as a frenzied descent into madness and which I eventually hammered into shape enough to run real apps (with a side path where I also got XPages running on Android and iOS).

Now, this is the main way I do development on that client app. I have an Open Liberty server set up in Eclipse and a webapp variant of the XPages app that points to the same XPages, Custom Controls, and Java code from the NSF's ODP representation, and I have some hooks to direct all database references to the DB running in my dev VM. Since it's not a 100% perfect representation of the Domino environment, I still need to periodically sync it back to the NSF and test how it runs in there (and with the OSGi environment that I'm not using in the webapp), but I'm experienced enough at this point to generally know the potential pitfalls.

There's also a dark part of me that keeps being tempted to actually use this for production at some point, since it works so well now, and pushes aside so many hassles of loading and deploying on Domino itself. That would play in to the next project, the one that's hosting this very blog right now.

Domino Open Liberty Runtime

This is my project where I set up a sidecar Open Liberty instance alongside Domino, which allows for using native local NSF access while also having a full, modern Jakarta EE server with all the bells and whistles.

Though this project is a bit more staid than some of the others, I've gone in and made some interesting improvements lately. One was my journey into RunJava the other month, which I still think is a little too cute to put into production, but which actually should do the job just fine.

The other improvement, though, has some more immediate benefits. I added the ability to specify and auto-download AdoptOpenJDK Java runtimes to use instead of Domino's provided JVM. These runtimes still gain the same benefit of running with local Domino NSF access, but aren't constrained by Domino's once-again-long-in-the-tooth JVM. So you can, for example, specify that you'd rather bring in Java 14 and the runtime will auto-download it for you and launch Liberty using that. I haven't quite rolled that one out to this blog server yet, but it's on the docket. I'd love to bring in Java records, for example, and now there's nothing stopping me from doing so.

XPages Jakarta EE Support

I didn't have a good segue for this one.

This is a project I started a couple years ago initially as a way to expand on Martin Pradny's original plugin to make writing JAX-RS resources inside an NSF easy. It's grown into my project to essentially try to bring the XPages runtime up to code, at least in the parts that I want to use for work. Though it's constrained by the hard limit of the ancient Servlet API Domino's container provides, I've been able to bring in some important updates for EL and JAX-RS, and also allow for using CDI for managed beans and JAX-RS resources.

CDI is actually a whole huge topic that I have some draft posts for. As far as Java development is concerned, CDI is Important with a capital "I".

ODA

There's not a lot of fanfare with the OpenNTF Domino API, but that's largely intentional: as an improvement on the normal lsxbe API, it does its job and doesn't currently need any radical changes. I'm mostly including it here because, though it doesn't change much, it's periodically updated to cover the sprinkling of new Java methods HCL adds with each release.

generate-domino-update-site

While I don't use this project as such daily, I sure do benefit from its output. This is the Maven plugin that generates new update sites, which is required for up-to-date OSGi development for Domino in lieu of IBM/HCL ever updating their own release.

Other than being something I run every new Domino release, I've also made some improvements recently. Some of those just related to improving behavior in edge cases, but a nice one I added the other week was downloading of source components from Eclipse Neon. Though the source for the XPages runtime and the whole Expeditor scaffolding remain unavailable, I am able to look up and download the source for the unmodified Eclipse components, and this results in a more-pleasant development experience in Eclipse.


I have a few other projects that I use periodically, such as the NSF File Server, but those are the big-ticket ones.

Lessons From Fiddling With RunJava

Mar 3, 2020, 9:49 AM

Tags: java websphere

The other day, Paul Withers wrote a blog post about RunJava, which is a very-old and very-undocumented mechanism for running arbitrary Java tasks in a manner similar to a C-based addin. I had vaguely known this was there for a long time, but for some reason I had never looked into it. So, for both my sake and general knowledge, I'll frame it in a time line.

History

I'm guessing that RunJava was added in the R5 era, presumably to allow IBM to use existing Java code or programmers for writing server addins (with ISpy being the main known one), and possibly as a side effect of the early push for "Java everywhere" in Domino that fell prey to strategy tax.

Years later, David Taib made the JAVADDIN project as a "grown up" version of this sort of thing, bringing the structure of OSGi to the idea. Eventually, that morphed into DOTS, which became more-or-less supported in the "Social Edition" days before meeting a quiet death in Domino 11.

The main distinction between RunJava and DOTS (other than RunJava still shipping with Domino) is the thickness of the layer above C. DOTS loads an Equinox OSGi runtime very similar to the XPages environment, bringing in all of the framework support and dependencies, as well as services of its own for scheduled tasks and other options. RunJava, on the other hand, is an extremely-thin layer over what writing an addin in C is like: you use the public static void main structure from runnable Java classes and are given a runNotes method, directly equivalent to the main and AddinMain functions used by C/C++ addins.

Utility

Reading back up on RunJava got my brain ticking, and it primarily made me realize that this could be a perfect fit for the Open Liberty Runtime project. That project uses the XPages runtime's HttpService class to load immediately at HTTP start and remain resident for the duration of the lifecycle, but it's really a parasite: other than an authentication-helper servlet, the fact that it's running in nHTTP is just because that's the easiest way to run complicated, long-running Java code. For a while, I considered DOTS for this task, but it was never a high priority and has aged out of usefulness.

So I decided to roll up my sleeves and give RunJava a shot. Fortunately, I was pretty well-prepared: I've been doing a lot of C-level stuff lately, so the concepts and functions are familiar. The main run loop uses a message queue, for which Notes.jar provides an extremely-thin wrapper in the form of lotus.notes.internal.MessageQueue. And, as Paul reminded me, I had actually done basically this same thing before, years ago, when I wrote a RunJava addin to maintain a Minecraft server alongside Domino. I'd forgotten about that thing.

Lessons

Getting to the thrust of this post, I think it's worth sharing some of the steps I took and lessons I learned writing this, since RunJava is in a lot of ways much more hostile a place for code than the cozy embrace of Equinox.

#1: Don't Do This

The main lesson to learn is that you probably don't want to write a RunJava task. It was already the case that DOTS was too esoteric to use except for those with particular talent and needs, and that one at least had the advantage of being kind-of documented and kind-of open source. RunJava gives you almost no affordances and imposes severe restrictions, so it's really just meant for a situation where you were otherwise going to write an addin in C but don't want to have to set up a half-dozen compiler toolchains.

#2: Lower Your Dependencies Dramatically

The first big general thing to keep in mind is that RunJava tasks, if they're not just a single Java class file, are deployed right to the main Domino JRE, either in jvm/lib/ext or in ndext. What this means is that any class you include in your package will be present in absolutely everything Java-related on Domino, which means you're in a minefield if you want to bring in any logging packages or third-party frameworks that could conflict with something present in the XPages stack or in your own higher-level Java code.

This is a fiddlier problem than you'd think. A release or so ago, IBM or HCL added a version of Guava to the ndext folder and it wreaked havoc on the version my client's app was using (which I think came along for the ride from ODA). You can easily get into situations where one class for a library is loaded from XPages-level code and another is loaded from this low level, and you'll end up with mysterious errors.

Ideally, you want no possible class conflicts at all. I took the approach of outright white-labeling some (compatibly-licensed) code from Apache and IBM Commons to avoid any possibility of butting heads with other code on the server. I was also originally going to use the Darwino NAPI or Domino JNA for a nicer Message Queue implementation, but scuttled that idea for this reason. It's Notes.jar or bust for safe API access, unfortunately.

#3: Use the maven-shade-plugin

This goes along with the above, but it's more a good tool than a dire warning. The maven-shade-plugin is a standard plugin for a Maven build that lets you blend together the contents of multiple JARs into one, so you don't have to have a big pool of JARs to copy around. That on its own is handy for deployment, but the plugin also lets you rename classes and aggregate and transform resources, which can be indispensable capabilities when making a safe project.

#4: Make Sure Static Initializers and Constructors are Clean

What I mean by this one is that you should make sure that your JavaServerAddin subclass does very little during class loading and instantiation. The reason I say this is that, until your class is actually loaded and running, the only diagnostic information you'll get is that RunJava will say that it can't find your class by name - a message indistinguishable from the case of your class not even being on the server at all. So if, for example, your class references another class that's missing or unresolvable at load time (say, pointing at a class that implements org.osgi.framework.BundleActivator, to pick one I hit), RunJava will act like your code isn't even there. That can make it extremely difficult to tell what you're doing wrong. So I found it best to make very little static other than JVM-provided classes and to delay creation/lookup of other objects and resources (say, translation bundles) until it was in the runNotes method. Once the code reaches that point, you'll be able to get stack traces on failure, so debugging becomes okay again.

#5: Take Care With Threads When Terminating

The Open Liberty runtime makes good use of java.util.concurrent.ExecutorServices to run NotesThread code asynchronously, and I'll periodically execute even a synchronous task in there to make sure I'm working with a properly-initialized thread.

However, when terminating, these services will start to shut down and reject new tasks. So if, for example, you had code that executes on a separate thread and might be run during shutdown, that will likely fail silently and can cause your addin to choke the server.

#6: That Said, It's a Good Idea to Use Threads

A habit I picked up from writing Darwino's cluster replicator is to make your addin's main Message Queue loop very simple and to send messages off to a worker thread to handle. Doing this means that, for complex operations, the server console and the user won't sit waiting on a reply while your code churns through an individual message.

In my case, I created a single-thread ExecutorService and have my main loop immediately pass along all incoming commands to it. That way, the command runner is itself essentially synchronous, but your queue watcher can resume polling immediately. This keeps things responsive and avoids the potential case of the message queue filling up if there's a very-long-running task (though that's less likely here than if you're drinking from the EM fire hose).
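
In sketch form, with readNextCommand and handleCommand being hypothetical stand-ins for the queue polling and the actual command logic:

// Set up once, when the addin starts
ExecutorService commandRunner = Executors.newSingleThreadExecutor();

// Inside the main message-queue loop
String command = readNextCommand();
if(command != null) {
	// Hand the work off so the queue can be polled again immediately
	commandRunner.submit(() -> handleCommand(command));
}

// At shutdown
commandRunner.shutdown();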

#7: Really, Don't Do This

My final tip is that you should scroll back up and heed my advice from #1: it's almost definitely not worth writing a RunJava addin. This is a special case because a) the goal of the project is to essentially be a server addin anyway and b) I was curious, but normally it's best to use the HttpService route if you need a persistent task.

It's kind of fun, though.

Targeting Domino for Webapps Incidentally

Feb 11, 2020, 5:26 PM

Tags: java maven

I recently had occasion to break ground on a new web project that uses a Notes runtime and has a web front end, and I figured it would be a perfect occasion to structure it in a way that is clean, portable, and, while it will run on Domino, doesn't have to use Tycho.

I ended up coming up with a setup that I'm pretty happy with, and so I put up an example on GitHub for anyone else to use as a reference for similar cases.

What Is This, Specifically?

This is an application that consists of a couple main concepts:

  • Maven for project structure and dependencies
  • Core "plain Java" module that contains code that's intended to be portable and doesn't even know it's in a web app
  • JAX-RS-based REST API
  • Client JS web UI written in Stencil and transpiled with Node
  • Standard webapp project for JEE containers such as Liberty
  • Domino project to wrap the app up as an OSGi bundle

What this is specifically not is an XPages project. And, while it can use a Notes runtime and access NSFs, it's also not something that will be stashed inside an NSF, and the "Notes" part is optional and really only included here to show it's possible. The idea is that this is a standard web app first and a Domino thing second.

Project Structure

The project is organized as a Maven module tree like so:

  • domino-webapp: The parent container project just for configuration
    • core
      • webapp-core: This is the main place for UI-independent business logic
    • web
      • webapp-api-jaxrs: This contains the JAX-RS-based REST API, which exposes the core business logic to the web
      • webapp-webui: This contains a Stencil-based JavaScript app. It doesn't need to be Stencil specifically, or even NPM-based at all, but I find Stencil to be a pretty good choice for this
      • webapp-jee: This is the JEE-container web app, containing very little code of its own and just intended to output a WAR
    • domino
      • webapp-domino: This is the Domino equivalent to the previous project, but contains a chunk of adapter code to get things working, plus some Maven configuration to generate an appropriate OSGi bundle
      • webapp-dist-domino: This is a distribution project that pulls in the Domino OSGi bundle and creates a p2 repository, and then a "site.xml" file for the benefit of importing into an NSF Update Site

How the OSGi Part Works

In going deeper into what's going on, I'm going to start at the end: how to go from a normal web app to a Domino-friendly OSGi bundle. If you're not familiar with what I mean by "web app" in general and in a Domino plugin in particular, it's the sort of thing that Sven Hasselbach wrote a series about a few years back: a Java/Jakarta EE Servlet application using the "WebContainer" extension point in the Domino HTTP runtime.

Traditionally, these projects are built as plain-old Eclipse projects, where you drop a bunch of JARs for your framework of choice into a plug-in project and write your code in there, using Eclipse's Plug-in Development Environment. This works well enough as far as it goes, but puts constraints on how you do development, in particular pretty much requiring Tycho if transitioned to a Maven structure, which would then have massive penalties for the rest of your project.

Fortunately, the thing about an OSGi bundle is that it's really just a JAR file with special metadata, and so it doesn't actually have to be created with a toolchain that has full knowledge of OSGi. As long as the required files end up in the right places inside the JAR (which is in turn just a ZIP file), you're good to go.

In this case, I used the maven-bundle-plugin to decorate the "MANIFEST.MF" file with appropriate OSGi metadata and, importantly, to embed all the compile-scoped project dependencies for me. That second part means that Maven will handle the job of steps 7-10 in Sven's example: it'll bring in the dependencies from Maven, copy them into the right place in the final JAR, and set up the Bundle-ClassPath header to point to them.

It's important to note the "compile-scoped" qualifier there. The Maven projects themselves also depend on a couple things that I know will be present on Domino already, namely IBM Commons, Apache Wink, the Web Container adapter, and Notes.jar. Though it'd probably work if I copied those into the JAR, that would be asking for trouble unnecessarily, so I mark them as "provided" in Maven, and then the bundling process knows to skip over them.
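
The relevant maven-bundle-plugin configuration looks something like this - the symbolic name is hypothetical and version numbers are omitted:

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>com.example.webapp.domino</Bundle-SymbolicName>
			<!-- Copy compile-scoped dependencies into the JAR and build Bundle-ClassPath for them -->
			<Embed-Dependency>*;scope=compile</Embed-Dependency>
			<Embed-Transitive>true</Embed-Transitive>
		</instructions>
	</configuration>
</plugin>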

The other OSGi-specific element is the "plugin.xml" file, used by Domino's Equinox framework to identify that the bundle provides a web app. In this case, I put that file in "src/main/resources", where it ends up being copied to the root of the JAR. One down side here is that you have to know ahead of time what the syntax for this file is: since Eclipse won't know this is a plug-in project, you won't get the GUI shown in Sven's example.
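
For reference, the declaration has roughly this shape - treat the element names as something to double-check against a known-working example rather than gospel:

<?xml version="1.0" encoding="UTF-8"?>
<plugin>
	<extension point="com.ibm.pvc.webcontainer.application">
		<contextRoot>/examplewebapp</contextRoot>
		<contentLocation>WebContent</contentLocation>
	</extension>
</plugin>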

There are some other Domino-specific considerations, but I'll return to them later. For now, those parts will cover the OSGi "bridge".

Core: Using the Notes API

The core project doesn't have a lot going on, and that's intentional. It does, though, demonstrate how you can use the JSON-B API for JSON serialization and the Notes API for accessing NSFs and other Notes stuff.

The important parts happen in the project dependencies. The first one is simple: I want to use the JSON-B API, but I want to declare that it will be provided one way or another by the environment. The second one includes Notes.jar by way of my P2 Repository Provider since it's still not available as a normal Maven dependency.

This project contains a single class, which just gathers a bit of information about the runtime environment to be shown as a JSON object. The important part here is my use of NotesThread when calling the Notes API. Since this project can run on non-Domino containers, I can't assume that all threads will already be Notes-friendly, so I use that route. You can also call NotesThread.sinitThread() or go other ways, but I like containing the calls in a separate thread outright in simple cases.
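
For illustration, here's a minimal sketch of that pattern. The class and member names are hypothetical rather than the project's actual code, but the NotesThread mechanics are the part that matters:

import lotus.domino.NotesException;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class ServerNameFetcher extends NotesThread {
	private volatile String serverName;

	@Override
	public void runNotes() {
		// This runs on a thread that NotesThread has already initialized for the
		// Notes C API, so creating a Session here is safe
		try {
			Session session = NotesFactory.createSession();
			try {
				this.serverName = session.getServerName();
			} finally {
				session.recycle();
			}
		} catch(NotesException e) {
			throw new RuntimeException(e);
		}
	}

	public static String fetchServerName() throws InterruptedException {
		ServerNameFetcher fetcher = new ServerNameFetcher();
		fetcher.start();
		fetcher.join(); // wait for the Notes work to finish before reading the result
		return fetcher.serverName;
	}
}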

JAX-RS

The JAX-RS project is intended to contain JAX-RS configuration and resource classes, and the immediate part to note is once again the dependency set. Here, I targeted specifically JAX-RS 1.1, which is quite old, but is provided by Apache Wink on all Domino installations. I could theoretically bring in RESTEasy for a newer spec version, but 1.1 is capable enough for now and it keeps things simpler.

In the Application implementation class, I enumerate all of the resource classes used in the app. This is equivalent to the text-file-based method common in Wink apps, but it's portable across JAX-RS implementations and has the side benefit of being compiler-checked. Though it's a step up from the old Wink way, it's a big step down from the modern JAX-RS way: in newer containers, you can just let the container find your resources by looking for annotated classes automatically. That doesn't fly on Domino, though, and, while you can hack in something roughly equivalent, it's simpler for now to just enumerate the classes explicitly and remember to add them to this list.
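
To illustrate, an Application subclass of that sort looks something like the following. The resource class names here are placeholders, not necessarily the ones in the example project:

import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.core.Application;

public class ExampleApplication extends Application {
	@Override
	public Set<Class<?>> getClasses() {
		Set<Class<?>> classes = new HashSet<>();
		// Each resource (and provider) class has to be listed by hand, since the
		// container won't scan for annotated classes in this environment
		classes.add(HelloWorldResource.class);
		classes.add(ServerInfoResource.class);
		return classes;
	}
}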

There are only two resources here: a Hello World resource and one to ferry the ServerInfo object out using the JAX-RS environment's JSON serializer (more on that in a bit).
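
The Hello World resource, for its part, is about as small as JAX-RS gets. It's roughly along these lines, with the path and message being illustrative:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/helloworld")
public class HelloWorldResource {
	@GET
	@Produces(MediaType.TEXT_PLAIN)
	public String hello() {
		// A plain-text response is enough to prove the JAX-RS wiring works
		return "Hello world";
	}
}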

The Web UI

The web UI project is complicated, but mostly because NPM-based JavaScript development is complicated. This example uses Stencil, which I quite like, but you can use whatever you'd like: React, Angular, just plain ol' HTML, or whatever.

The important parts here are the use of frontend-maven-plugin to create a Node+NPM environment and build the app and the specific configuration to put the output into "src/main/resources/META-INF/resources". Doing this means that, when this project is wrapped up into a Java-less JAR file, the web resources will be in the "META-INF/resources" directory, which is special on Servlet 3 and above. Any files in there in dependency JARs like this will be visible as if they were in the main web content of your web app.

JEE App

The Jakarta EE app is the simplest of the bunch, and the only actual class in there only exists for example purposes.

The work, such as it is, all happens in the Maven configuration. I declare it to be war-packaged, to not complain if there's no "web.xml" file, to bring in the project dependencies, and to specifically include IBM Commons. It also brings in Notes.jar as a compile-time dependency.

The Domino Shims

Back in the Domino module, it's time to talk about the non-OSGi parts. I've mentioned a few things above that require no configuration in a modern web container, but which will require a bit of legwork in Domino. These are generally related to the fact that Domino's servlet container is version 2.4 and it has no idea about newer standards.

  • I bring in an Eclipse Yasson dependency to provide JSON-B support.
    • To bind that to JAX-RS, I wrote a Provider class that knows how to turn any Java object into JSON when a resource says it wants to output JSON (there's a sketch of the general shape after this list).
    • To register that provider (since it can't be picked up automatically), I subclass the Application class to include it specifically.
  • The ResourcesServlet servlet mimics the Servlet 3 behavior of serving resources out of "META-INF/resources". This specific implementation isn't the best, since it doesn't provide any caching, but it gets the job done and means that the web UI JAR will work the same way on both targets.
  • The RootServlet servlet extends the Wink default REST servlet to shim the ClassLoader around, which avoids a lot of trouble with threads used for web app requests that had previously been used for XPages requests (it's annoying, trust me).
  • I have to include an explicit reference to Wink's JAX-RS provider for some reason to do with bundle class loading.
  • Unlike in the normal web app project, I have to include a "web.xml" file, and this one registers the two servlets above.
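
To give a sense of the shape of that JSON provider, here's a simplified sketch of a MessageBodyWriter backed by Yasson's JSON-B implementation. It's an approximation of the idea rather than the project's exact class:

import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;

import javax.json.bind.Jsonb;
import javax.json.bind.JsonbBuilder;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class JsonBindingProvider implements MessageBodyWriter<Object> {
	private final Jsonb jsonb = JsonbBuilder.create();

	@Override
	public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
		// Claim anything headed for an application/json response
		return MediaType.APPLICATION_JSON_TYPE.isCompatible(mediaType);
	}

	@Override
	public long getSize(Object t, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
		return -1; // the size isn't known ahead of time
	}

	@Override
	public void writeTo(Object t, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType,
			MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream) throws IOException {
		// Hand the actual serialization off to JSON-B (Yasson)
		jsonb.toJson(t, entityStream);
	}
}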

Domino Update Site

The second part of the Domino target is the distribution project, which uses the p2-maven-plugin to create a P2 repository. That plugin is a splendid tool for your toolbox and has a lot of capabilities for auto-OSGi-ifying otherwise-non-OSGi projects. In this case, I just want to include the Domino project from the previous step, but I also want to generate an Eclipse feature for it so that it can be imported into an NSF Update Site and with some proper metadata.

I also use the p2sitexml-maven-plugin, which takes the newer-style P2 site generated by the previous step and adds a "site.xml" file, which is needed by the NSF Update Site import process if you want to include categories, which I think are nice.

Seeing It In Action

To run the app on Domino, you can do a Maven install on the root, install the update site from the distribution project onto Domino, and then visit "/exampleapp/". You'll be greeted by a vision of beauty like this:

Example Webapp Screenshot

Placeholder garishness aside, it shows the Stencil app loading, using the custom favicon, and making a call to the System Info service. That, in turn, shows using the Notes runtime to get the server's distinguished name. It's left as an exercise for the reader to then put in the thousands of hours of work to make a world-class application.

Caveats!

Since this is a Domino thing, there are important caveats.

The first is one I mentioned earlier: because we're restricted to Servlet 2.4/2.5ish, a lot of things just won't work. Indeed, not even all of the 2.4 spec works, as Filters aren't implemented for some reason. Additionally, outside of Servlet and JAX-RS 1.1, you're pretty much in "BYOB" territory when it comes to other JEE specs. In this example, I brought in Yasson for JSON-P and JSON-B and that was pretty simple, but others (say, CDI) would require a lot more fiddly work.

There's also an extra-special caveat when it comes to JSP. Domino's web container knows about JSP, but requires what it calls a "JSP compiler bridge": a special extension that allows for interpreting JSPs inside the special environment it creates. However, it doesn't actually ship with such a bridge. Notes does (and MyFaces too) for what I assume are "social" reasons, but Domino doesn't. You could probably nab the JSP stuff from Notes and drop it onto Domino, but you'd be getting into weird territory. I tried dropping Jasper into the app, but it ran into ClassLoader-casting trouble... hence the bridge, I guess.

Usefulness

Phew! Admittedly, it's a long walk to get to the point where you can just run a web app, and there are quicker ways to get there. However, I do think this is worth it. With this setup, I have a set of Maven projects that work swimmingly in Eclipse and any other Java IDE, a NPM project that acts like any other, and a JEE container front-end for rapid development. No Designer, no NSF syncing, no Plug-in Development Environment, no Tycho. And, though I don't have the full breadth of JEE available to me, JAX-RS is the main one you need for a client-JS app anyway. It's not an appropriate setup for every app, but it's really nice when it fits.

Domino 11's Java Switch Fallout

Jan 7, 2020, 10:50 AM

Tags: java
  1. AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Domino 11's Java Switch Fallout
  3. fontconfig, Java, and Domino 11
  4. Notes/Domino 12.0.2 Fallout

In Notes and Domino 11, HCL switched from using IBM's J9 Java distribution to using the OpenJ9 variant of AdoptOpenJDK. This is a lateral move technically - it's still Java 8 - and it's one presumably made in the short term to avoid licensing costs from IBM and in the long term to align better with AdoptOpenJDK.

However, OpenJ9 is not the same as J9, and AdoptOpenJDK is not the same distribution as the previous one, so there are some minor gotchas to look out for.

BASE64 and Other Internal Classes

A couple months back, I wrote a post describing this situation: namely, that some XPages and agents grew to depend on the presence of JVM-internal classes in the com.ibm namespace, particularly com.ibm.misc.BASE64Encoder and its decoder sibling.

The true fix for this is to ferret out uses of these classes in your code base, but that can be difficult. If you have to maintain legacy code, I made a small shim Jar you can drop on your server to map the two BASE64 classes to their sun.misc versions. I intentionally use those classes, even though they're also not for public use, both because they have the same semantics as the IBM ones and to reinforce that the best solution is to use the vendor-independent java.util.Base64 class.

java.pol

It's been fairly-common practice for a little while now to create a file named "java.pol" in the Java installation directory to loosen the security policy and get around Domino's bizarrely-strict interpretation of the rules. This came into vogue in favor of editing "java.policy" because this file was (usually) not overwritten during Notes/Domino version upgrades.

However, as Per Lausten discovered, AdoptOpenJDK's distribution does not reference this file, and so its policy changes won't take effect. The upshot of this is that there are three main options to loosen the policy:

  • As Per mentions (via Daniele Vistalli), you can create a file named ".java.policy" in the home directory of the user running Domino and it will be honored.
  • You can go back to editing the "java.policy" file, and re-editing it with each new release.
  • You can modify "java.security" to reference "java.pol" again. This is kind of a wash, though, since you'll need to re-edit "java.security" every update anyway.

Different Implementation Jars

This last one is much more limited in scope, and may actually be limited in effect to just the NSF ODP Tooling project. In that project, in order to create a Domino-compatible runtime environment for local compilation, I included a couple expected Jars from the Notes/Domino installation in the runtime's classpath. One of these was "ibmpkcs.jar", which covers both some security stuff and the aforementioned BASE64 classes.

The fix in my case was to just make the resolution of that Jar optional, which should work for the normal case, but it'll be something to keep an eye on in the future.

Small Aside: Writing Agents With Java 5+ Features

Nov 26, 2019, 10:46 AM

Tags: designer java

The topic of using Java 5+ language features in agents came up recently in the OpenNTF Slack room (use this invitation link if you haven't already joined!), and I think that it's one of those topics that's worth making a post about for posterity's sake.

For historical and compatibility reasons, the default compiler language level for newly-created agents, at least in the absence of the JavaCompilerTarget notes.ini setting, is Java 1.3:

Java Agent Compiler Properties

This is laughably out-of-date, and doesn't even support 15-year-old features like generics. Accordingly, if you take a piece of code from my last post, you'll end up with a couple errors:

Java agent compilation problems

It recognizes the Java 7+ classes, since it's still backed by a Java 8 JDK, but it complains about try-with-resources and the use of varargs.

Setting the Compiler Level

The fix for this is to change the compiler compliance level for your agent project. Depending on the type of problem, you may be given an Eclipse quick fix option if you hover over the red-underlined text:

Eclipse quick fix

If you don't have that option (for example, there's no such option for the Files.createTempFile vararg problem), you can alternatively go to the "Package Explorer" view, find the temporary project for the agent (named something like "foo.nsf.Some Agent.ja"), and go to Properties. In there, you can go to the "Java Compiler" settings, make sure the "Compiler compliance level" is the level you want, and then check "Use default compliance settings":

Project compiler at 1.8

When you do this and next save the agent, you'll be greeted with this Designer-specific message box:

Java compiler backward compatibility warning

This is good to see, since it shows that Designer picked up the change and will write it into the agent note in items named $JavaCompilerSource and $JavaCompilerTarget. That's the part that really counts, since it's what Designer uses to re-construct the Eclipse project in the future.

Note, though, that the title of the message box is a reminder about an important aspect: if you set the target compiler level to 1.7 or above, then your agent will not run on Domino servers before 9.0.1 FP8. So, if you're using one of those for some reason, take care about the language features you're using. The other breakpoints, if you're working with some really-legacy servers, are Java 1.4 being added in 7, 1.5 added in 8, and 1.6 added in 8.5.

Dealing With Errors

Depending on your Notes version (I suspect this problem cropped up in 9.0.1FP10 and seems to be gone in the V11 beta), changing your compiler level may result in "forbidden reference" errors for basic things like the lotus.domino classes.

If you hit this, the quickest fix is to modify your settings in Designer's preferences, in the "Java" → "Compiler" → "Errors/Warnings" section:

Setting to ignore forbidden references

If you set "Forbidden reference (access rules)" in the "Deprecated and restricted API" section to "Ignore", the problems should go away. I'm not actually sure why this problem crops up (perhaps something to do with the structure of Notes.jar), but it was fairly consistent for me for a while.

With all that set, though, you should be free to use up to Java 8 features in agents (and web services, probably) to your heart's content.

Writing a Java NIO Filesystem, Part 1

Nov 25, 2019, 10:09 AM

Tags: java java-nio
  1. Writing a Java NIO Filesystem, Part 1

My recent project, the NSF SFTP File Store consists of two main parts: the SFTP server itself (powered by Apache Mina) and a Java NIO filesystem implementation. I casually mentioned the latter in my introductory post, but I think it's an interesting topic that warrants a post or two of its own.

Introduction to NIO

The term "Java NIO" refers to the non-blocking IO package added in two parts across Java 6 and 7 and can be considered a refresh of previous capabilities in a similar way to the Collections API in Java 2 replaced the handful of original collection classes.

The initial (and larger) part, added in Java 1.4, brought better capabilities for dealing with all sorts of "byte stuff": buffers of arbitrary types, smoother character-set handling, more-flexible streams, and so forth. I dealt with these constructs a while ago with the "nsfdata" package in ODA. In that case, it proved very useful for dealing with in-memory representations of Notes Composite Data, which is best dealt with as a navigable array of differently-sized structures.

The java.nio.file package was added in Java 7 and is a refresh of the older java.io.File/java.io.FileOutputStream/etc. system. It interacts well with the previous NIO stuff, but can actually be thought of as a distinct thing, despite its shared naming. This package changes around the mechanics of the original filesystem API, and I remember finding it kind of grating when I first encountered it. It's all in service of being more flexible, though, and it's one of those things where repeated exposure ended up making the older API feel weird and wrong.

Using the NIO Filesystem API

Before I get to the actual implementation of a backing filesystem, I think it'll make sense to show some examples of using the NIO filesystem API, especially since these can (and should) be used in any Java-based application today.

When the new classes were added, the older File class was augmented with a method to convert it to a java.nio.file.Path and vice-versa:

File foo = new File("/some/path/to/file");
Path fooPath = foo.toPath();
File fooFile = fooPath.toFile(); // for older-API interoperability

The Path interface is a very-lightweight and implementation-neutral representation of a path. Unlike the File class, it doesn't have methods for actually interacting with the filesystem. In fact, pretty much all you can do with it specifically is get other Paths based on it:

Path foo = Paths.get("/some/path/to/file");
Path parent = foo.getParent();   // "/some/path/to"
Path child = foo.resolve("bar"); // "/some/path/to/bar"

The primary way you use Path objects is via the Files utility class, which provides not only the methods you may be familiar with from File but also some additional ones like getting input and output streams:

Path foo = Files.createTempFile("test", ".tmp");
try(OutputStream os = Files.newOutputStream(foo, StandardOpenOption.TRUNCATE_EXISTING)) {
  os.write("hello".getBytes());
}
try(BufferedReader r = Files.newBufferedReader(foo)) {
  System.out.println("file contains " + r.readLine());
}

Here, you can see a couple things going on:

  • Files.newOutputStream replaces FileOutputStream, and so the object you assign it to should be declared as just OutputStream.
  • Many of the Files methods take zero or more open/move/etc. options, and they can be a little odd at first. The method parameters are declared as e.g. OpenOption, but the actual options are available in the StandardOpenOption enum. This is to allow for arbitrary extensions for custom filesystems. For example, you might write a custom option to force creating a backup version of the file before writing (there's a tiny sketch of what such an option could look like just after this list).
  • I'm using the try-with-resources syntax from Java 7 here. That isn't actually related to NIO except in that it came in the same release, but it's great and you should use it.
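
Since OpenOption itself is just an empty marker interface, a custom option can be as small as an enum; it would then be up to a custom filesystem provider to look for it and act on it. A hypothetical example:

import java.nio.file.OpenOption;

/**
 * A made-up custom open option. On its own this does nothing: a custom
 * FileSystemProvider would have to check for it in its newOutputStream/newByteChannel
 * implementations and create the backup copy before writing.
 */
public enum BackupOpenOption implements OpenOption {
	CREATE_BACKUP
}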

The Files class also contains methods for listing files and walking file trees, which operate with Streams (as of Java 8) and callbacks. For example, this bit from the NSF ODP Tooling walks a file tree using the callback method and stores it in a ZIP file:

try(OutputStream fos = Files.newOutputStream(result)) {
  try(ZipOutputStream zos = new ZipOutputStream(fos, StandardCharsets.UTF_8)) {
    zos.setLevel(Deflater.BEST_COMPRESSION);
    Files.walkFileTree(path, new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
        if(attrs.isRegularFile()) {
          Path relativePath = path.relativize(file);
          String unixPath = StreamSupport.stream(relativePath.spliterator(), false).map(String::valueOf).collect(Collectors.joining("/")); //$NON-NLS-1$
          ZipEntry entry = new ZipEntry(unixPath);
          zos.putNextEntry(entry);
          Files.copy(file, zos);
        }
        return FileVisitResult.CONTINUE;
      }
    });
  }
}

Next Up: Implementation

In my next post, I'll get into some specifics of implementing a filesystem using this framework as well as some of the implications of how flexible and straightforward it is.

Options for the Future of the Domino Open Liberty Runtime

Nov 18, 2019, 10:57 AM

Tags: java liberty
  1. Options for the Future of the Domino Open Liberty Runtime
  2. Next Steps With the Open Liberty Runtime
  3. Rapid Progress in Open-Liberty-Runtime Land

I've been thinking lately about what I can do with the Open Liberty Runtime project. If you're not familiar with it, the gist of it is that it gloms the Open Liberty Java/Jakarta EE app server onto Domino to allow deploying up-to-date WAR-based apps "to Domino", sharing the server's access to NSFs.

In its current form, it's serving me reasonably well: this blog is running on it, for example, and writing the SFTP server to target it has been a delight. I've been tracking a set of issues about it, mostly relating to improving the administration/deployment side as it currently exists, in ways that would improve my current use. There are some more-out-there ideas like running apps from NSFs, but for the most part it's been focused on "a JEE server attached to Domino".

Slightly-Tweaked Model

But what I've been thinking about lately is tweaking the model a bit to be less like "a new HTTP runtime" and more like a series of related servers, which would then be proxied to. This is in line with what Liberty has been trending towards anyway. Though it started out as essentially a cleaned-up WebSphere monolith, the fact that it's been targeted at cloud-service use has meant that it's been aggressively tuned to allow you to include only the specific features you need, with the idea that there will be many instances of Liberty running, one per app. This is also seen in their relentless push for faster startup times, something that only matters if you plan to start up instances of the server very frequently.

My Runtime project technically supports this model currently. Though it uses a single Liberty runtime download, the main Servers list is geared towards creating multiple running servers with their own app sets, which end up being distinct processes:

Open Liberty Admin Servers view

So one could hypothetically put dozens of servers in here and use it as I've been describing. Other than providing that capability, though, the tooling doesn't really help you much: in particular, it would do nothing to alleviate the problems of conflicting port mappings or unifying all of these apps into a single front. I did a little work making a reverse-proxy webapp, but it's really meant for the case of having one Liberty server with all of your apps, and wouldn't have any meaning in a many-server setup.

One option I've been considering here is giving the server side of this some knowledge of and control over the HTTP ports used by the individual Liberty servers, so that it could assign random ones automatically. Then, I'd also be able to add on a dynamically-configured reverse proxy that would know about these different running servers and could route requests automatically.

Other Runtimes

On top of that, there's nothing about the project that's really inherently tied to Open Liberty as such. It currently assumes that that's the runtime, but the only way it interacts with it is by pouring files out to the filesystem and then running some known shell scripts. There's nothing stopping such a setup from also gaining a bit of knowledge about, say, Node, Swift, or any number of other runtimes. Things would get progressively weirder the further you get from Java when you want to access the Domino runtime, but hey, that's what the C API is for!

Reinventing the Wheel

While mulling this over, though, I did realize one thing: I'm essentially describing a jankier version of Kubernetes. For the most part, the problem of running disparate apps with their own runtimes and needs managed by a central server is solved by that and Docker. This project would be a bit different in that the apps would automatically inherit a Domino runtime and would also (usefully) maintain clustered/replicated configuration by way of the Domino server backbone, so it's not exactly the same thing. And, while HCL is pushing for a Dockerized future, the pieces aren't there yet and won't fully be for a while: while Domino-on-Docker is a thing technically, "many Notes-runtime apps with many server IDs" for NRPC use is a licensing minefield, and the gRPC bindings are currently painfully limited in both capability and language support.

So I likely would be best off just letting time (which is to say, HCL) solve the fancier problems and just focusing on the Runtime's original job of being a better Java app server for Domino. I do think it would be useful to better support the one-server-per-app approach, especially for a hypothetical case of wanting to deploy an XPages app in one Liberty server but then a JSP or JSF app in another, and that'll take a better reverse proxy.

I think it's good to mull over, though. This already provides a good path to better app dev with Domino, and smoothing that out more will make my life all the better and will hopefully be useful to others.

Quick-Tip Thursday: Avoid Future Base64 Trouble In Java

Nov 14, 2019, 9:35 AM

Tags: java sntt

TL;DR: Don't use com.ibm.misc.BASE64Encoder or com.ibm.misc.BASE64Decoder.

If you've been doing Java development for a while, you've likely run into a situation where you need to Base64-encode or -decode something. It's used commonly in MIME documents, data URIs, and various other areas typically related to data transfer.

Unfortunately, using Base64 in Java was awkward for a long time, with no natural choice in the core JDK until Java 8. The trouble over the years, though, is that there are choices in previous releases, and it became somewhat common in the community to use sun.misc.BASE64Encoder for this need. The problem with that is that sun.* packages are officially not part of the JDK and can't be relied upon to exist in every Java implementation. Unfortunately, a combination of a lack of enforcement of this in the tooling and the fact that those classes are in practice pretty universal has let them creep into Java code.

On the Domino side, we have a second problem: com.ibm.misc.BASE64Encoder.

Sidebar: The Different Flavors of Java

For the most part, setting aside Android, it's safe to think of "Java" as a monolithic entity, where having a JVM of a given version number on one system will be equivalent to the same on another. There are subtle differences, though, and several vendors have long maintained their own variants of Sun/Oracle's JVM (which is named HotSpot).

IBM has maintained one such variant, called J9, and infused all of their Java-using products, Domino included, with it. J9 has since been open-sourced and donated to Eclipse and, renamed to OpenJ9, has carved out a niche for itself by being particularly lean and speedy to spin up.

The Snag We Hit With J9

For the most part, the differences in JVM flavors don't matter to developers, but IBM played a bit of a trick on us by making duplicates of some sun.* classes under the com.ibm.* package. Unlike sun.*, which at least has a little community knowledge of being internal-only (and also just looks odd compared to standard classes), com.ibm.* has no such connotation, and there's nothing in Designer or the classes themselves to indicate that com.ibm.misc.BASE64Encoder (internal and non-portable) is any different from, say, com.ibm.commons.util.io.base64.Base64 (portable via IBM Commons).

So, over the years, use of that class has crept into XPages and Java agent code - it does the job and it's always been safe to assume that Domino-the-IBM-product would use J9-the-IBM-JVM. Domino isn't an IBM product anymore, however, and, to add to potential future trouble, even OpenJ9 builds from AdoptOpenJDK don't come with those com.ibm.misc.* classes.

The upshot is that, if your Java code were to run on a JVM other than a fully-IBM-style one (whether intentionally or otherwise), you're liable to run into trouble with code that casually used those JVM-internal classes.

The Options

If you're running a version of Domino using Java 8 (9.0.1FP8+), which you'd darn well better be at this point, you're in luck: the built-in java.util.Base64 class has a nice API and is guaranteed to be present on every JVM. This is your best bet, since it doesn't require any other dependencies beyond the core JDK and it has some nice features like variant encoders for MIME- and URL-safe use.
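
For example, basic use looks like this:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

String encoded = Base64.getEncoder().encodeToString("hello".getBytes(StandardCharsets.UTF_8));
byte[] decoded = Base64.getDecoder().decode(encoded);
System.out.println(encoded);                                     // aGVsbG8=
System.out.println(new String(decoded, StandardCharsets.UTF_8)); // hello

// Variant encoders for other situations:
Base64.getUrlEncoder();  // URL- and filename-safe alphabet
Base64.getMimeEncoder(); // MIME-style line wrapping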

If you're targeting a version of Domino that still uses Java 6 (don't), you do have one other safe-enough option that's available in both XPages and Java agents: javax.xml.bind.DatatypeConverter. This class is technically part of the JAXB specification but is included in JVMs from version 6 through version 10. This is less of a clean choice than the java.util.Base64 class both because it's not as nice of an API and because use on Java 11+ will require bringing in an external dependency. Still, if you're stuck on an old release, it'll at least get you through any potential rough patches in the near future.

Finally, in XPages, you have the aforementioned com.ibm.commons.util.io.base64.Base64 class, which will continue to be present due to being included in a base library of the XPages stack, but which is rendered obsolete by java.util.Base64.

So I recommend taking some time to look through any running Java code you have for this potential hiccup and making the switch. It's admittedly a little awkward to do, but it's still a good idea.

Developing Open Liberty Features, Part 2

Aug 18, 2019, 9:14 AM

  1. Developing an Open/WebSphere Liberty UserRegistry with Tycho
  2. Developing Open Liberty Features, Part 2

In my earlier post, I went over the tack I took when developing a couple extension features for Open Liberty, and specifically the way I came at it with Tycho.

Shortly after I posted that, Alasdair Nottingham, the project lead for Open Liberty, dropped me a line to mention how programmatic service registration isn't preferred, and instead the idiomatic way is to use Declarative Services. I had encountered DS while fumbling my way through to getting these things working, but I had run into some bit of trouble or another, and I ended up settling on what I got working and not revisiting it.

Concepts

This was a perfect opportunity to go back and do things the right way, though, so I set out to do that this morning. In my initial reading up, I ran across a blog post from the always-helpful Vogella Blog that talks about coming at OSGi DS from essentially the same perspective I have: namely, having been used to Equinox and the Eclipse plugin/extension mechanism. As it turns out, when it comes to generic OSGi, Equinox can kind of poison your brain. The whole term "plug-in" instead of "bundle" comes from earlier Eclipse; "features", "update sites", and all of p2 are entirely Equinox-specific; and the "plugin.xml" extension mechanism is of a similar vintage. However, unlike some other vestiges that were tossed aside, "plugin.xml" is still in active use.

At its core, the Declarative Services system is generally similar to that route, in that you write classes to implement a given interface and then declare that your bundle provides that using XML files. The specifics are different - DS is more type-safe and it uses individual XML files in the "OSGI-INF" directory for each service - but the concept is similar. DS also has an annotation-based mechanism for this, which allows you to annotate your service classes directly and not worry about maintaining XML files. It's something of a compiler trick: the XML files still exist, but your tooling of choice (PDE, bnd, etc.) will generate the files based on your annotations. It took a bit for Eclipse to get on board with this, but, as of Neon, you can enable this processing in the preferences.

Implementation

Fortunately for me, my needs are simple enough that making the change was pretty straightforward. The first step was to delete the Activator class outright, as I won't need it for this. The second was to add an optional import for the org.osgi.service.component.annotations package in my Liberty extension bundle. I suspect that this is a bit of a PDE-ism: the annotations aren't even retained at runtime (and the package isn't present in the Liberty server), but this is the only mechanism Eclipse has to add a dependency for a plug-in project.

The annotation for the user registry was as straightforward as can be, needing a single line in this heavily-clipped version of the class:

@Component(service=UserRegistry.class, configurationPid="dominoUserRegistry")
public class DominoUserRegistry implements UserRegistry {
}

With that, Eclipse started generating the associated XML file for me, and the registry showed up at runtime just as it had before.

The TrustAssociationInterceptor was slightly more complicated because it had some extra initialization properties set, in particular the one to mark it as executing before normal SSO. This was a little tricky in two ways: Java annotations don't have any mechanism for specifying a literal Map for properties, and the before-SSO property is a boolean, but I could only write a string. It turned out that the property, uh, property on the annotation has a little mini-DSL where you can mark a property with its type. The result was:

@Component(
	service=TrustAssociationInterceptor.class,
	configurationPid=DominoTAI.CONFIG_PID,
	property={
		"invokeBeforeSSO:Boolean=true",
		"id=org.openntf.openliberty.wlp.userregistry.DominoTAI"
	}
)
public class DominoTAI implements TrustAssociationInterceptor {
}

Further Features

This is proving to be a pretty fun side project within a side project, and I think I'll take a crack at developing some more features when I have a chance. In particular, I'd like to try developing some API-contribution features so that they can be deployed to the server once and then used by web apps without having to package them (similar in concept to XPages Libraries). This is how Liberty implements its Jakarta EE specifications, and I could see making some extra ones. That's also exactly what CrossWorlds does, and so I imagine I'll crib a bunch of that work.

Developing an Open/WebSphere Liberty UserRegistry with Tycho

Aug 16, 2019, 3:08 PM

  1. Developing an Open/WebSphere Liberty UserRegistry with Tycho
  2. Developing Open Liberty Features, Part 2

In my last post, I put something of a stick in the ground and announced a multi-blog-post project to discuss the process of making an XPages app portable for the future. In true season-cliffhanger fashion, though, I'm not going to start that immediately, but instead have a one-off entry about something almost entirely unrelated.

Specifically, I'm going to talk about developing a custom UserRegistry and TrustAssociationInterceptor for Open Liberty/WebSphere Liberty. IBM provides documentation for this process, and it's alright enough, but I had to learn some specific things coming at it from a Domino perspective.

What These Services Are

Before I get in to the specifics, it's worth discussing what specifically these services are, especially TrustAssociationInterceptor with its ominous-sounding name.

A UserRegistry class is a mechanism to provide a Liberty server with authentication and user info services. Liberty has a couple of these built-in, and the prototypical ones are the basic and LDAP registries. Essentially, these do the job of the Directory and Directory Assistance on Domino.

A TrustAssociationInterceptor class is related. What it does is take an incoming HTTP request and look for any credentials it understands. If present, it tells Liberty that the request can be considered authenticated for a given user name. The classic mechanisms for this are HTTP Basic and form-cookie authentication, but this can also cover mechanisms like OAuth. In Domino, this maps to the built-in authentication mechanisms and, more particularly, to DSAPI filters.

How I Used Them

My desire to implement these developed when I was working on the Domino Open Liberty Runtime. I wanted to allow Liberty to use the containing Domino server as a user registry without having to enable LDAP and, as a stretch goal, I wanted to have some sort of implicit SSO without having to configure LTPA.

So I ended up devising something of an ad-hoc directory API exposed as a servlet on Domino, which Liberty could use to make the needed queries. To pair with that, I wrote a TrustAssociationInterceptor implementation that looks for Domino auth cookies in incoming requests, makes a call to a small servlet with that cookie, and grabs the associated username. That provides only one-way SSO, but that's good enough for now.

The Easy Part

The good part was that my assumption that my comfort with Tycho going in would help was generally correct. Since the final output I wanted was a bundle, I was able to just add it to my project structure like any other, and work with it in Eclipse's PDE normally. Tycho and PDE didn't necessarily help much - I still had to track down the Liberty API plugins and make a local update site out of them, but that was old hat by this point.

What Made Development Weird

I went into the project in high spirits: the interfaces required weren't bad, and Liberty uses OSGi internally. I figured that, with my years of OSGi experience, this would be a piece of cake.

And, admittedly, it kind of was. The core concepts are the same: building with Tycho, bundle activators, MANIFEST.MF, and all that. However, Liberty's use of OSGi is, I believe, much more modern than Domino's, and certainly much less focused on Equinox specifically.

For one, though Liberty is indeed OSGi-based, it doesn't use Maven Tycho for its build process. Instead, it uses Gradle and the often-friendlier bnd tooling to handle its OSGi composition. That's not too huge of a difference, and the build process doesn't really affect the final built feature. The full differences are a whole big topic on their own, but the way they shake out for this purpose is essentially a difference in philosophy, and the different build mechanism was something of a herald of the downstream distinctions.

One big way this shows is in service registration. Coming from an Eclipse heritage, Equinox-based apps tend to use "plugin.xml" to register services, while Liberty (and most others, I assume) favors programmatic registration of services inside the bundle activator. While this does indeed work on Equinox (including on Domino), this was the first time I'd encountered it, and it took some getting used to.

The other oddity was how you encapsulate your bundle as a feature in Liberty parlance. Liberty uses the term "feature" to refer to individual components that make up the server, and which you can configure in the "server.xml" file. These are declared using files similar to MANIFEST.MF with specialized headers to declare the name of the feature, the bundles that make it up, and any APIs it provides to the server and apps. In my case, I wrote a generic mechanism to deploy these features when a server is established, which writes the manifest files to the server's feature directory. Once they're deployed, they become available to the server as a feature with the "usr" prefix, like "usr:dominoUserRegistry-1.0" for my case.

In The Future

I have some ideas for additional features I'd like to develop - providing implicit APIs for Darwino and Jakarta NoSQL/JNoSQL would be handy, for example. This way went pretty smoothly, but I'll probably develop non-Domino ones using either Gradle or Maven with the maven-bundle-plugin. Either way, it ended up fairly pleasant once I discarded my old assumptions, and it's another good entry in the "pros" column for Liberty.

Java Grab Bag 2

May 3, 2019, 3:38 PM

Tags: java
  1. Java Hiccups
  2. Bitwise Operators
  3. Java Grab Bag 2
  4. Java Travelogue: The Care and Feeding of Locales
  5. More Notes on Filesystem and Charset Portability

Following in the vein of "Java Hiccups", I've had a couple things floating around my head lately that I think collectively make for a good post for Java developers, particularly those working in the Domino arena.

Without further ado:

Map#computeIfAbsent

This is a method that was added in Java 8 and, while it's not as big a deal as the addition of streams, it's one of my favorite additions and something I use very frequently. To give a point of reference, consider this common idiom from an imagined XPages app:

Map<String, Object> applicationScope = ExtLibUtil.getApplicationScope();
if(!applicationScope.containsKey("someVal")) {
  applicationScope.put("someVal", someExpensiveOperation());
}
String someVal = (String)applicationScope.get("someVal");

Essentially, using a Map as a cache for a complex computed value. Java 8 added the #computeIfAbsent method (alongside several similar ones) to do this in one go:

String someVal = (String)ExtLibUtil.getApplicationScope().computeIfAbsent("someVal", key -> someExpensiveOperation());

The second parameter is (usually) a lambda, like the ones used in streams, that takes the provided key as an argument and is only executed if the value does not already exist. Due to the way this was added, most implementations do pretty much the same thing as the first block of code, but you don't have to care about that. Your code gets a bit smaller, the intent is much clearer, and it's less prone to small bugs like changing the key and forgetting to change it in all three places.

Arrays Are Weird

Java's built-in array type is loosely based on C's, and that's reflected in the syntax:

int[] foo = new int[4];
foo[0] = 1;
foo[1] = 2;
foo[2] = 3;
foo[3] = 4;

Like C, they are zero-based, declared with the capacity and not the max index, and cannot be resized. Unlike C, arrays aren't just syntactical sugar on top of pointers, and this manifests immediately in bounds checking. Take this line:

foo[4] = 10;

In C, this will (famously) just write an integer 10 value into whatever memory happens to be just beyond the bounds of your array. In Java, you'll get an ArrayIndexOutOfBoundsException, saving you from the insidious bug. But, since Java arrays are (probably) implemented internally very similarly to C's - they're likely contiguous blocks of memory sized to the type - they're still extremely efficient, and so they show up in a lot of speed-critical code.

As speedy and safe as they are, though, they're still pretty unfriendly. For starters, they can't be resized. When you do new int[4] (or use literal syntax like new int[] { 1, 2, 3, 4 }), you carve out that much memory and can't shrink or expand it in-place. You can change the values inside an array, just not its size. To "resize" efficiently, you have to make a new array and then use System.arraycopy to populate the new array with the contents of the old.
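
For example, growing a four-element array to eight looks something like this:

import java.util.Arrays;

int[] foo = new int[] { 1, 2, 3, 4 };

// The manual route: allocate a new array and copy the old contents into it
int[] bigger = new int[8];
System.arraycopy(foo, 0, bigger, 0, foo.length);

// Or let the utility class do the same work for you
int[] alsoBigger = Arrays.copyOf(foo, 8);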

This is all why the List interface (with its predecessor class Vector) exists: they serve the same function of "ordered collection of stuff", but allow for dynamic resizing. Because these objects are usually "efficient enough" (ArrayList uses "true" arrays under the covers) while having numerous additional benefits, you should use them as your first go-to and only use arrays if you have a reason.

That's in part because the weirdness of arrays doesn't end with their inconvenience. Array types are actually implicitly-created classes, even when they contain primitive types. So:

int foo = 3; // primitive value
foo = null; // syntax error!
int[] bar = new int[] { 3 }; // Object
bar = null; // legal!

List<int> fooList = new ArrayList<>();   // syntax error - primitives can't be in generics
List<int[]> barList = new ArrayList<>(); // legal!

Object.class == Object[].class; // false
int[].class.isArray();          // not only legal, but true

Java provides two main utility classes for working with arrays: java.util.Arrays (extremely useful for Arrays.asList) and java.lang.reflect.Array (usually only useful in edge cases).

Java Has No Library Versioning System

If you've worked with Java in Designer or Eclipse, you've likely run across this preferences pane or its per-project version:

Eclipse Java compiler settings

These settings affect two things:

  • The syntax allowed in your source (e.g. new ArrayList<>() requires 1.7 or higher)
  • The class file format version (you can think of this like an NSF ODS). You've likely seen the latter in play by receiving an UnsupportedClassVersionError trying to run Java 7 or 8 code on a pre-9.0.1FP8 Domino server.

Conspicuously absent from this short list is anything to do with classes or methods added to the runtime in newer versions. For example, the String class gained a static method String.join to conveniently concatenate strings with a given delimiter. If you're targeting an older Java version but using a newer Java library (as Designer 9.0.1FP10 does with a default target of 1.5 and JVM of 1.8), you can write a line of code using that method without issue - the syntax doesn't require anything above 1.5, so all is clear as far as the compiler is concerned. But if you then try to run that code on an older JVM (such as an older Domino server), you'll get an exception at runtime, since the method doesn't exist.
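
As a concrete illustration of that trap:

// Compiles without complaint at a 1.5 compliance level when the build JRE is Java 8,
// but throws NoSuchMethodError on a Java 7 or earlier runtime, where String.join
// doesn't exist
String names = String.join(", ", "Alice", "Bob", "Carol");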

Unfortunately, the only true answer to this is to tell your IDE about a JRE for each specific Java version you're targeting, something that doesn't happen by default, and which will gradually get more difficult as Java 6 becomes harder and harder to come across.

This is one of the things that OSGi aims to fix - you could, for example, have many versions of Guava installed, and you could declare that your plugin works specifically with version 18. Then, when loading, the runtime will either bind to a matching version or give you an error that no version could be resolved. No mystery involved. Unfortunately, OSGi is a niche thing losing ground, and the module system introduced in Java 9 consciously does not address this.

In a pinch, you can use the file tool on most Unix systems to check the version of an individual class file:

$ file Foo.class
Foo.class: compiled Java class data, version 52.0 (Java 1.8)

Licensing

A little while ago, Oracle raised a bit of a stink by declaring that, as of this year, commercial use of their Java runtimes would require paid licensing. Historically, you could get support for Java for money, and certain additional components had their own licensing requirements, but it was pretty normal otherwise to install Java from java.sun.com and not give it a second thought.

This naturally caused a few questions when it comes to Domino and other Java-incorporating IBM products, and IBM released a statement that basically amounted to "you don't have to worry about it". IBM has maintained their own variant of the JVM (called J9 for Smalltalk-related reasons, not to be confused with Java 9) and anyway has always had arrangements with Sun/Oracle such that IBM's customers don't have to worry about dealing with Oracle directly.

But what about using Java outside of a licensed product, such as if you just want to run Tomcat on some server? The short answer there is that you're still fine, but you just have to know a little about the difference between "a JDK" and "Oracle's JDK". Java and the surrounding JDK have been progressively open-sourced in fits and spurts over the years, and are now at a point where the project called OpenJDK is basically the real Java environment, and then Oracle's JDK is just one implementation of it. It's similar to Linux: the core parts are open-source, and many distributions are entirely free, but there also exist commercial variants for pay.

So, if you want to run a Java stack, you can do so without putting forth a single cent by using an OpenJDK build. Oracle, IBM, and others will still be happy to take your money if you want a commercially-supported Java environment, of course.

For a longer explanation, this blog post from @javachampions is pretty much the definitive word.

 

Java With Domino After XPages

Mar 14, 2019, 3:10 PM

IBM and HCL held a webcast today to detail some plans for Notes/Domino V11. There were some interesting tidbits elaborating on things like the pub/sub support, and it'll be worth tracking down a recording of the event when it's available.

What's important for this series, though, is that this event served as the long-promised "roadmap" announcement for XPages. The roadmap is, in effect, option three: HCL plans to look into ways to reuse some existing XPages code, but in general you should be aiming to write your UIs in something else, either consuming REST services from an XPages container or accessing Domino data via another route (like the domino-db Node.js module and hypothetical Java gRPC client).

So we know the end of the path: not XPages. However, it's not like we're all just going to throw away our existing apps, so there's work to do determining how we're going to get there. The options remain pretty much what they were after CollabSphere last year, albeit now with the doubt removed. The first two options - returning to LotusScript or going to Node - have their advantages and disadvantages, and you could make a reasonable case for either. Personally, I'm not interested in going down those roads, though, and I think it's better for any app of reasonable complexity to dive into Java. Other members of the community and I have developed tools over the years to make it easier, and now's the time to take some of these steps if you haven't already.

Do Not Use Server JavaScript

Server JavaScript was always something of a trap for app architecture. There's nothing inherently wrong with having a scripting language on your UI pages, and it certainly helped bridge some gaps, but the way it and Designer intertwined encouraged developers to create non-portable messes. If you're still writing SSJS, stop immediately.

Learn Proper Java

Java has been around for a long time, and the way to write "good" Java code has changed over time and varies greatly by your environment. Some aspects, though, apply generally, and it's useful to stay up-to-date on current practices. I don't know a better resource for this than Effective Java, which has been updated for Java 7-9 since I last read it.

Speaking of which, you should learn about Java 8 streams and lambdas - they're great. Julian Robichaux did a presentation on this topic back at Connect 2017, and the slide deck is very elucidating.

Adopt Standard Java Technologies

Last year, I created a project to bring some modern JEE technologies to XPages. These are some of the same technologies I've been talking about in my "XPages to Java EE" series and, while that project can't bring the full JEE development experience to XPages, using those tools will help you write code that, in some cases, could be directly dropped into a Java EE app with no modifications at all. There's a big asterisk when it comes to actually accessing Domino data, but that's a solvable problem as well (with some more development).

In particular, you should start writing JAX-RS services. Not only is JAX-RS an excellent and very-capable spec, but REST services are portable to absolutely any front end.

Adopt Automated Builds

Maven has been something of a bugaboo for XPages developers for a while, but doesn't have to be. Node development (server- or client-side) revolves around npm and various build plugins, and Maven is much the same thing. One of the biggest improvements I've made lately to all of my active XPages apps is to wrap the on-disk project for them inside a Maven artifact, using the NSF ODP Tooling. That project allows you to automatically build your NSFs alongside other parts of the project (such as OSGi plugins) without having Designer involved.

Check the example project in that repo, and stay tuned for a 2.0 release (probably) imminently.

Learn Other Toolkits

If you're just starting the process of figuring out what to do after XPages, it doesn't particularly matter which other toolkit you learn, as long as it's reasonably modern. If you take some time to learn how to make, say, a React app but end up going with something else down the line, the lessons you learn will apply very closely. A particularly-comfortable option could be to learn JSF, which has a common ancestry with XPages but has up-to-date capabilities.

Whatever it is, though, just learn some other toolkit.

Follow Channels and Accounts for Other Tech

Over the last couple months, I've started following a lot of Jakarta-related blogs and Twitter luminaries. This applies elsewhere - even if you're not using other toolkits yet, it's very helpful to start immersing yourself in the news and culture.

Don't Stay Still

The primary thing to take to heart is the importance of doing something. Unless you're planning to change careers or retire in the short term, you'll have to make one decision or another. XPages is not going to get meaningfully better, and even existing apps will get worse with time as browsers and technology change.

Other environments, though, are already leagues ahead and are constantly improving. Dive in; the water's fine!

Java Hiccups

Nov 7, 2018, 2:01 PM

Tags: java
  1. Java Hiccups
  2. Bitwise Operators
  3. Java Grab Bag 2
  4. Java Travelogue: The Care and Feeding of Locales
  5. More Notes on Filesystem and Charset Portability

To take a break from the doom-and-gloom of my last post, I figured it'd be good to dust off a post idea I've had in my drafts for a while: common hiccups that Java developers - particularly those coming from a Domino background - run into. This is sort of a grab bag of non-obvious concepts that are easy to assume incorrectly about, whether because of the way other languages work or the behavior of the lotus.domino API specifically.

So, roughly in order of complexity:

import Is Just For Cleanliness

In many languages, in particular C/C++/Objective-C, the natural equivalent of Java's import statement has a massive effect, physically grafting files into your source. In Java, though, import is really just for developer convenience. At runtime, there's no difference between having written this:

import java.util.ArrayList;
import java.util.List;

/* snip */
List<String> foo = new ArrayList<>();

...or this:

java.util.List<java.lang.String> foo = new java.util.ArrayList<>();

If you import a class but never use it in a file, it won't have any effect on the runtime behavior of the class. It's just used by the compiler to clarify what you mean when you use a bare class name without its package prefix.

Incidentally, as seen here, classes within the java.lang package (but not subpackages) are auto-imported, so it's as if each Java file has an invisible import java.lang.* at the top.

Compilation Doesn't Bake In Libraries

This is related, and is also an area where Java differs from some other environments. With C et al, you have the option to statically link referenced external libraries - which is to say, grab their contents at build time and put them into your compiled result such that they may as well be part of your program. Java doesn't do this: every time you reference a class or method, it's really just storing the equivalent of the string name of that class, which is then resolved at runtime.

This is why it's very easy to run into a ClassNotFoundException: you can compile code with some classes present on the class path, but then run it in a system where they're not present. The Java runtime doesn't pre-check whether all the required classes are available when it starts running, so you only find out when it hits that line of code.

Different Java-based environments deal with this in different ways. Standard (non-Domino) web apps deal with this by including dependency jars inside the WEB-INF/lib folder during the packaging phase of building. OSGi, XPages's framework of choice, has a whole dependency mechanism where you can specify bundles or packages with version ranges, in the hope of bringing some order to the chaos, with mixed success.

Primitives Are A Thing

Though the term "primitive" means a built-in data type generally, here I'm specifically talking about things like int and double. For historical and performance reasons, Java has a conceptual and practical distinction between the objects that you deal with most of the time and the primitive types used mostly for number storage. Namely: byte, char, short, int, long, float, double, and boolean. Unlike object references, there is no concept of a "null" value with these that would cause a NullPointerException. Referring to a variable with one of these types will always contain some value if the code compiles, even if it's just the 0/false default for an object property when not otherwise initialized.

Each of these types has a corresponding "boxed" object version, generally with the un-abbreviated name capitalized, such as Byte and Integer.

The distinction used to be harsher than it is today, thanks to autoboxing. Autoboxing is a compile-time behavior that will automatically convert between the primitive types and their object holders as necessary, allowing this type of code, which would otherwise be illegal:

Object i = 3;
int j = new Integer(4);

Autoboxing is mildly inefficient, so it's good to know that it exists, but you don't normally need to lose sleep over it.

All Object Variables Are Pointers

In a language like C++, there is a distinction between a variable that "is" an object vs. one that is a pointer to an object somewhere in memory. In Java, however, the former doesn't exist: an object variable is only ever a pointer. This has a couple implications. For one, this code only deals with one object:

SomeClass foo = new SomeClass();
SomeClass bar = foo;
bar.setName("hi");
foo.setName("hello");
bar.getName(); // Will be "hello"

This is also why Java is picky about not referencing object variables until they've been initialized to at least something, so this generates a compile-time error:

Object foo;
foo.getClass();

Unfortunately, unlike some languages, Java has no language-level support for enforcing the distinction between a null object reference and a non-null one, which is why NullPointerExceptions are so prevalent.

Another implication of this leads into its own hiccup common to LotusScript programmers:

Strictly Speaking, All Method Arguments Are "By Value", But...

All method parameters in Java are "by value" in the LotusScript sense, but the fact that all object variables are pointers means that the "value" you're passing to the method for an object parameter is always a reference. Java has no mechanism to pass a reference to a primitive type, nor does it have a mechanism to implicitly duplicate an object when passing it to a method.

Not only is this a bit conceptually confusing at first, but it's also a potential trap for bad programming practices. It's very easy to write a method that performs modifications on objects passed in as parameters, and this is often the right thing to do. However, since the language doesn't have any syntax mechanism for broadcasting this behavior, it's up to you as the programmer to either write the method name in such a way that it's obvious what's going to happen or clearly state it in the documentation if it's something that's going to be used outside the current file.
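
As a quick sketch using the same illustrative SomeClass from above: modifying the object a parameter points to is visible to the caller, while reassigning the parameter variable itself is not:

public void rename(SomeClass obj) {
  obj.setName("renamed");   // visible to the caller: both references point to the same object
  obj = new SomeClass();    // only reassigns this method's copy of the reference
  obj.setName("invisible"); // the caller's object is unaffected from here on
}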

Casting Objects Doesn't Do Anything

By "casting", I'm referring to something like this:

RichTextItem body = (RichTextItem)doc.getFirstItem("SomeItem");

The (RichTextItem) is a cast, and it's yet another area that diverges from some other languages. What casting an object in Java means is that you're going to refer to an object by a different class or interface than the one it's been previously referred to as. It has some runtime implications, but the thing to keep in mind is that it's about choosing the name for something that exists as opposed to changing an object into a different class.

So, for example, the RichTextItem idiom above exists because the Document#getFirstItem method is declared as returning an Item, but, when the item in the document is rich text, it will actually return a RichTextItem object. RichTextItem is a subclass of Item, and so it's legal to refer to such an object as RichTextItem, Item, Base (the common interface for all Notes objects), or Object (the common superclass of all objects). In a situation like this, you have to cast the object because you're going to refer to it as a more-specific type of object than the one the method says it returns.

If you do this and the object is not actually of the type you're trying to cast it to (in this case, if it's a plain text item or MIME, most commonly), you'll end up with a ClassCastException, because the cast is enforced at runtime. But, succeed or fail, the cast will not actually affect the object itself in any way - it will continue on being whatever it was already, regardless of name.

Casting Primitives Does Do Something

For better or for worse, performing a cast on a primitive type does have the possibility of creating a new value. For example:

int foo = Integer.MAX_VALUE;
short bar = (short)foo;

Because int can hold more data than a short, this case creates a new value based on chopping off the highest-value bits of the internal binary representation of foo. (As a side note, because of the fun way computers deal with numbers, foo is 2147483647, while bar is -1.)

Normally, this behavior doesn't matter too much, since, if you have a method that takes, say, an int and you have a long, you can safely cast it down since it'll likely be a tiny value anyway. It's important to know that it can happen, though, and this behavior becomes very important when, for example, dealing with native C libraries that use unsigned values, which do not exist in Java as such.
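
For example, one common idiom for reading an unsigned byte coming from C-side code is to mask it back up into a larger type:

byte fromNative = (byte)0xFF;     // the C side meant this as 255
int unsigned = fromNative & 0xFF; // 255, rather than the -1 a plain cast to int would give you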

Java Has Only A Limited Concept of Immutability

"Immutability" refers to the inability to change the value of an entity once it's created. It's come to the fore as a concept recently because working with immutable objects sidesteps a lot of issues with asynchronous programming. Java, unfortunately, doesn't really have any language-level support for immutable objects in the sense that, for example, Swift does.

Java throws a bit of a curve ball in this area with the final keyword, which means that a variable can't be reassigned after being first initialized. This means that you can't do things like this:

final int foo = 3;
foo = 4; // compiler error

final SomeClass bar = new SomeClass();
bar = new SomeClass(); // compiler error

This, on the other hand, is entirely legal:

final SomeClass foo = new SomeClass();
foo.setName("hi");
foo.setName("hello");

This is because the only thing blocked from changing here is the value of foo-the-reference, but the object it's referencing can be changed at will.

An object can be made effectively immutable, though, by means of making its outward-facing methods not change any of the internal state. This is used commonly for "value" classes, such as the aforementioned Integer. Though the language doesn't do anything to guarantee that the Integer class doesn't allow mutation, the class is written in such a way that it has no inlet for it.
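
A minimal sketch of the same approach (this Point class is purely illustrative) looks like this: final fields, no setters, and "changing" methods that return new instances instead:

public final class Point {
  private final int x;
  private final int y;

  public Point(int x, int y) {
    this.x = x;
    this.y = y;
  }

  public int getX() { return x; }
  public int getY() { return y; }

  public Point withX(int newX) {
    // Returns a new object rather than modifying this one
    return new Point(newX, this.y);
  }
}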

Because of the value of immutable objects, they're used commonly in the core Java classes and in third-party libraries, particularly newer ones. However, since the language can't tell you if an object is immutable, you have to be on the lookout for whether a given method modifies the existing object in-place or returns a new object reflecting the change. This comes up frequently with Strings, which are immutable in Java. This is something I've seen commonly:

String foo = " hello ";
foo.trim();
System.out.println(foo);

That code will print " hello ", with the leading and trailing spaces (though without the quotes). This is because the String#trim method, like all "changing" methods on String, leaves the original value intact but returns a new String object reflecting the expected value.
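
The fix is just to use the returned value:

String foo = " hello ";
foo = foo.trim(); // or assign the result to a new variable
System.out.println(foo); // prints "hello"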

This is just something you have to be on the lookout for, especially since this pattern isn't even consistently applied within the core Java classes. The Date class, for example, is infamously bad in a lot of ways, and one of those ways is that it has mutation methods.

Generics In Java Are Weird

A "generic" refers in this case to a class that is declared as being associated with one or more other types that can be defined after the fact. The prototypical example of this is a collection class, like List<String> foo. In this case, the List interface is generic and lets you specify the type of object you expect to find within it, in this case String.

Generics, unfortunately, were added after Java's initial release, and they bear the marks of it. Unlike languages like C++, Java generics are largely syntactic sugar, meant to replace things like:

String someString = (String)aListIKnowHasStrings.get(0);

...with this:

String someString = aListDeclaredWithStrings.get(0);

However, under the covers, a List only ever really knows it contains Objects, and the second form just transparently shims in a (String) cast at runtime. That's why you can do something like this:

((List<Object>)(List<?>)aListDeclaredWithStrings).add(new NotAStringObject());

That line will not only compile, but it will execute without issue at runtime. It's only later, when you try to extract the value to a String variable, that you'll hit a ClassCastException.

Some generic information is retained at runtime, depending on how it's used, but for the most part it's best to think of it as just a syntax nicety. This behavior is endless trouble, but something we have to live with.

Garbage Collection Is Automatic, But Resource Management Isn't Necessarily

This is one of the main things that bites Domino developers as they learn about Java. One of the early things they learn when switching from LotusScript to Java is that now you have to worry about the .recycle() method on your objects, or else you'll have trouble. This leads to two misapprehensions: that Java in general requires "recycling" for every object, and that recycling with Domino objects is about memory in the same way that a Java OutOfMemoryError is.

Unfortunately, the reason that recycle() exists at all requires delving into some nitty-gritty aspects of the Java environment, but I first want to reinforce that Java uses automatic garbage collection at all times to watch for and delete objects that are no longer used. That "no longer used" bit glosses over some detail, but take this as an example:

public String foo() {
  String a = "hello";
  String b = " there";
  return a + b;
}
public void bar() {
  String message = foo();
  System.out.println(message);
}

There are three objects in action here, but, by the time the code reaches the System.out.println line, a and b are no longer used and will be slated for automatic garbage collection. You as a programmer do not need to worry about them.

The lotus.domino objects, though, are trouble. I think it's best to not think of recycle() in terms of "memory" but instead think of the objects as "open resources", in the same way that you might open a network connection or a stream to a file on the filesystem. Unlike object memory, network resources are not necessarily automatically closed by Java - there are some affordances with syntax and the concept of "finalizers", but, in general, the responsibility for closing a resource lies with the programmer.

There are a few grab-bag notes to do with these objects:

  • Different lotus.domino objects refer to different kinds of backing resources, which is why problems will sometimes manifest as complaints about memory (when they refer primarily to C-side structures in Domino's native memory, separate from Java) and sometimes as complaints about handles (database and document references, generally)
  • Recycling a "parent" object recycles all of its children, but the relationship is not always clear. Importantly, DateTime objects are children of the ancestor Session, even when you retrieve them from a Document, and so they can linger for a long time
  • Agents and XPages both mitigate and conceal the need for recycling by automatically closing the auto-generated Session(s) at the end of the agent execution or page request. In practice, you only really need to worry about recycling if you're, for example, looping over a large view (see the sketch just after this list)
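
As a sketch of that last case (error handling elided), the usual pattern is to recycle each entry as you walk a view:

ViewEntryCollection entries = view.getAllEntries();
ViewEntry entry = entries.getFirstEntry();
while (entry != null) {
  // ... do something with the entry ...
  ViewEntry next = entries.getNextEntry(entry);
  entry.recycle(); // release this entry's backend resources
  entry = next;
}
entries.recycle();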

I may make another one of these posts in the future, and hopefully this goes a little way to clearing up some common misconceptions.

AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10

Oct 19, 2018, 11:48 AM

  1. AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10
  2. Domino 11's Java Switch Fallout
  3. fontconfig, Java, and Domino 11
  4. Notes/Domino 12.0.2 Fallout

Since 9.0.1 FP 10, and including V10 because it's largely identical for this purpose, I've encountered and seen others encountering a couple strange problems with compiling XPages projects. This is a perfect opportunity for me to spin a tale about the undergirding frameworks, but I'll start out with the immediate symptoms and their fixes.

The Symptoms

There are three broad categories of problems I've seen:

  • "AbstractCompiledPage cannot be resolved to a type"
  • Missing third-party XPages libraries, such as ODA, resulting in messages like "The import org.openntf cannot be resolved"
  • Complaints about MANIFEST.MF, like "MANIFEST.MF has no main section" and others

The first two are usually directly related and have the same fix, while the second can also be caused by some other sources, and the last one is entirely distinct.

Fix #1: The Target Platform

The first two are based on problems in the active Target Platform - namely, one or both of the standard platform components going missing. The upshot is that you want your Target Platform preferences to look something like this:

Working Target Platform

There should be a selected platform (the name doesn't matter, but "Running Platform" is the default name) with entries at least for ${eclipse_home} and for a directory inside your Notes data dir, here C:\Notes\Data\workspace\applications\eclipse. If they're missing, modify an existing platform or create a new one and add an "Installation"-type entry for ${eclipse_home} and a "Directory"-type one for the eclipse directory within your data dir.

Fix #2: Broken Plugins, Particularly ODA

Though V10 didn't change much when it comes to XPages, there are a few small differences. One in particular bit ODA: we had a dependency on the com.ibm.domino.commons plugin, which was in the standard Notes environment previously but is not as of V10 (though it's still present on the server). We fixed that one in the V10 release, and so you should update your ODA version if you hit this trouble. I don't think I've seen other plugins with this issue in the V10 transition, but it's a possibility if Fix #1 doesn't do it.

Fix #3: MANIFEST.MF

This one barely qualifies as a "fix", but it worked for me: if you see Designer complaining about MANIFEST.MF, you can usually beat it into submission by cleaning/rebuilding the project in question. The trouble is that Designer is, for some reason, skipping a step of the XPages compilation process, and cleaning usually kicks it into gear.

I've also seen others have success by deleting the error entry in the Errors view (which is actually a thing that you can do) and never seeing it again. I suspect that the real fix here is the same as above: during the next build, Designer creates the file properly and it goes away on its own.

The Causes

So what are the sources of these problems? The root reason is that Designer is a sprawling mountain of code, built on ancient frameworks and maintained by a diminished development team, but the immediate causes have to do with OSGi.

The first type of trouble - the target platform - most likely has to do with a change in the way Eclipse manages target platforms (look at the same prefs screen in 9.0.1 stock and you'll see it's quite different), and I suspect that there's a bug in the code that migrates between the two formats, possibly due to the dramatic age difference in the underlying Eclipse versions.

The second type of trouble - the MANIFEST.MF - is due to a behind-the-scenes switch in how Designer (and maybe the server) handles dependencies in XPages projects.

Target Platforms

The mechanism that OSGi projects - such as XPages applications - use for determining their dependencies at build time is the notion of a "Target Platform". The "target" refers to the notion that this is the platform that is expected to be available at runtime for what you're building - loosely equivalent to a basic Java classpath. An OSGi project is checked against this Target Platform to determine which classes are available based on their bundle names and versions.

This is distinct from the related concept of a "Running Platform". Designer, being based on Eclipse, is itself built on and runs using OSGi. Internally, it uses the same mechanisms that an XPages application does to determine what plugins it knows about and what services those plugins provide.

This distinction has historically been hidden from XPages developers due to the way the default Target Platform is set up, pointing at the same Running Platform it's using. So Designer itself has the core XPages plugins running, and it also exposes them to XPages applications as the Target. Similarly, the way we install XPages Libraries like ODA is to install them outright into the Designer Running Platform. This allows Designer to know about the library service provided, which it uses to populate the list of available plugins in the Xsp Properties editors.

However, as our trouble demonstrates, they're not inherently the same thing. In standalone OSGi development in Eclipse, it's often useful to have a Target Platform distinct from the Running Platform - such as the XPages environment for plugins - to ensure that you only depend on plugins that will be available at runtime. But when the two diverge in Designer, you end up with situations like this, where Designer-the-application knows about the XPages runtime and plugins and constructs an XPages project and translates XSP to Java using them, but then the compilation process with its empty Target Platform has no idea how to actually compile the generated code.

MANIFEST.MF

I've mentioned that an OSGi project "determines its dependencies" out of the Target Platform, but didn't mention the way it does that. The specific mechanism has changed over time (which is the source of our trouble), but the idea is that, in addition to the Java classes and resources, an OSGi bundle (or plugin) has a file that declares the names of the plugins it needs, including potentially a version range. So, for example, a plugin might say "I need org.apache.httpcomponents.httpclient at least version 4.5, but not 5.0 or higher". The compiler uses the Target Platform to find a matching plugin to compile the code, and the runtime environment (Domino in our case) does the same with its Running Platform when loading.

(Side note: you can also specify Java packages to include from any plugin instead of specific plugin names, but Designer does not do that, so it's not important for this purpose.)

(Other side note: this distinction comes, I believe, from Eclipse's switch from its own mechanism to OSGi in its 3.0 release, but I use "OSGi" to cover the general concept here.)

The old way to do this was in a file called "plugin.xml". If you look inside any XPages application in Package Explorer, you'll see this file and the contents will look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin class="plugin.Activator"
  id="Galatea2dVCC_2fIKSG_dev5csyncagent_nsf" name="Domino Designer"
  provider="TODO" version="1.0.0">
  <requires>
    <!--AUTOGEN-START-BUILDER: Automatically generated by null. Do not modify.-->
    <import plugin="org.eclipse.ui"/>
    <import plugin="org.eclipse.core.runtime"/>
    <import optional="true" plugin="com.ibm.commons"/>
    <import optional="true" plugin="com.ibm.commons.xml"/>
    <import optional="true" plugin="com.ibm.commons.vfs"/>
    <import optional="true" plugin="com.ibm.jscript"/>
    <import optional="true" plugin="com.ibm.designer.runtime.directory"/>
    <import optional="true" plugin="com.ibm.designer.runtime"/>
    <import optional="true" plugin="com.ibm.xsp.core"/>
    <import optional="true" plugin="com.ibm.xsp.extsn"/>
    <import optional="true" plugin="com.ibm.xsp.designer"/>
    <import optional="true" plugin="com.ibm.xsp.domino"/>
    <import optional="true" plugin="com.ibm.notes.java.api"/>
    <import optional="true" plugin="com.ibm.xsp.rcp"/>
    <import optional="true" plugin="org.openntf.domino.xsp"/>
    <!--AUTOGEN-END-BUILDER: End of automatically generated section-->
  </requires>
</plugin>

You can see it here declaring a name for the pseudo-plugin that "is" the XPages application (oddly, "Domino Designer"), a couple other metadata bits, and, most importantly, the list of required plugins. This is the list that Designer historically (and maybe still; it's not clear) uses to populate the "Plug-in Dependencies" section in the Package Explorer view. It trawls through the Target Platform, finds a matching version of each of the named plugins (the latest version, since these have no specified ranges), adds it to the list, and recursively does the same for any re-exported dependencies of those plugins. "Re-exported" isn't exposed here as a concept, but it is a distinction in normal OSGi plugins.

Designer derives its starting points here from implicit required libraries in XPages applications (all those "org.eclipse" and "com.ibm" ones above) as well as through the special mechanism of XspLibrary extension contributions from plugins installed in the Running Platform. This is why a plugin like ODA has to be installed in Designer itself: in the runtime, it asks its plugins if they have any XspLibrary classes and uses those to determine the third-party plugin to load. Here, ODA declares that its library needs org.openntf.domino.xsp, so Designer adds that and its re-exported dependencies to the Plug-in Dependencies group.

With its switch to OSGi in the 3.x series circa 2005, most of the functionality of plugin.xml moved to a file called "META-INF/MANIFEST.MF". This starkly-named file is a standard part of Java, and OSGi extends it to include bundle/plugin metadata and dependency declarations. As of 9.0.1 FP10, Designer also generates one of these (or is supposed to) when assembling the XPages project. For the same project, it looks like this:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Domino Designer
Bundle-SymbolicName: Galatea2dVCC_2fIKSG_dev5csyncagent_nsf;singleton:=true
Bundle-Version: 1.0.0
Bundle-Vendor: TODO
Require-Bundle: org.eclipse.ui,
  org.eclipse.core.runtime,
  com.ibm.commons,
  com.ibm.commons.xml,
  com.ibm.commons.vfs,
  com.ibm.jscript,
  com.ibm.designer.runtime.directory,
  com.ibm.designer.runtime,
  com.ibm.xsp.core,
  com.ibm.xsp.extsn,
  com.ibm.xsp.designer,
  com.ibm.xsp.domino,
  com.ibm.notes.java.api,
  com.ibm.xsp.rcp,
  org.openntf.domino.xsp
Eclipse-LazyStart: false

You can see much of the same information (though oddly not the Activator class) here, switched to the new format. This matches what you'll work with in normal OSGi plugins. For Eclipse/Equinox-targeted plugins, like XPages libraries, plugin.xml still exists, but it's reduced to just declaring extension points and contributions, and no longer includes dependency or name information.
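
For comparison with the generated file above, a hand-authored plugin will normally pin its Require-Bundle entries down with version ranges, using syntax along these lines (the bundle name and versions here are just for illustration):

Require-Bundle: org.apache.httpcomponents.httpclient;bundle-version="[4.5.0,5.0.0)"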

Eclipse had moved to full OSGi by the time of Designer's pre-FP10 basis (2008's 3.4 Ganymede), but XPages's history goes back further, so I guess that the old-style Eclipse plugin.xml route is a relic of that. For a good while, Eclipse worked with the older-style plugins without batting an eye. FP10 brought a move to 2016's Eclipse 4.6 Neon, though, and I'm guessing that Eclipse dropped the backwards compatibility somewhere in the intervening eight years, so the XPages build process had to be adapted to generate both the older plugin.xml files for backwards compatibility as well as the newer MANIFEST.MF files.

I can't tell what the cause is, but, sometimes, Designer fails to populate the contents of this file. It might have something to do with the order of the builders in the internal Eclipse project file or some inner exception that manifests as an incomplete build. Regardless, doing a project clean usually jogs Designer into doing its job.

Conclusion

The mix of layering a virtual Eclipse project over an NSF, the intricacies of OSGi, and IBM's general desire to insulate XPages developers from the black magic behind the scenes leads to any number of opportunities for bugs like these to crop up. Honestly, it's impressive that the whole thing holds together as well as it does. Even though it doesn't seem like it to look at the user-visible changes, the framework changes in FP10 were massive, and it's not at all surprising that things like this would crop up. It's just a little unfortunate that the fixes are in no way obvious unless you've been stewing in this stuff for years.

App Dev After CollabSphere 2018

Jul 29, 2018, 10:48 AM

In recent years, MWLUG/CollabSphere has tended to be a good time to get a lay of the land for what IBM - and now HCL - intends for their app dev strategy. Recent Connects weren’t too heavy on announcements of major import for Domino developers, and any news to come out tends to do so in the months leading up to summer.

This year, we’ve had time to digest the implications of the HCL transfer, get a feel for how they intend to handle the product, and generally get a good bead on their app-dev vision. What they’ve said so far this year is clear: LotusScript for old apps on mobile platforms and Node.js for new development (or new developers). As far as XPages, I believe that the most time that it got at the conference was in my session, which was about what to do after XPages.

LotusScript

Though I’ve certainly not hidden how painful the prospect of enhancements to LotusScript is to me, I have to admit that adding a few capabilities for REST data service access makes strategic sense for the platform. Though XPages made a significant mark on Domino app dev, it never pushed aside the classic style, and every move that IBM made for app modernization since then seemed to exist exclusively in the span of the sentence announcing it.

So HCL announced early this year that they planned to port the classic Notes client first to iOS and then later to Android and WebGL+WebAssembly. Adding any kind of Java to this plan - XPages, LS2J, etc. - would present some technical hurdles, and so it makes workload sense to focus on the languages that have runtimes in the C core.

Apps run this way won’t be good, but there’s some logic to the tack of targeting customers for whom “modernization” only really means “we want our same old apps to run offline on new OSes”. Their plan to run on phones also necessitates some more-dramatic changes to the tooling, so it’s possible that they have larger changes in mind - or at least we’ll see a return of the “hide on mobile” checkboxes in Designer.

Node.js

The big HCL push for Node.js seems to me to be a way to get a lot of bang for the buck: by positioning it as the new way to write apps, they’re both (potentially) making Domino more appealing to those not already on the platform and guiding existing developers to a platform for which IBM and HCL are not responsible. Though the domino-db driver is no small technical feat - and it looks like they’ve done a good job making it both fast and native-feeling in Node - it’s a much, much smaller footprint than XPages, which put IBM on the hook for maintaining an entire app-dev stack and UI toolkit with limited outside assistance.

I do think that it’s smart to write a Node.js DB driver - even if it doesn’t bring in an influx of new blood, it provides a legitimate app-dev story and Node is a top-notch platform. The gRPC stack also provides an entryway for future hooks and development without the assumptions of NRPC.

Java

Java development on Domino is in a weird place. Domino 10 doesn’t have anything directly for XPages/OSGi developers, though we’ll get access to DGQF via the Database class. I’ve heard whispers that they’re starting to plan more for Domino 11, but that’s largely conjecture at this point. Certainly, HCL has made it clear that their heart isn’t in it, and honestly I get why. Since XPages has been in essentially maintenance mode since 9.0.1 or earlier, it’s aged itself out of contention for modern app dev. It wouldn’t be impossible to drag it forward to something respectable, but then they’d still have another development environment exclusive to Domino to maintain.

I’m not sure what the best thing to do with the stack is. Though XPages didn’t bring all Domino developers to it, it did bring a significant chunk, and a lot of people have spent upwards of a decade of their life with the toolkit. For my part, I think it makes a lot of sense to move to “normal” Java/Jakarta EE development, which provides the possibility of salvaging Java-side code, though it leaves XSP and SSJS in the lurch. It’s hard to make a good financial case for either significantly upgrading the platform or at least undoing the tight coupling with the Domino server that it accrued over the years, though I’ll admit it’s sort of fun to think about.

Reforming the Blog in Darwino, Part 4

Jul 20, 2018, 6:59 PM

Tags: java darwino
  1. Reforming the Blog in Darwino, Part 1
  2. Cramming Rails Into A Maven Tree
  3. Reforming the Blog in Darwino, Part 2
  4. Reforming the Blog in Darwino, Part 3
  5. Reforming the Blog in Darwino, Part 4

Last time, I went over my switch in tack for how I'm making the new version of my blog, and my overall focus on picking an interesting stack of JEE technologies. In this post, I'm going to start diving into the implementation of the UI, though I think that it will make sense to dedicate two posts to it.

The biggest decision I made with the UI side of this app is that I didn't want to make a client-side JS app. There's a reason they're so ascendant, and I find development with React or Stencil pretty enjoyable, but I wanted to go a different route here for a few reasons:

  • For a blog, a CSJS app is wildly overkill, and, in fact, would require extra work to fulfill one of the basic requirements of a blog, which is being web-crawler friendly.
  • I want to see how svelte I can make the client payload.
  • Skipping a JS framework (and a CSS one) is a great way to brush up on what plain HTML and CSS are capable of nowadays.
  • Unlike a typical Darwino app, my only target is a full-on Java web server, so I'm not held back on the Java side by the capabilities, say, of Dalvik on Android 4.
  • Part of me misses the simplicity of my early PHP days, albeit not the language.

The Java Side

I decided to go with the MVC 1.0 draft spec because it lets me write extremely focused code. Here is the controller for the home page:

package controller;

import javax.inject.Inject;
import javax.mvc.Models;
import javax.mvc.annotation.Controller;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import model.PostRepository;

@Path("/")
@Controller
public class HomeController {
	@Inject
	Models models;
	
	@Inject
	PostRepository posts;
	
	@GET
	public String get() {
		models.put("posts", posts.homeList());
		
		return "home.jsp";
	}
}

Naturally, there's a lot of magic going on behind the scenes - there's tons of heavy lifting going on here by JAX-RS, MVC, CDI, JNoSQL, and Darwino - but that's the point. All the other components are off doing their jobs in their areas, while the code that provides the UI doesn't have to care about the specifics.

Things can get more complicated on the pages that actually have some functionality to them, but the code remains pleasantly focused. Take the handler for deleting posts:

@DELETE
@Path("{postId}")
@RolesAllowed("admin")
public String delete(@PathParam("postId") String postId) {
	Post post = posts.findByPostId(postId).orElseThrow(() -> new IllegalArgumentException("Unable to find post matching ID " + postId));
	posts.deleteById(post.getId());
	return "redirect:posts";
}

This adds another level of magic in the form of javax.annotation.security.RolesAllowed, but it's more of the good kind: even with no knowledge of the underlying frameworks, it's pretty clear what every bit of code is doing here. It's a refreshing bit of that Rails simplicity, just more compile-type-safe and much uglier.

Even beyond the minimal code is the cleanliness that this brings to the structure of the application: other than the img, css, and js paths, all of the routing within the application is handled by JAX-RS and MVC. It's not beholden to the folder structure in the project or to a Domino-style implicit app router.

JSP

JSP has been the prototypical Java HTML language for about 20 years, and it's had a rough upbringing. The early versions committed the PHP/XPages sin of encouraging you to put business logic right on the page, and it even still has the typical Java problem that it's tricky to find advice about using it that uses technologies added since 2005.

Still, when used properly, it can be a nice, clean templating language. Again from the main home page:

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<t:layout>
	<c:forEach items="${posts}" var="post">
		<t:post value="${post}"/>
	</c:forEach>
</t:layout>

For an XPages developer, this is extremely comfortable. It's also very refreshingly elemental: there's no server-side persistence of the page, so everything is "load-time bound" and, with just HTML tags and core JSTL tags, nothing ends up on the page that you don't explicitly put there.
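
The <t:layout> tag above comes from a JSP tag file in WEB-INF/tags. As a minimal sketch (this isn't the blog's actual layout file, just the general shape of one):

<%@tag description="Common page layout" pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
	<body>
		<jsp:doBody/>
	</body>
</html>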

Ozark, the MVC implementation, also supports using JSF "Facelets" for the view portion, but JSP suits the task just fine.

HTML + CSS

It'd been far too long since I last really sat down and looked at what baseline HTML and CSS are like - in particular, I'd watched the rise of CSS Flexbox and Grid from afar, and never gave them a shot. Using components that generate their own HTML and pre-existing CSS frameworks to target with class names is all well and good, but it does leave you a bit disconnected from the fundamentals.

So, for this iteration, I tossed aside the very-nice Bootstrap framework I've been using, dusted off one of my old hand-built ones, and got to translating it into CSS Grid. This cut down on the page size enormously: I had already eschewed Dojo by not using XPages, but this now also meant that I could ditch the core bootstrap.css, jQuery, and any jQuery plugins.

Beyond CSS Grid, have you seen how nice HTML forms are nowadays? Just looking at this post reveals how much is built in in the way of validation and different input types, even before you write a line of JavaScript.

Turbolinks

Having such a trimmed-down UI means that pages already load extremely quickly, but I figured this was also a perfect chance to try out a bit of clever tech from the team at Basecamp: Turbolinks. Turbolinks is a JS file that you bring into your app which then transparently takes over your in-app links to minimize the amount of rendering you have to do. Since the surrounding boilerplate of the app usually doesn't change between requests, it can figure out the "diff" between old and new and just replace the body. It's essentially like partial refreshes without the server knowing anything about it.

It's still technically inefficient to have the server render and transfer surrounding page elements that are just going to be discarded anyway. But, on the other hand, skipping that means that I don't have to write JavaScript handlers myself, use a full CSJS app framework, or keep state on the server side. The server just keeps doing what it does with a fully context-less request and the browser sorts it out. Basecamp's programmers are masters at the targeted deployment of kludges for maximum benefit.


In the next (final?) post in the series, I'll finish up with my "low-JS" experience and other lessons learned from this project.

Reforming the Blog in Darwino, Part 3

Jul 18, 2018, 10:30 AM

Tags: java
  1. Reforming the Blog in Darwino, Part 1
  2. Cramming Rails Into A Maven Tree
  3. Reforming the Blog in Darwino, Part 2
  4. Reforming the Blog in Darwino, Part 3
  5. Reforming the Blog in Darwino, Part 4

A good while back, I created a project structure for reforming my blog here in Darwino, but, as happens with low-priority side projects, it withered on the vine, untouched since then. Beyond just the "cobbler's children" aspect to it, I also lost steam due to a couple technology paths I initially headed down.

The first was basing the UI on Angular, which I've never really enjoyed working with. I'm sure I could have ended up with a decent result with it, but Angular always rubbed me the wrong way. And not just Angular: for a dead-simple UI like this, a full JS UI is just weird overkill.

The second was off in the other direction: I initially tried cramming a Rails app in the tree, which could be made to work, but it introduced so many weird edge cases outside of the problem at hand. That alone isn't the end of the world, but not much of what I'd have to solve to make that work would be transferable elsewhere any time soon, so it'd end up a real time sink.

So, taking what I've learned since and the projects that I've been working on, I've decided to take another swing at it. Before I get into the implementation side, it will be useful to go over the technologies I did choose for the new form.

Java/Jakarta EE

I've recently become kind of enamored with the modern form of the Jakarta EE stack, and so I decided to use this as an opportunity to really dive in to what a blue-ocean small by-the-books Java app looks like nowadays.

JEE got a well-deserved bad rap over the years for its configuration complexity and general impenetrable-ness, but I've been very pleased to find that those tides have largely receded. It's all still there if you want it, but a fresh new app primarily consists of decorating a handful of Java classes with declarative annotations.

JEE consists of a series of individual specs, and building an app involves choosing which ones you want to use, plus (depending on which you choose) picking your app server target.

Tomcat

I originally gave a shot to adding enough OSGi metadata and bundles to target Domino, but decided quickly that it was just not worth it. The HTTP/servlet stack in Domino is just so old that, even if I got everything bound together, I'd still be fighting the platform every step of the way.

The better route was to put it aside and just run a modern Java app server. I went down the list of GlassFish, Payara, WebSphere Liberty (the nearest miss), TomEE, and WildFly, but each one ended up having some problem with either the dependencies I wanted or with their Eclipse integration. I ended up settling on good ol' reliable Tomcat. Tomcat itself isn't actually a JEE server, but it's kind of like a Raspberry Pi: it gives you the baseline for a Java servlet engine, and then you can cobble together your own EE stack on top of it by explicitly bringing in implementations. Though the final .war file is far less svelte this way, I found that this build-your-own method results in the lowest chance of being held back by the platform currently.

As an aside, Sven Hasselbach has been writing a very interesting series on running Jetty on top of the Domino JVM to achieve a similar end, albeit with Spring.

Darwino

For all the same reasons as when I set out on this journey originally, I'm using Darwino for the baseline. This lets me replicate in my existing blog data smoothly while getting the advantages of a superior backing database. I'm not making use of mobile clients or most Darwino services with this, but the baseline is nonetheless a step up, and fits in with a JEE app like a glove.

JNoSQL

I brought in the JNoSQL Darwino driver I wrote a little while ago to handle the model layer. JNoSQL is essentially JPA but reformed for NoSQL access - no cruft, no relational/NoSQL impedance mismatch, and designed to fit with current JEE technologies.

CDI

CDI is one such technology, and it's a very interesting one to work with. The whole "dependency injection" realm is a little fraught and, if my Eclipse UI error reporter is any indication, prone to bizarre errors, but the core concept is good and very useful. I've gotten it into the swing of using it both as the "managed bean" provider for the front end as well as the general service provider glue for the app. It still takes some getting used to, and the learning curve falls prey to a similar problem as when I was learning Maven: something about learning how it works makes you forget what it was like to not know, and so a lot of the answers online assume way more knowledge than a neophyte has.

Bean Validation

I've long been a fan of the Java bean validation API, and it's a clean fit here too: JNoSQL picks up on the presence of Hibernate Validator without configuration beyond the dependency and it just works. No muss, no fuss.
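
As a rough sketch (the entity and fields here are hypothetical, and the JNoSQL mapping annotations are left out), constraints are just annotations on the model class, checked when the entity is persisted:

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Post {
	@NotNull
	@Size(min = 1, max = 120)
	private String title;

	@NotNull
	private String bodyMarkdown;

	// getters and setters...
}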

JAX-RS + MVC Spec

JAX-RS is at this point familiar territory for a lot of Domino developers, but I decided to use it as the underpinnings of the whole UI, in tandem with a draft framework called MVC 1.0. The latter's generic name doesn't really give much detail, but it's essentially a spec that enhances JAX-RS entities with knowledge of HTML templating frameworks, allowing you to write a very clear app structure. It's not a server-state-based framework like JSF, but rather a bit "closer to the metal", where you deal directly with the HTTP method cycle.

As I'll go more into in the "UI" post, it's been surprisingly refreshing to get back to basics in this way - JSF/XPages is often a bit conceptually easier to work with (at first) and client-side JS frameworks have some REST+JSON purity to them, but just "this server-rendered HTML page with no server state is everything you need" feels really good sometimes.

Admittedly, the MVC spec itself is in a weird place. It was originally a candidate for inclusion in Java EE 8, but was dropped in the final runup. It's possible that this will prove to be a kiss of death, but the spec is so small but functional that I don't feel bad about taking the risk of building an app on it.


That about covers the technology stack. When I get around to writing the next post, I'll go into some of the specifics about how I decided to set up the UI, which has been a fun experiment of its own. In the mean time, the active repository is up at:

https://github.com/jesse-gallagher/frostillic.us-Blog/tree/develop/frostillicus-blog

A (Java-Centric) Domino Wish List

Jul 12, 2018, 12:04 PM

Tags: domino java

Seeing the information come out of this week's HCL "Golden Ticket" event has got me thinking about some of my wish-list items for Domino development, mostly in the form of enhancements for existing capabilities and entirely around Java (since that's what I do).

Quality of Life

Javadoc

For some reason, the lotus.domino classes ship without Javadoc or even variable-name information, leading to a trainwreck of anonymous arg0-style parameters in the editor.

Designer has its built-in help, which is also on the web, but that's quite a few steps down. This is table stakes for a Java API and always has been.

Updated p2 Repository

Back in 2014, the XPages team uploaded a clean p2 repository of the XPages artifacts to OpenNTF, corresponding with the 9.0.1 release. This repository saves a ton of hassle when building Tycho-based projects or just setting up an Eclipse workspace. However, it's quite long in the tooth, as there have been several Notes.jar additions not included in there, and, in FP10, a significant upgrade to the undergirding OSGi framework.

I ended up writing a script to generate an updated version, but I don't have the legal ability to publish the results anywhere for easy consumption, meaning it has to be done manually and configured for each build environment. It would be a great convenience if there was an official package (ideally including the Designer plugins as well) and, even better, hosted on OpenNTF so that we could reference it by URL as we do for Eclipse releases (and require users to accept a license first).

Mavenized Repository

The p2 repository is good for Tycho-based projects, but, especially when targeting Domino is only one part of a project, it can be much more convenient to use "normal" Maven projects with maven-bundle-plugin. However, those projects can't use p2 repositories as such. For Darwino's needs, I ended up writing a tool in the (available-for-free) Darwino Studio plugins to Maven-ize a Domino p2 repository, but that hits the same snag as above of requiring manual setup in each instance.

This is another case where my preference would be on the OpenNTF Maven repository (plus Javadoc Jars, naturally).

Extension Library Source

The latest Extension Library release on OpenNTF is from FP9, while the latest on GitHub is from the FP7 era. FP10 shipped with a newer version of indeterminate nature. It'd be good to have this on both of those sites and, like with Javadoc, have source bundles shipped with the product in a way that is picked up automatically by Designer and Eclipse.

Source Bundles for Third-Party Components

The source for the undergirding Equinox stack is available, but it would be best to have, as an adjunct to the updated p2 repositories, the source bundles for the actual versions used so that we don't have to cobble together a platform from Eclipse's repositories.

Open-Source the Rest of the Stack

Having XPages, the Expeditor husk, and the other miscellaneous doodads that make up the proprietary layer as open source with an Apache-compatible license would cover a lot of the above and also be of tremendous use for XPages and non-XPages apps alike that run on or with Domino. I have a hard time imagining that it would lead to a lot of community-driven improvement, but it may do some (I'd have a few words to share with the file-download control, for example), and even just as a static release would be a significant boon.

Domino Connectivity in Eclipse

An idea I've been toying with lately is to make an Eclipse plugin that allows you to add Domino servers to the "Servers" view and control them to some extent. The basics would be to start/stop/restart HTTP, but the stretch goals would be to open a console view, get a list of running modules, integrate with the existing "load bundles from PDE" support, and, ideally, an outright "Run on Server" command for OSGi bundles and NSFs. However, I have so much on my plate that I'm not sure that I'll get to this any time soon unless I get a real itch some weekend.

Longevity

Refreshed JVMs

Feature Pack 8 brought Java 8, a vital step forward. However, since then, Oracle moved to a faster release cycle for Java and the JRE is now at version 10. Domino uses IBM's JVM variant, J9, which they recently moved to the Eclipse foundation as OpenJ9, where it has... sort of been keeping up, I think?

In any event, this increased pace of change has meant that the Java 8 honeymoon is over, and Domino development again requires special consideration when using current tools. I have no idea how complex the integration between Domino's tasks and the underlying JVM is, but my ideal would be to have constant or near-constant parity.

Servlet API 4.x

After the JRE version, the most important foundational element of a Java web app is the servlet API release. The current version is 4.0.1, while Domino supports 2.5 (or 2.4, maybe?). The good news is that the Java/Jakarta EE world seems to be used to lagging versions here, and 2.5 is a minimum version for a lot of current tech in much the same way that Java 6 was until somewhat recently, but there has been quite a bit added in recent years.

Presumably, a reason for the lag is the implied requirements of newer versions, such as WebSockets and HTTP/2 support, that would require heavy modifications to the core Domino HTTP code. Honestly, the more practical route is almost definitely to just use a different JEE server paired with CrossWorlds, some Java wrapper for the GRPC stuff HCL has been talking about, or (best of all) a Darwino app replicating with Domino, but still. WebSphere Liberty is actually really nice, by the way.

Refreshed Equinox

Like with the underlying JVM, the Feature Pack 10 update to a Neon-based OSGi/Equinox framework was a critical shot in the arm for the platform, but it is now also two major versions behind. This is a little less critical, since Equinox brings a bit less to the table for our needs and Neon is "new enough" for now, particularly on the server side, but it'd still be proper to keep pace.

Odds and Ends

Non-OSGi JEE Support

The Equinox framework that Domino uses is quite capable, but there's no pretending that OSGi-targeted development has its share of headaches. Most Java apps just target plain-old .war files and don't impose any particular requirements on the build process. Java development is a much more pleasant experience when you can just toss in any Maven dependency and not have to think about building a target platform for Eclipse or jumping through bundle-resolution hoops. I really like OSGi in theory, but I can't pretend that non-OSGi development isn't much smoother.

Domino technically supports "regular" servlets currently, but, uh, here's a snippet from the current documentation on that:

Hrm.

Full Extension Manager Support for Java

JAVADDIN/DOTS added a lot of EM hooks, but it doesn't cover the full suite of capabilities that a C addin can provide, such as authentication handling. Having this be fully accessible from Java would be useful even when treating Domino just as a data store and not as an app server.


I'm sure I could come up with more, but that's probably good for now. All easy, right?

Another Project: XPages Jakarta EE Support

Jun 3, 2018, 4:40 PM

In my dealings with JNoSQL recently, I’ve been delving more into the world of modern Jakarta EE/Java EE/J2EE development, particularly the magic land of CDI.

The JEE stack tends to be organized as a collection of specs and implementations, many of which are really independent of each other and the underlying platform, making them pretty portable onto any reasonably-recent JVM. Now that Domino is actually on a reasonably-recent JVM, that makes it a workable target! So I decided to create a side project to bring some of JEE to XPages.

XPages has always been “sort of Java EE” - you don’t really have the full stack, and it’s far behind on the components that it does have, but a lot of the concepts are there. Of particular interest are managed beans and expression language.

CDI and Managed Beans

The XPages stack contains what amounts to a primordial version of CDI. Since the release of XPages, JSF improved on the original faces-config.xml declaration method to add annotation-based declarations, and then CDI is something of a codification and expansion of that into the full Java world.

My project uses the Weld reference implementation of CDI to create a CDI context for each XPages app that opts in, allowing it to use annotations on classes to declare beans and properties:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Named;

@ApplicationScoped
@Named // or @Named("applicationGuy")
public class ApplicationGuy {
    public String getFoo() {
        return "hello";
    }
}

These can then be used like normal managed beans in an XPage:

<xp:text value="#{applicationGuy.foo}"/>

The project’s README contains some further examples.

I went with the Java SE implementation of Weld instead of the pre-built servlet or OSGi packages since those are a little too smart for this use: they pick up on the fact that they’re in a JSF environment, but expect newer versions of the servlet spec and JSF.

Expression Language

Since its original release, EL went through a similar standardization process as CDI and is now at version 3.0 and is distinct from JSP and JSF. As anyone who has tried to call a method on a bean in EL has found out, the XPages EL implementation lags pretty far behind, at the JSF 1.0/1.1 level. Since that time, it sprouted parameters and “projection” and is essentially a tiny scripting language now.

My project uses GlassFish’s EL implementation to outright replace the stock EL interpreter for apps making use of it. I added some affordances to IBM’s customized data support, so it’s intended as a drop-in replacement:

<xp:text value="${dataObjectExample.calculateFoo('some arg')}"/>

<xp:text value="#{el:requestGuy.hello()}"/> 

Note the “el:” prefix in the runtime-bound expression: that’s to get around Designer’s validation of runtime EL expressions.

So… Why?

That’s a good question! The first two reasons are “because it’s fun” and “to learn more about JEE”, but there’s also practical value for this sort of thing.

XPages is moribund, and that leaves Domino developers with a few options:

  • Go back to LotusScript. The iPad Notes client makes this a terrifyingly-practical option, but it’s soul death.
  • Go to JavaScript (or another platform). This is another route HCL is pushing, and it’s entirely valid: Node is a great platform with excellent support and momentum.
  • Go to modern Java.

For anyone who has invested a lot of time and brainpower in XPages over the years, that last one is particularly appealing, and projects like this can help you get there. If you have a large XPages code base, as I do with one of my clients, it makes a lot more sense to work on that in such a way that it gradually becomes less XPage-dependent while avoiding the trap of a full rewrite in another language.

Many of us have already done something of this sort: JAX-RS is another JEE standard, and the Wink implementation in the Extension Library, though also aging, accomplishes this same sort of task. Especially if your services don’t reference Wink explicitly and write just to the spec, they are very portable.

That portability - of code and skillset - is critical. Say you have a class like this:

import java.util.stream.Collectors;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/issues")
public class IssuesResource {
    @Inject IssueRepository issueRepository;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response get(@QueryParam("category") String category) {
        return issueRepository.find(category).stream()
            .map(this::doSomething)
            .skip(3)
            .collect(Collectors.collectingAndThen(Collectors.toList(), this::toResponse));
    }

    // ...
}

Which Java platform is that targeting? What’s the data storage mechanism? Who cares? This class certainly doesn’t. That could just as easily be Domino reading from an NSF or (as is actually the case in the example’s source) Tomcat with Darwino.

What’s Next?

Truthfully, maybe not much. Though JEE contains a whole raft of technologies, these two were the ones that scratch my immediate itch. We’ll see, though - the skill portability of erstwhile XPages developers is critically important, and I think that this is another one of the paths that can get us where we need to go.

Compiling and Testing XPages Plugins With Java 9+

Apr 13, 2018, 3:38 PM

Tags: java xpages

Thanks to 9.0.1 FP8, we've been able to use Java 8 on Domino for a while, and FP10 makes that support a bit more official at the OSGi level. However, Java 8 is no longer the latest Java runtime, and so anyone writing XPages plugins and compiling/testing them via Maven will likely run into a situation where the compiling JRE is 9 or above. There are a couple changes in these runtimes that add some wrinkles to the process, so I took up the task of working around the problems I hit, and I created a Git repository to contain the code and tips I used to do so:

https://github.com/OpenNTF/org.openntf.domino.java9compat

The README in the repo contains a couple bits of Maven configuration that can be used to successfully compile XPages plugins in Java 9 or 10, and the included project contains a patch fragment for the com.ibm.notes.java.api plugin that serves up the Notes.jar to work around the fact that CORBA has been removed from the standard JRE.

For my needs, those changes made me able to compile a reasonably-complex XPages application, but there may be other edge cases I haven't hit yet. As I do, I'll add information and code there.

Next Project: ODP Compiler

Mar 5, 2018, 5:32 PM

Tags: xpages java
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

One of the larger thorns in my side with my Domino development lately has been trying to automate builds of on-disk projects into NSFs via Jenkins. In theory, the process is pretty straightforward. It even works sometimes! However, particularly once you add in the necessity to deploy OSGi plugins to Designer first and want to run it from Jenkins, things get extraordinarily flaky: Designer may not launch properly from a behind-the-scenes Jenkins runner, the plugin installation may mysteriously fail, and so forth - and the error reporting is difficult at best.

So it's been on my mind for a good while to find a way to get from an ODP to an NSF without involving Designer, and I decided over the last couple days to really take a swing at it. It's not a small task, though; the process involves a number of difficult steps:

  • Install and activate provided OSGi plugins
  • Create an XPages registry that knows about the XPages libraries installed on the server, including those just contributed
  • Translate XPages and Custom Controls into Java source, with intra-file knowledge of the just-added CCs
  • Create a Java classpath that matches the plug-in dependencies that Designer derives from the dependent XPages Libraries
  • Compile the resultant Java source and any Java classes in the NSF into bytecode
  • Recompose the composite data form of the file data for these elements and many file resources into their DXL ".metadata" files for import
  • Create an NSF and import all of this
  • Compile any LotusScript in source-based libraries from the ODP
  • Uninstall any hot-loaded OSGi plugins

Those steps even leave out some fiddly details, like components defined via .xsp-config files in the NSF or XSP-associated .properties files, not to mention any steps I haven't encountered yet. It's a lot of work!

My first hope was to be able to hook into the process that Designer uses, perhaps grabbing a couple pertinent OSGi plugins and going from there. However, from what I can tell, all the involved plugins are intricately tied into many layers of Designer-the-IDE and so are no small matter to use on their own without also including the entire stack. So that left me to cobble together an equivalent process out of parts.

Fortunately, a couple projects have already provided a solid foundation for this. First and foremost is the XPages Bazaar. This is a project that Philippe Riand created a number of years ago, meant to be a workshop for really experimental components in a form less constrained than the ExtLib became. Since he left IBM, it's sat unmaintained, but I figured it'd be a perfect incubator for this project, so I tossed it up on GitHub, recomposed its Maven structure, and cleaned it up a bit for FP10 use. The reason why it makes such a perfect shell is a pair of its features: an XSP interpreter and an on-the-fly Java compiler. The former hooks into the mysterious guts of the XSP runtime to allow for translation of XSP to Java, and the latter wraps the official Java compiler API with some OSGi knowledge to compile that source into bytecode.

Even starting with this, the first couple steps still required a lot of digging around. I learned how to install and activate OSGi bundles, how XPages Registries work internally, and made some tweaks to work around problems I encountered. I also encountered the joy of a bizarre javac bug to do with annotations in enum constructors, which my target project used.

Once I had the XPages-side components compiled, the next step was to start composing the NSF. The ODP format for XPages elements and other "file resource"-type entities is to put the code in its "normal" form in a file and then a subset of the DXL in a ".metadata" file next to it. The trouble here is that, even for entities where the file data is stored in the note unprocessed, the storage format isn't a strict binary blob of the file data: it's a composite data stream of file header and segment structures. I thought of two main ways I could go about getting these file resources into the NSF: via IBM's NAPI and by building the structures into the DXL files before import. The NAPI has a convenient FileAccess class for this purpose (presumably used by Designer), but my attempts to use it met primarily with server crashes. I'm sure it's possible to go this route, but I'd already "solved" the DXL problem years ago, for ODA's Design API. So, at least for now, I took the tack of writing out the binary structure manually, pouring it into the DXL as Base64, and importing that. It's a little inefficient, but it works.
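
The encoding half of that is the mundane part - it's just standard Base64 over the assembled bytes - while the real work is composing the CD file header and segment structures. As a trivial sketch of only the encoding step (the composite-data assembly is assumed to have happened already, and the file name is just a placeholder):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class DxlFileDataEncoder {
	public static String encodeForDxl(byte[] compositeData) {
		// MIME encoding adds line breaks, which keeps the resulting DXL text manageable
		return Base64.getMimeEncoder().encodeToString(compositeData);
	}

	public static void main(String[] args) throws Exception {
		// Stand-in: real code would first wrap these bytes in CDFILEHEADER/CDFILESEGMENT
		// structures rather than encoding the raw file contents directly
		byte[] raw = Files.readAllBytes(Paths.get("app.properties"));
		System.out.println(encodeForDxl(raw));
	}
}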

Overall, I've made a lot of progress so far, but there's still a lot to be done: not all file types have their data put into the right places, LotusScript isn't properly compiled, Agents don't do anything at all yet, and XPages+CCs aren't actually imported into the NSF. Still, it's in a spot where I'm confident that it can one day work, which is more than I could have said a week ago. If you'd like, browse around the code and pitch in if it's an itch you'd like to scratch as well.

New Small Project: generate-domino-update-site

Jan 31, 2018, 9:28 PM

Tags: java xpages

For a good while now, the Domino Update Site for Build Management has been an essential tool for anyone setting up a local OSGi/Tycho development environment. However, it's really withered on the vine - the latest official release matches stock Domino 9.0.1, while recent fix packs have brought a number of improvements and new classes/methods. Since the binary code is entirely IBM's to distribute or not at their leisure, we can't update the release itself. Today's release of FP10, though, pushed me over the edge to the point where I at least wrote a tool to help the local creation of updated repositories.

The result of my work is generate-domino-update-site, a small CLI and programmatic script that you can point to a Domino installation, a destination directory, and an Eclipse program root to have it create (effectively) an updated version of the update site. The result contains a bit more than the official one did, but that should hopefully not hurt anything. Beyond just copying files into a directory, the script does the dirty work of re-packing unpacked bundles and features, vivifying the Notes.jar wrapper bundles, and creating p2 metadata (hence the Eclipse dependency).

To use the tool, you can clone the repository and run the jar as described in the readme. I may also get around to uploading it as a compiled result to a proper OpenNTF project page.

Lessons From Writing a JNoSQL Driver

Dec 30, 2017, 11:12 AM

The other day, I decided to start up a side project to write an app for my Stars Without Number game in Darwino. Like back when I wrote a forum/raiding app for my WoW guild, I like to use this kind of opportunity to try new technologies and flesh out my skills in existing ones.

One such tech I've had my eye on for a bit is JNoSQL, which is a framework for integrating with NoSQL databases in Java. It's along the lines of Hibernate OGM, but intended to avoid the pitfalls of the relational/NoSQL mismatch that came with trying to adapt JPA directly to NoSQL databases. JNoSQL promised to be much easier to implement for a new database, so I decided to give it a shot.

JNoSQL

JNoSQL is split into two paired components, cleverly named Diana (the driver side) and Artemis (the model/integration side). The task of writing a driver for a new database is pretty well-contained: pick the database type(s) you want to implement (out of key/value, column, document, and graph) and implement about half a dozen interfaces. This is in stark contrast to when I took a swing at writing a Hibernate OGM driver, where the task was significantly more daunting. The final result is only ten Java files, with a chunk of them being utility classes for code organization.

It's a young project - young enough that the best version to run right now is 0.0.4-SNAPSHOT - but it functions well and it's been taken under the wing of the Eclipse Foundation, which builds some confidence.

Implementation

Though the task was small, there were still a couple initial hurdles to getting going.

To begin with, I decided to start with the Couchbase driver - this certainly made the overall task easier, since Couchbase's semantics are very similar to Darwino's, but it also meant that I had to be wary of which parts of the codebase were really about implementing a Diana driver and which were Couchbase-isms. Fortunately, this was much easier than the equivalent work when I adapted the CouchDB Hibernate OGM driver, which was a sprawling codebase by comparison.

More significantly, though, it's always tough coming in to modify a codebase written by a single person or small team and learning as you go. The structure of the code is clean, but not quite my normal style (in part because Domino kept me from diving into Java 8 streams for so long), and I also had to ramp up quickly on the internal concepts of Diana. Fortunately, this was mostly easy, since the document-DB driver scaffolding is purpose-built, the hooks are straightforward and the query semantics were extremely easy to adapt. The largest impediment was getting used to the use of the term "Document", which internally refers to a key/value pair, while "DocumentEntity" is closer to the expected meaning.
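
To make that distinction concrete, assembling an entity by hand on the Diana side looks roughly like this - the collection and field names are arbitrary, and the package and method names are as I recall them from the 0.0.x era, so treat the specifics as approximate:

import org.jnosql.diana.api.document.Document;
import org.jnosql.diana.api.document.DocumentEntity;

public class TerminologyExample {
	public static DocumentEntity buildCharacter() {
		// A "DocumentEntity" is the whole stored document...
		DocumentEntity entity = DocumentEntity.of("Character");
		// ...while each "Document" is a single named key/value pair within it
		entity.add(Document.of("name", "Mira"));
		entity.add(Document.of("characterClass", "Expert"));
		return entity;
	}
}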

Like the core implementation, the test suite I adapted from Couchbase was also pleasantly svelte, covering the bases without being an onerous nightmare to convert. Indeed, most of the code I added to it was the Darwino app scaffolding just for the test runtime.

Putting It Into Practice

Once the driver was written, I was hit by a bit of a personal curveball when I went to implement some actual data models. The model side, Artemis, is heavily wrapped together with CDI, which is a Java EE thing that, as I gather, handles managed beans, separation of implementation, and variable injection. This is a pretty normal thing for Java EE developers, but XPages's "don't call it Java EE" environment didn't introduce me to this aspect. As such, the fact that the documentation just kind of casually tossed around CDI terms and annotations threw me for a bit of a loop trying to determine what was required and what was just an idiom.

I eventually determined that I could use the reference implementation, Weld, without necessarily going whole-hog on Java-EE-everything. I'm a bit wary of what this bodes for whether I'll be able to use JNoSQL in Darwino on mobile devices, but I'll cross that bridge when I come to it. Once I got a bit of a handle on what Weld is and how to use it in unit tests (hint: make sure you have beans.xml files!), I was able to start writing my model objects and testing them.
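
For anyone hitting the same wall, the bootstrapping that ended up working in plain unit tests looks roughly like this - a minimal sketch against Weld's SE API, where Greeter is a made-up stand-in bean and a META-INF/beans.xml is assumed to be on the test classpath so that discovery finds it:

import javax.enterprise.context.ApplicationScoped;

import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;

public class WeldSmokeTest {
	// A trivial stand-in bean; any class picked up by bean discovery would do
	@ApplicationScoped
	public static class Greeter {
		public String greet() { return "hello from CDI"; }
	}

	public static void main(String[] args) {
		// Boot a standalone CDI container; without a beans.xml on the classpath,
		// discovery comes up empty and the select() below fails
		try(WeldContainer container = new Weld().initialize()) {
			Greeter greeter = container.select(Greeter.class).get();
			System.out.println(greeter.greet());
		}
	}
}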

Doing It Again

The fact that the bulk of my implementation work ended up being on the app side with CDI goes to show that the Diana driver model really shines. It got me thinking about how difficult it would be in the future, say, to write a driver for Domino. There'd be some hurdles - Domino's lack of nested objects and antiquated querying mechanisms would need replacing - but the core task wouldn't be too bad. I don't know if I'd have a need for it, but it's nice to keep in mind as a potential future small project.

All in all, I'm optimistic about the use of this. I'd love for Darwino to integrate as smoothly as possible into whatever standard environments it can, and this is one more step in that direction. I'll know as my side app takes shape how much this ingrains itself into my actual work.

First Steps to Code Coverage Analysis in Domino Plugins

Nov 9, 2017, 8:53 AM

Tags: maven domino java

I'm always interested in getting the computer to tell me how to tell it what to do more successfully, and, to further that pursuit, I've started taking an interest in code coverage.

If you're not familiar with the term, "code coverage" refers to reporting on which lines of code were actually executed during runtime, most commonly in association with unit tests. Eclipse (and presumably other IDEs) has support for this, and I've decided to give it a shot.

Since I'm starting this out in the context of Domino plugins, there are more wrinkles than in most tutorials. Namely, the test suites I've written run exclusively through Maven instead of the Eclipse UI due to all the Notes environment setup, so I can't just use the normal UI tools to gather the data. Fortunately, Eclipse's EclEmma will work just fine with the output from a Maven project, as long as you configure it properly. I looked around for a while to find the right combination of tools to use, but it ended up being fairly simple to configure basic output that can be consumed in Eclipse's Coverage view.

There are two main additions. First, add the jacoco-maven-plugin to your root project's project.build.plugins block:

<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.7.8</version>
	<executions>
		<execution>
			<goals>
				<goal>prepare-agent</goal>
			</goals>
		</execution>
	</executions>
</plugin>

In normal cases, that would suffice. However, since the test configuration I have for Notes overrides the argLine property of the Tycho test runner, there's another step - add the tycho.testArgLine property manually into those blocks, such as in the Windows profile:

<profile>
	<activation>
		<os>
			<family>Windows</family>
		</os>
		<property>
			<name>notes-program</name>
		</property>
	</activation>

	<build>
		<plugins>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-surefire-plugin</artifactId>
				<version>${tycho-version}</version>
 
				<configuration>
					<skip>false</skip>
 
					<argLine>${tycho.testArgLine} -Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
					<environmentVariables>
						<PATH>${notes-program}${path.separator}${env.PATH}</PATH>
					</environmentVariables>
				</configuration>
			</plugin>
		</plugins>
	</build>
</profile>

Once that's configured, running the test suite via Maven will create a new file in the target folder of the test plugin: jacoco.exec. This file can then be consumed in Eclipse by opening the "Coverage" view:

Eclipse's Show View window

In that view, right click and choose "Import Session..." and point to the data file. Click "Next" and check the projects+source folders from your workspace you're interested in analyzing. When you click "Finish", it'll do two things. First, it'll fill the Coverage view with statistics from your run:

Code Coverage stats

(We have a lot of work to do fleshing out our test suites for this one)

Secondly, it'll start highlighting your code to show you what code is executed, which branches are only partially covered, and which lines are skipped entirely. For example (ignore the sickly color scheme - I need to work on that):

Code Coverage example

This shows how several of the if branches are only tested in one direction, while the "Faces" block is skipped entirely. That also shows some of the trouble with testing XPages-run code: the Tycho environment can't reproduce the XPages environment fully, so some branches aren't testable in that way. I haven't looked into the possibility of gathering similar data from JUnit for XPages, though perhaps that's possible.

For now, though, this will have to do. And, like with these other "code improvement" techniques I've integrated lately, there's a lot of potential tedium - juggling when to write a test to cover some code that will obviously always work just to improve the highlighting vs. just focusing on the low-hanging fruit - but I expect that it will be a nice addition to my workflow over time.

That Java Thing, Part 17: My Current XPages Plug-in Dev Environment

Feb 26, 2017, 11:23 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment

It's been a while since I started this series on Java development, but I've been meaning for a bit now to crack it back open to discuss my current development setup for plug-ins, since it's changed a bit.

The biggest change is that, thanks to Serdar's work on the latest XPages SDK release, I now have Domino running plug-ins from my OS X Eclipse workspace. Previously, I switched between either running on the Mac and doing manual builds or slumming it in Eclipse in Windows. Having just the main Eclipse environment on the Mac is a surprising boost in developer happiness.

The other main change I've made is to rationalize my target platform configuration a bit. In the early parts of this series, I talked about adding the Update Site for Build Management to the active Target Platform and going from there. I still basically do this, but I'm a little more deliberate about it now. Instead of adding to the running platform, I now tend to create another platform just to avoid the temptation to use plug-ins that are from the surrounding modern Eclipse environment (this only really applies in my workspaces where I don't also have actual-Eclipse plug-in projects).

The fullest form of this occurs in one of my projects that has a private-only repo, which allows me to stash the artifacts I can't distribute publicly. In that case, I have a number of library dependencies beyond just the core XPages site, and I took the approach of writing a target platform definition file and storing it in the root project, with relative references to the packaged dependencies. With this route, I or another developer can just open the platform file and set it as the target platform - that will tell Eclipse about everything it needs. To do this, I right-clicked on the project, chose "New" → "Other..." and then "Target Definition" under "Plug-in Development":

Target Definition

Within that file, I used Eclipse variable references to point to the packaged dependencies. In this repo, there is a folder named "osgi-deps" next to the root Maven project, so I wanted to tell Eclipse to start at the root project, go up one level, and then delve down into there for each folder. I added "directory" type entries for each one:

Target Definition Entries

The reference syntax is ${workspace_loc:some-project-name}../osgi-deps/Whatever. workspace_loc resolves the absolute filesystem path of the named project within the workspace - since I don't know where the workspace will be, but I DO know the name of the project, this gets me a useful starting point. Each of those entries points to the root of a p2-format update site for the project. This setup will tell Eclipse everything it needs.

Unfortunately, this is a spot where Maven (or, more specifically, Tycho) adds a couple caveats: not only does Tycho not allow the use of "directory" type entries in a target platform file like this (meaning it can't be simply re-used), but it also expects repositories it points to to have p2 metadata and not just "plugins" and "features" folders or even a site.xml. So there's a bit of conversion involved. The good news is that Eclipse comes with a tool that will upgrade old-style update sites to p2 in-place; the bad news is that it's completely non-obvious. I have a script that I run to convert each new release of the Extension Library to this format, and I adapt it for each dependency I add:

java -jar
	/Applications/Eclipse/Eclipse.app/Contents/Eclipse/plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
	-application org.eclipse.equinox.p2.publisher.UpdateSitePublisher
	-metadataRepository file:///full/path/to/osgi-deps/ExtLib
	-artifactRepository file:///full/path/to/osgi-deps/ExtLib
	-source /full/path/to/osgi-deps/ExtLib/
	-compress -publishArtifacts

Running this for each directory will create the artifacts.jar and content.jar files Tycho needs to read the directories as repositories. The next step is to add these repositories to the root project pom so they can be resolved at build time. To start with, I create a <properties> entry in the pom to contain the base path for each folder:

<osgi-deps-path>${project.baseUri}../../../osgi-deps</osgi-deps-path>

There may be a better way to do this, but the extra "../.." in there is because this property is re-resolved for each project, and so "project.baseUri" becomes relative to each plugin, not the root project. Following the sort of best practice approach to Tycho layouts, the sub-modules in this project are in "bundles", "features", "releng", and "tests" folders, so the path needs to hop up an extra layer. With that, I add <repositories> entries for each in the same root pom:

<repositories>
    <repository>
        <id>notes</id>
        <layout>p2</layout>
        <url>${osgi-deps-path}/XPages</url>
    </repository>
    <repository>
        <id>oda</id>
        <layout>p2</layout>
        <url>${osgi-deps-path}/ODA</url>
    </repository>
    <repository>
        <id>extlib</id>
        <layout>p2</layout>
        <url>${osgi-deps-path}/ExtLib</url>
    </repository>
	<repository>
		<id>junit-xsp</id>
		<layout>p2</layout>
		<url>${osgi-deps-path}/org.openntf.junit.xsp.updatesite</url>
	</repository>
	<repository>
		<id>bazaar</id>
		<layout>p2</layout>
		<url>${osgi-deps-path}/XPagesBazaar</url>
	</repository>
	<repository>
		<id>eclipse-platform</id>
		<url>http://download.eclipse.org/releases/neon/</url>
		<layout>p2</layout>
	</repository>
</repositories>

The last entry is only needed if you have extra build-time dependencies to resolve - I use it to resolve JUnit 4.x, which for Eclipse I just tossed unstructured into a "plugins" folder in the "Misc" folder, without p2 metadata.

Though parts of this are annoyingly fiddly, it falls under the category of "worth it in the end" - after some initial trial and error, my target platform is more consistent and easier to share among multiple developers and automated build servers.

Code Safety and Pedantry

Jun 3, 2016, 10:23 AM

Tags: java

Lately, I've been musing a lot on the topic of code "correctness" - that is, beyond the normal case of wanting code to do what I intended, and further into the realm of sweating even extremely minuscule details. A lot of this is due to my continued watching of the evolution of Apple's Swift language (I highly recommend following Erica Sadun's blog for this). Swift is very much in the camp of "make sure all your 'i's are dotted and 't's crossed" languages, as opposed to more fast-and-loose languages like JavaScript or Ruby.

I've gone back and forth on the overarching concepts from time to time. I've long been a big Ruby fan, and a lot of that is because of a general feeling that, if you let go of a lot of the "strict old aunt of a compiler" restrictions, you gain a tremendous amount of expressiveness and productivity with few real-world problems. On the other hand, being immersed in Java all the time has shifted my brain to appreciating the benefits of stronger compile-time checks (at least on paper). Overall, I'm more on the latter side than the former now, double-edged sword though it is. This is why I've been diving into things like aggressive null analysis in my code. Even when it seems like it's being a pedant, there are certain classes of bugs that it finds that I wouldn't even normally think of on the fly. For example, the null checker flags this as being a potential NPE:

if(this.foo != null) {
	this.foo.doSomething();
}

My first reaction upon seeing that was along the lines of "you're full of crap, Eclipse", but then I noticed the small path a bug could take to creep in: multithreading. If I'm in a situation where the object containing foo is used across threads, there's a possibility where Thread A would evaluate this.foo != null to true and start to step into the block. Then, Thread B would get its turn on the processor and set this.foo to null in another method. Thread A would then pick up and try to call doSomething() on the newly-minted null. So I've swallowed my pride and started writing safer code like:

SomeObject localFoo = this.foo;
if(localFoo != null) {
	localFoo.doSomething();
}

What I've always admired about Swift is that it takes these sorts of lessons to heart and adapts the syntax to suit. My interest in code-correctness pedantry in Java has led me to write out verbose abominations like this:

private final @NotNull String foo;

Three of the conceptual tokens there are purely to say things that are best practices to start with: I don't want this property accessible outside the class, I want to make sure it's assigned during construction and not changed thereafter, and I want to ensure it's not null. The Swift variant is:

let foo: String

Same thing, half the typing. And, as a bonus, since nullability checking is built in to the language and not a by-convention thing like the null annotations in Java, I can be sure that the rules will be applied. That sort of thing is the dream! But, since Java is the best language for the work I'm doing for now, the important thing is that it at least suits, verbose or not. In a lot of languages, it gets much more difficult to have this sort of assurance.

So, hassle as it is, I suggest that other Domino developers, on their paths through Java, consider picking up the same habits. For every time you run into something like Java complaining that it can't convert a List<String> into a List<Object>, diving fully into null checks and immutability will save you a late-night crash report and angry user. As you develop more in Java, give it a try.

The Cleansing Flame of Null Analysis

May 21, 2016, 10:18 AM

Tags: java maven
  1. The Cleansing Flame of Null Analysis
  2. Quick Tip: JDK Null Annotations for Eclipse
  3. The Joyful Utility of Optionals in Java

Though most of my work lately has been on sprawling, platform-level stuff or other large existing codebases, part of it has involved a new small app. I decided to take this opportunity to dive more aggressively than previously into automated null analysis and other potential-bugs tools.

What I mean by "null analysis" is letting the IDE or compiler try to help you avoid NullPointerExceptions. Though there are plenty of other programming mistakes you could still make, these are among the most common, and so a little extra work up front to avoid them should pay dividends. Eclipse has some handy options in its Java → Compiler → Errors/Warnings preferences to assist with this:

The first option will pick up on some pretty basic instances, such as:

Object foo = null;
System.out.println(foo.hashCode());

Since this is clearly going to always cause an NPE, Eclipse is able to point this out as an error. The next level gets a little more nebulous: "potential" null pointer access. This crops up when Eclipse can't reliably determine whether a value will be null, either because there is no way to know at compile time (say, database access) or because the compiler's tooling is too limited. Here's a contrived example:

Object foo = Math.random() > 0.5 ? new Object() : null;
System.out.println(foo.hashCode());

This situation is clearly untenable, but there are other situations where you as a programmer can be very confident that the value will not be null (say, if you swap out the > 0.5 for >= 0.0), but the compiler doesn't know that. That's why it often makes sense to leave that as a warning instead of an error.

That's all stuff I've done before, but now I've decided to dive into annotation-based null analysis as well. Unfortunately, in stock Java, this is something of a hot mess (that list even leaves out Eclipse's home-grown version). Since Java didn't grow up with this sort of capability, it's been shoehorned in by various parties over the years. There are other tools to assist you in Java 8, but, unfortunately, I can only target 7 as the highest. For now, I've settled on the "sort-of standard" javax.validation.constraints package. It wasn't really intended for this specific purpose, but it's flexible enough to suit and can be used in Eclipse and FindBugs (though I have my reservations about the choice).

In Eclipse, this type of analysis can be enabled by checking "Enable annotation-based null analysis" below the other options and, unless you're using Eclipse's known annotations, adjusting the "Configure" options next to "Use default annotations for null specifications":

In any event, regardless of the choice of tooling, the "this shouldn't be null" annotations work the same way: you use them to decorate things that you either require not be null when provided to you (method parameters) or you promise to not be null when providing to others (method return values). For example:

public @NotNull Object doSomething(@NotNull Object otherObject) {
	return otherObject.toString();
}

This highlights three things, two good and one bad:

  • Good: The @NotNull in the method parameter means that, as long as the calling code is also checked for null use, the method can be confident that there won't be a NullPointerException when calling a method on otherObject.
  • Good: The @NotNull on the return value means that other code calling this method can be confident that they will not get a null value from it, and so can skip extra null checks.
  • Bad: Eclipse flags otherObject.toString() as a potential problem because it doesn't know for sure that Object#toString doesn't return null, because it has no nullability annotations. As programmers (or as a compiled-code analysis tool), we can be fairly confident that it will be non-null because any object returning null for that is essentially broken on its own.

That last one is a common problem when adopting annotation-based null analysis, at least in Eclipse (I hear it may be better in IntelliJ): its logic doesn't go very deep. If everything is gussied up with these annotations, you're clear - but as soon as you step outside of the project you're working on, you have to add in likely-unnecessary checks. Fortunately, these checks don't realistically hurt (a null check at runtime in a normal app is negligible performance-wise), but they can grate to have to add in.
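
In practice, that means conceding guards like this one - extending the method above, where the analysis has no way to know that Object#toString behaves itself:

public @NotNull Object doSomething(@NotNull Object otherObject) {
	String result = otherObject.toString();
	if(result == null) {
		// Almost certainly dead code, but it satisfies the annotation-based analysis
		result = "";
	}
	return result;
}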

Glutton for punishment that I am, I decided to go a step further and enable FindBugs processing as an integral step of my build. Though FindBugs can be very picky about the types of things it complains about, it is blessedly more thorough in its analysis than Eclipse, so you generally end up conceding that it is correct when it yells at you. Since the project is Maven-based, I added the check in the project's pom file:

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>findbugs-maven-plugin</artifactId>
	<version>3.0.3</version>
	<configuration>
		<includeTests>true</includeTests>
	</configuration>
	<executions>
		<execution>
			<phase>compile</phase>
			<goals>
				<goal>check</goal>
			</goals>
		</execution>
		<execution>
			<id>findbugs-test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>check</goal>
			</goals>
		</execution>
	</executions>
</plugin>

For most uses, that's all that's required. Now, when the project is compiled, FindBugs will give it a once-over and halt the build if it finds anything it doesn't like. This can be tweaked a great deal - for example, changing the checks to run or the severity of the problem needed to fail the build - but the defaults will likely suit.

Adding these extra checks involves a lot of plusses and minuses. The big minus is that you may end up spending a lot of time "fixing" bugs that don't really exist, time that you could instead spend actually writing your application (and writing new bugs that the tools won't find anyway). There's really nothing to be gained by carefully explaining to Eclipse for the hundredth time that toString always returns non-null.

Still, particularly when tested out in a small, low-surface-area app, this can be a good practice to learn and refine. Eventually, a move to Java 8 will help this more, and it certainly doesn't hurt to add in nullability annotations in the mean time. Overall, I think having the tooling help you avoid a whole suite of common "brain fart" bugs like this is worthwhile.

That Java Thing, Part 16: Maven Fallout

Feb 23, 2016, 2:33 PM

Tags: java maven
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment

So, after the last post's large task of converting to Maven, this step is mostly about picking up the pieces and expanding on some of the concepts. We'll start with M2Eclipse, usually rendered as just "m2e".

m2e

m2e is the set of plugins that acts as Eclipse's interface to Maven. It more-or-less replaces the earlier maven-eclipse-plugin, though you will likely still see references to that around. Since Eclipse doesn't have any inherent knowledge of how Maven works, m2e has the complicated task of reading your projects' pom.xml files and adapting them to Eclipse's internal configuration. So, for example, in our projects it saw the presence of Tycho and determined that they should be imported as OSGi projects. In other cases, m2e may pick up the presence of things like Android plugins to trigger the use of the Android development tools.

Though it tries mightily, m2e is the source of a lot of the consternation that can come with a switch to Maven-based development. Because most Maven plugins don't have any inherent allowances for working in an Eclipse environment, adapters have to be written for each one in order for them to work with m2e - this is what the dialog yesterday installing the Tycho adapters was about. In some cases, these don't exist and you have to tell m2e to ignore the plugin; in other cases, the adapters DO exist, but are flawed in some way. Most of the time, things go alright, but there are enough edge cases that it can be irritating.

For this kind of task, m2e is pretty unobtrusive, but it's important to know it's there.

Updating the .gitignore

One side effect of m2e's behavior is that it's no longer a good idea to keep Eclipse's project configuration files in the Git repository. Removing them is not required, but it can avoid a number of annoying problems when dealing with multi-person Maven projects. To start with, open the .gitignore file from the root of your local Git repository (you can get to this easily using Eclipse's Git Repositories view, in the "Working Directory" part of the repo). Add some lines at the end to ignore .project and .classpath, so your whole file should now look like:

._*
Thumbs.db
.DS_Store

*.class

# Mobile Tools for Java (J2ME)
.mtj.tmp/

# Package Files #
#*.jar
*.war
*.ear

# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*

# Eclipse project files
.project
.classpath

Depending on how your (hypothetical) team wants to work, it may also make sense to ignore the .settings/ directory, which stores some additional Eclipse project information. However, some of that information may be useful to share - for example, on-save code-cleaning behavior that isn't readily expressed in Maven.

Due to the way Git works, just adding the files to the .gitignore won't remove them from the repository: instead, they'll just no longer show up in the list for new changes. In order to also remove them from the repository without deleting them from the filesystem, go to the "Navigator" pane in Eclipse (if it doesn't show up currently, you can add it via Window → Show View → Navigator), find each .project and .classpath file in the four projects (some will only have the former), right-click, and choose Team → Advanced → Untrack:

Now, commit the changes - though the files remain on the filesystem, they should show up as deleted in the