XPages Jakarta EE Support 2.2.0

Jan 11, 2022, 4:19 PM

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?

I've just released version 2.2.0 of the XPages Jakarta EE project, and this contains some fun additions.

Part of what makes this project satisfying to work on is the fact that it has a clear maximum scope: there are only so many Jakarta EE and MicroProfile specifications. Moreover, their delineated nature makes for satisfying progress "chunks": setting aside any later tweaks and improvements, each new spec is slotted into place in a gratifying way.

For this release, I focused on adding a number of capabilities from MicroProfile. MicroProfile is an interesting beast. Its initial and overall goal is to create a toolkit geared towards writing microservices, and it does this by taking a subset of Jakarta EE specs and then adding on its own new capabilities. Now, personally, I don't give two hoots about microservices, but the technologies added to MicroProfile aren't really specific to those needs, and most of them really just fill in gaps in the JEE lineup in ways that are just as useful to bloated monoliths as they are to microservices.

From the list, I added in five new ones, in addition to the already-present OpenAPI generator. Each one is extremely powerful and deserves a bit of discussion, I feel.

Config

The MicroProfile Config spec is a way to externalize your app's configuration and then inject configuration values via CDI. For example:

@ApplicationScoped
public class ConfigExample {
    @Inject
    @ConfigProperty(name="java.version")
    private String javaVersion;
    
    @Inject
    @ConfigProperty(name="xsp.library.depends")
    private String xspDepends;
    
    /* use the above */
}

When this bean is used, the javaVersion value will be populated with the java.version system property and xspDepends will get the list of libraries used by the app from Xsp Properties. The latter also shows the pluggable nature of the spec: as part of adding it to the project, I added a custom configuration source to read from the Xsp Properties of the app, as well as one to read from notes.ini. One could in theory (and I absolutely will) write a custom source to read configuration from a view or other Notes-type source.
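
To illustrate the SPI shape, here's a minimal sketch of such a source. This is not the project's actual implementation: the class name is arbitrary, the getCurrentSession() helper is hypothetical, and how you acquire a Session depends on the runtime context.

package example.config;

import java.util.Collections;
import java.util.Map;
import java.util.Set;

import org.eclipse.microprofile.config.spi.ConfigSource;

import lotus.domino.NotesException;
import lotus.domino.Session;

public class NotesIniConfigSource implements ConfigSource {
    @Override
    public String getName() {
        return "notes.ini";
    }

    @Override
    public int getOrdinal() {
        // Sits between the default properties-file ordinal (100) and system properties (400)
        return 150;
    }

    @Override
    public String getValue(String propertyName) {
        try {
            // getCurrentSession() is a hypothetical helper; acquiring a Session
            // is environment-specific and elided here
            Session session = getCurrentSession();
            String value = session.getEnvironmentString(propertyName, true);
            return value == null || value.isEmpty() ? null : value;
        } catch (NotesException e) {
            return null;
        }
    }

    @Override
    public Set<String> getPropertyNames() {
        // notes.ini can't be cheaply enumerated, so don't advertise any names
        return Collections.emptySet();
    }

    @Override
    public Map<String, String> getProperties() {
        return Collections.emptyMap();
    }

    private Session getCurrentSession() {
        throw new UnsupportedOperationException("left as an exercise");
    }
}

A source like this gets picked up when its class name is registered in a META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource file.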

Rest Client

One of the ways that Domino has long been deficient has been in accessing remote REST services. There's some stuff in the ExtLib, I think, and then there are HTTP primitives in LotusScript, but they don't really do much work for you. Java on its own has URLConnection and friends, and that works, but it's basically as low level as LotusScript.

What the Rest Client spec does is build on familiar Jakarta REST annotations (@Path, @GET, etc.) to allow you to declare how your service works and then access it like a normal Java object. For example:

@ApplicationScoped
public class RestClientExample {
    public static class JsonExampleObject {
        private String foo;
        
        public String getFoo() {
            return foo;
        }
        public void setFoo(String foo) {
            this.foo = foo;
        }
    }
    
    @Path("someservice")
    public interface JsonExampleService {
        @GET
        @Produces(MediaType.APPLICATION_JSON)
        JsonExampleObject get(@QueryParam("foo") String foo);
    }
    
    public Object get() {
        URI serviceUri = URI.create("http://example.com/api/v1/");
        JsonExampleService service = RestClientBuilder.newBuilder()
            .baseUri(serviceUri)
            .build(JsonExampleService.class);
        JsonExampleObject responseObj = service.get("some value");
        Map<String, Object> result = new LinkedHashMap<>();
        result.put("called", serviceUri);
        result.put("response", responseObj);
        return result;
    }
}

Here, I've defined an imaginary remote resource available at a URL like "http://example.com/api/v1/someservice?foo=some%20value" and created an interface with REST annotations to define its expected behavior. The MP Rest Client takes it from there: it converts provided arguments to the expected data types (query parameters, body as JSON/XML, etc.), makes the HTTP call, parses the response into the expected type, and returns it to you, while you get to use it like any old Java object.

It's extremely convenient.

Fault Tolerance

The Fault Tolerance spec allows you to add annotations to methods to handle and describe failure situations. That's pretty vague, but an example may help:

@ApplicationScoped
public class FaultToleranceBean {
    @Retry(maxRetries = 2)
    @Fallback(fallbackMethod = "getFailingFallback")
    public String getFailing() {
        throw new RuntimeException("this is expected to fail");
    }
    
    private String getFailingFallback() {
        return "I am the fallback response.";
    }
    
    @Timeout(value=5, unit=ChronoUnit.MILLIS)
    public String getTimeout() throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(10);
        return "I should have stopped.";
    }
    
    @CircuitBreaker(delay=60000, requestVolumeThreshold=2)
    public String getCircuitBreaker() {
        throw new RuntimeException("I am a circuit-breaking failure - I should stop after two attempts");
    }
}

There are three capabilities in use here:

  • @Retry and @Fallback allow you to specify that a method that throws an exception should be automatically re-tried X number of times and then, if it still fails, the caller should instead call a different method. This could make sense when calling a remote service that may be intermittently unavailable.
  • @Timeout allows you to specify a maximum amount of time that a method should be allowed to run. If execution exceeds that amount, the caller receives a TimeoutException rather than waiting indefinitely. This could make sense when performing a task that normally takes a short amount of time, but has a known failure state where it stalls - again, calling a remote service is a natural case for this.
  • @CircuitBreaker allows you to put a cap on the number of times per X milliseconds that a method is called when it fails. This could be useful for a situation where a method might fail for a user-generated reason (say, a user attempted to modify a document they can't edit) but also might fail for a systemic reason (say, a DB is corrupt) and where repeated attempts to perform the task might be damaging or otherwise very undesirable.

The cool thing about these annotations is the way they make use of CDI's capabilities. If you @Inject this bean into another class, you'll simply call bean.getFailing(), etc., and the stack will handle actually enforcing the retry and fallback behavior for you. You don't have to write any code to handle these checks beyond the annotations.
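
To make that concrete, a consuming bean might look like this minimal sketch (the class and method names are arbitrary):

@ApplicationScoped
public class FaultToleranceConsumer {
    @Inject
    private FaultToleranceBean bean;

    public String fetchValue() {
        // Reads as a plain method call, but the CDI interceptor transparently
        // retries twice and then routes to the fallback method
        return bean.getFailing(); // "I am the fallback response."
    }
}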

Metrics

The Metrics API allows you to annotate your objects to tell the runtime to track statistics about the call, such as invocation count and execution time. For example:

@GET
@SimplyTimed
public Response hello() {
    /* Perform the work */
}

Once the method is marked as @SimplyTimed, you can retrieve statistics at the /xsp/app/metrics endpoint in your NSF, which will get you a bunch of information, including lines like this:

# TYPE application_rest_Sample_hello_total counter
application_rest_Sample_hello_total 2.0
# TYPE application_rest_Sample_hello_elapsedTime_seconds gauge
application_rest_Sample_hello_elapsedTime_seconds 0.0025678

That's OpenMetrics format, which means you could readily consume this data in visualizer tools.

Health

And speaking of data for visualizers, that brings us to the last MP spec I added in this version: Health. This API allows you to write classes that will be used to query the health of your application: whether individual components are up or down, whether the app has started and is ready to receive requests, and any custom attributes you want to define. Now, admittedly, an app on Domino will usually either be "100% up" or "catastrophically down", so making use of this spec will take a little finesse. Still, it's not too hard to envision using this to emit some dashboard-type information. For example:

@ApplicationScoped
@Liveness
public class PassingHealthCheck implements HealthCheck {
    @Override
    public HealthCheckResponse call() {
        HealthCheckResponseBuilder response = HealthCheckResponse.named("I am the liveliness check");
        try {
            Database database = NotesContext.getCurrent().getCurrentDatabase();
            NoteCollection notes = database.createNoteCollection(true);
            notes.buildCollection();
            return response
                .status(true)
                .withData("noteCount", notes.getCount())
                .build();
        } catch(NotesException e) {
            return response
                .status(false)
                .withData("exception", e.text)
                .build();
        }
    }
}

While "count of notes in an NSF" isn't usually too useful, you can imagine replacing that with something more app-specific: open support tickets, pending vacation requests, or the like. You could also combine this with the Rest Client spec to make a coordinating NSF with no business logic that makes calls to your other, more-likely-to-break apps to check their health and report it here.

Once you write these checks, the runtime will automatically pick up on them and make them available at /xsp/app/health and some sub-paths:

{
    "status": "DOWN",
    "checks": [
        {
            "name": "I am the liveliness check",
            "status": "UP",
            "data": {
                "noteCount": 63
            }
        },
        {
            "name": "I am a failing readiness check",
            "status": "DOWN"
        },
        {
            "name": "started up fine",
            "status": "UP"
        }
    ]
}

Other Improvements

Beyond those major additions, I made some improvements to clean some aspects up and refine some details (such as making REST services emit stack traces as JSON in the response instead of printing to the console).

Feature-wise, the main addition of note is support for the @RolesAllowed annotation on REST services, which allows you to restrict access by Notes-ACL-type name:

@GET
@RolesAllowed({ "*/O=SomeOrg", "LocalDomainAdmins", "[Admin]" })
public Object get() {
    // ...
}

When a user doesn't match one of those names-list entries, they're given a 401 response. You can also use the pseudo-name "login" to require that the user be at least logged in and not Anonymous.
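
In its simplest form, that looks like this (the method name is arbitrary):

@GET
@RolesAllowed("login")
public Object getAuthenticatedOnly() {
    // Any authenticated user passes; Anonymous gets the 401
    // ...
}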

What's Next?

Well, I'm not sure. All of the above have immediate uses in my work, so I plan to get rid of some of our old workarounds for this sort of thing and bring in the new abilities.

Beyond that, there are two shipped MicroProfile specs remaining: OpenTracing (which I admittedly haven't looked into yet, but may prove useful) and JWT Propagation (which would only make sense when paired with a whole JWT setup).

There's also MP GraphQL, which seems to be like MVC in Jakarta EE, where it's a solid spec but not part of the official standard yet. I may take a swing at it just because it may be easy to add, though for my client needs we already have a more-dynamic GraphQL implementation.

Back on the Jakarta side, the spec list contains quite a few that aren't in there, though a lot of those are either essentially not applicable to Domino (like Authentication or WebSocket) or are primarily of interest to legacy applications (like Enterprise Beans or XML services).

I'd also like to do some breaking reorganization. Most of these specs are added as individual XPages libraries, but that's gotten really unwieldy. Moreover, I'm not sure what the situation would be where you'd want to include, say, REST but not CDI. I'll probably look into making it a single "make my dev experience good" library to select. That'll take some work, though, since there's a lot of OSGi-dependency stuff to balance there.

Beyond that, it'll be refinements, bug fixes, and improvements for use in OSGi-based apps. I have a bunch of things tracked and that'll give me plenty to do as I have time.

Intercepting Class Loading in OSGi, A Travelogue

Jan 10, 2022, 10:36 AM

Tags: java osgi xpages

Yesterday, I had a problem. I was trying to get MicroProfile Config working inside an NSF to add to the XPages Jakarta EE project, and I was severely blocked by odd behavior.

To describe that, I'll lightly cover what MP Config is. It's a CDI extension that allows you to annotate properties on a bean to indicate that they're intended to come from an available configuration source - often a .properties file in the project, but it's a pluggable system. Your bean will look like this:

package example;

// ...

@ApplicationScoped
public class ConfigExample {
    @Inject
    @ConfigProperty(name="java.version", defaultValue="unknown")
    private String javaVersion;
    
    /* other methods go here */
}

The idea is that you'll then have a properties file or environment variable to fill in the value, allowing you to separate your configuration from the implementation in a consistent way. Here, I'm making use of the fact that a default provider looks up Java system properties, so I could just get it working before investigating adding providers.

Since I'd already added CDI and a CDI-based extension in the form of MVC, I figured this would be easy.

The Problem

The problem I hit, though, was bizarre. CDI would identify the bean above, but would hit this problem:

org.jboss.weld.exceptions.DeploymentException: WELD-001408: Unsatisfied dependencies for type String with qualifiers @Default
  at injection point [UnbackedAnnotatedField] @Inject private example.ConfigExample.javaVersion
  at example.ConfigExample.javaVersion(ConfigExample.java:0)
WELD-001475: The following beans match by type, but none have matching qualifiers:
  - Producer Method [String] with qualifiers [@Any @ConfigProperty] declared as [[UnbackedAnnotatedMethod] @Dependent @Produces @ConfigProperty protected io.smallrye.config.inject.ConfigProducer.produceStringConfigProperty(InjectionPoint)]

The gist of this is that it noticed that the javaVersion property is supposed to be an injected property, but it had no idea what the source should be. It did know about the MicroProfile provider, which handles @ConfigProperty, but it couldn't put two and two together.

I banged my head against this for a while, and eventually determined that the class as loaded from the NSF is stripped of the @ConfigProperty annotation outright. Other annotations, such as @Inject and even custom annotations, would remain, but not @ConfigProperty. I wrestled with OSGi dependency chains for a while, to no avail.

The Enemy

Eventually, I found the core, and it was an old nemesis of mine. It's this method in com.ibm.xsp.util.ClassLoaderUtil:

ClassLoaderUtil.checkProhibitedClassNames
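
The body of that method - paraphrased here from memory rather than quoted, so treat the exact list as illustrative - amounts to a name-prefix blocklist:

// Illustrative paraphrase, not the decompiled source
private static boolean checkProhibitedClassNames(String className) {
    String s = className.toLowerCase();
    return s.startsWith("org.eclipse.")         // the prefix at issue here
        || s.startsWith("org.osgi.")
        || s.startsWith("com.ibm.domino.napi"); // ...plus a few more Domino-internal prefixes in the real thing
}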

This method is called by the ClassLoader used in an NSF to ensure that certain classes, by name prefix, cannot be loaded by code coming from an NSF. The Domino-internal prefixes make a sort of sense: Domino is supposed to be an app container for XPages apps, and ideally it's not a simple process for an app to break out of its container to muck about in the parent environment. Fair enough. The NAPI prefix is presumably there because IBM wanted to protect developers from themselves, even though Notes devs had been making unauthenticated calls to C APIs for freaking ever.

It's the org.eclipse. and org.osgi. prefixes, and specifically the former, that are the source of my trouble. Those prohibitions are presumably meant to isolate XPages apps from the fact that they live in an OSGi world, with the assumption that anything beginning with org.eclipse. refers to something like org.eclipse.core.runtime, the OSGi system bundle.

And this is the issue. MicroProfile is not in any way related to OSGi, but it sure is an Eclipse project. Accordingly, the class name of @ConfigProperty is org.eclipse.microprofile.config.inject.ConfigProperty, and thus cannot be loaded from an NSF.

Attempted Workarounds

So I considered my options.

One was to fork MP Config and rename the packages. That would work, but it would defeat the portability goals of the XPages JEE project, and would also just be a hassle - I've already had to fork a few specs, and each new one adds to the maintenance burden. That remained an option, but it would be a last resort.

My next idea was to wrap around the ModuleClassLoader class used by NSFComponentModule for class loading purposes. This class is blessedly non-final, and so in theory I could look at the instance in a module and swap it out with a replacement. I tinkered with this a bit, but the trouble became the way it's layered, with a DynamicClassLoader private class within - something harder to subclass. In theory, I could reproduce the behavior of it wholesale, but that would be both fragile (if the implementation ever changes) and legally dubious (it's one thing to be API-compatible, and another to reproduce the internal functionality). After some wrangling, I decided to look elsewhere.

The True Workaround

I realized eventually that I don't really care about ModuleClassLoader as such: it does its job fine, and it's only the response that it gets from ClassLoaderUtil that is the problem. If I could change that, I would be set.

I've used the Javassist project here and there for a long time, ever since its inclusion in ODA for one reason or another. It's a handy toolkit, and notably includes the capability to alter a method implementation on the fly. There's my loophole.

The reason this kind of thing can work is related to how Java handles classes and calls between them. For all intents and purposes, you can consider a method call from one bit of Java to another to be a string-based lookup, saying "find a class named X and a method named Y, and then execute it". The "find" part there is much looser than you might think. It's easy to think of class references like C static linking, but they're really not. When code asks for a class, it asks the context ClassLoader, and that object can do basically whatever the heck it wants to find it, as long as it eventually emits a Class that the runtime can deal with.
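
In code terms, that lookup is conceptually no more than this trivial sketch:

public class LookupExample {
    public static Class<?> find(String name) throws ClassNotFoundException {
        // The "find a class named X" step: whatever loader is current gets to
        // decide what bytes back the requested name
        return Thread.currentThread().getContextClassLoader().loadClass(name);
    }
}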

Javassist's manipulation makes use of the fact that classes are generally eventually just a bunch of bytes, and you can do whatever you want with a bunch of bytes. Using Javassist, it's fairly simple to, once you have a handle on the class, alter the method. Truncated, that's:

ClassPool pool = /* build a ClassPool that can load the class */;
CtClass cc = /* get the class from the pool */;
cc.defrost();
CtMethod m = cc.getDeclaredMethod("checkProhibitedClassNames");
m.setBody("{ return false; }");
Class<?> result = cc.toClass();

And this works, as far as it goes: I now have a Class version of ClassLoaderUtil that skips the onerous check.

The trouble now was to get this to be actually used by other classes. Generally, once a ClassLoader loads a class, it's difficult to feed it another version unless it's designed to do so: most ClassLoader implementations, including those used here, are designed to read and emit classes by their own rules, not have new data fed into them.

I tried digging through the Eclipse OSGi ModuleClassLoader (distinct from the NSF ModuleClassLoader) for entrypoints and had some initially-promising work with Eclipse's internal ClassLoaderHook type, but eventually determined that this would require more patching than I'd want, if it was possible at all.

I also considered using Java's instrumentation capabilities to intercept class loading, but that would require setting up a special Java agent in the launch parameters, which would be too onerous.

But then I remembered something I had heard about when looking into getting ServiceLoader to work with OSGi: a concept in the OSGi spec called "weaving".

OSGi Weaving

I had noted that this concept existed, but set it aside in large part due to how esoteric it sounds: the term "weaving" makes it sound like it's a way to interact with the threads of fate or something, which is evocative but not something that seems immediately useful.

What it really is, though, is an OSGi-friendly version of the above: when the OSGi runtime goes to load a class from a bundle, it reads the data but then gives any such listeners an opportunity to manipulate the code before it's actually reified into a class. This is how the ServiceLoader mediator does its thing: it looks for ServiceLoader calls during loading and re-"weaves" them to run through OSGi instead.

This was perfect: it provides exactly the hook I want and it does it in a clean, spec-based way, without having to do weird reflection to reassign object properties or anything.

The Implementation

So I went about writing such an implementation. All the pieces are there on Domino, and the mechanism for registering a WeavingHook is something I'd done before in Open Liberty: it's a type of OSGi service that you can register and manage in an Activator class. It's also the sort of thing that would work well with Declarative Services, but Domino doesn't have a DS handler installed and I figured I didn't need to solve that quite yet.

So I wrote a WeavingHook implementation:

public class UtilWeavingHook implements WeavingHook {
    @Override
    public void weave(WovenClass c) {
        if("com.ibm.xsp.util.ClassLoaderUtil".equals(c.getClassName())) {
            ClassPool pool = new ClassPool();
            pool.appendClassPath(new LoaderClassPath(ClassLoader.getSystemClassLoader()));
            CtClass cc;
            try(InputStream is = new ByteArrayInputStream(c.getBytes())) {
                cc = pool.makeClass(is);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            cc.defrost();
            try {
                CtMethod m = cc.getDeclaredMethod("checkProhibitedClassNames");
                m.setBody("{ return false; }");
                c.setBytes(cc.toBytecode());
            } catch(NotFoundException | CannotCompileException | IOException e) {
                new RuntimeException("Encountered exception when weaving ClassLoaderUtil replacement", e).printStackTrace();
            }
        }
    }
}

This builds on the above Javassist usage to now load the class from the byte array provided by OSGi, transform it, and then write the new version back. Since this happens while OSGi is reading the class to begin with, there's never a time when there's an older, less-permissive version of the class running around, as long as I get my service in early enough.

This service is registered in the Activator without too much fuss:

public class JakartaActivator implements BundleActivator {
    private final List<ServiceRegistration<?>> regs = new ArrayList<>();

    @Override
    public void start(BundleContext context) throws Exception {
        regs.add(context.registerService(WeavingHook.class.getName(), new UtilWeavingHook(), null));
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        regs.forEach(ServiceRegistration::unregister);
        regs.clear();
    }
}

The final bit to get right was the "get the service in early enough" aspect. The main task was making sure that this bundle was activated before any XPages apps loaded, and that was a job for my old friend IServiceFactory, which is the extension point that's intended to add handlers for incoming URLs but has the desirable attribute of being initialized right at the very start of HTTP loading.
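
Such a factory can be a no-op whose only purpose is to force the bundle to activate early. As a sketch, with the adapter API signatures approximated from memory:

import com.ibm.designer.runtime.domino.adapter.HttpService;
import com.ibm.designer.runtime.domino.adapter.IServiceFactory;
import com.ibm.designer.runtime.domino.adapter.LCDEnvironment;

public class EarlyInitFactory implements IServiceFactory {
    @Override
    public HttpService[] getServices(LCDEnvironment env) {
        // Nothing to contribute: merely being instantiated at the start of HTTP
        // loading is enough to trigger the bundle's Activator and register the hook
        return new HttpService[0];
    }
}

This then gets registered via the com.ibm.xsp.adapter.serviceFactory extension point in the bundle's plugin.xml.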

With this in place, I now have a fix automatically applied to that fiendish class on load, and MicroProfile Config (and future MP specs) works like a charm.

Conclusion

This was an arduous one, and I think the FTL victory jingle actually physically played when I got it working. I've hated this restriction for a long time, and I'm glad to finally have a workaround.

It was also enlightening to properly learn about OSGi's weaving capability. As mentioned above, this is what the ServiceLoader bridge does, and I'd tinkered with that at one point, but never got it working. I suspect now that it should be entirely doable to make it work, most likely also involving bringing in an implementation of the Declarative Services OSGi capability. That should be a fun project in its own right.

Moreover, the fact that I now have a system in place to do this weaving on the fly means that I may be able to un-fork some of the specs I had to fork to get working previously, which specifically required altering ServiceLoader calls. Even if I don't get the official service bridge in, perhaps I can use this technique to just alter the parts I need to on the fly, and otherwise use stock implementations from Maven.

But, for now, the way is cleared for further progress, and a bizarre mystery is solved. I call that a good day.

Migrating a Large XPages App to Jakarta EE 9

Jan 6, 2022, 7:44 PM

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?

Last month, I moved my XPages Jakarta EE project to JEE 9, which involved the large hurdle of switching package names from javax.* to jakarta.*. That was all well and good for the project and opened the door to further improvements, but it's one thing to do it in a support library and another to move an actual large project over.

So I set my sights on my workhorse client project, the one with the sprawling OSGi bundle set and complicated XPages project, which I have running regularly on both Domino and Open Liberty. Over the last few days, I did the porting work and came out successful, and I think it ended up being another tale worth telling.

There are two main topics when it comes to this project: why and how.

Why

Now, why are we gonna do this? I mean, the app is running fine as it is, and the immediate goal of the switch is to keep it functionally the same. Why is it worth going through this hassle for an active project?

There are a few reasons.

The first and biggest is that the jakarta.* switch isn't going to go backwards. There is never going to be a feature update for any of the javax.* versions of the JEE specs, and so staying on that version is stagnation. While we can do what we want to do today, that won't hold true tomorrow unless we make the move. Since the move is inevitable, every new line of javax.* code written in the meantime is technical debt, and the sooner we stop creating it, the better.

The second reason is that, while the Jakarta EE spec move from 8 to 9 retained the same functionality, we actually gained technical improvements in the switch.

One technical gain was very immediate: the version of RESTEasy I had put into the XPages JEE project previously was a couple major versions old now, and this let me bump it up to the latest.

Another technical gain had to do with the nightmare that is dealing with OSGi dependencies on Domino. Due to the lack of proper versioning of standard specs in the XPages stack, the increasingly-corrupt classpath, and the bundling of specs with utility libraries, I've always lost a lot of time manually tweaking OSGi imports and dependencies. While this move doesn't eliminate all of that, it removes a lot. Terror packages like javax.mail and javax.activation can now be left behind in favor of jakarta.mail and jakarta.activation without worry of conflicting imports from the XPages libraries and the ndext classpath directory. The move to JEE 9 resulted in a significant reduction in such code and configuration.

And finally, it's going to help bring in new features we haven't introduced yet but will benefit from. For example, I have my eye on the MicroProfile REST client, which makes consuming REST services in a clear and type-safe way a dream. While it's possible that I'd be able to add that in earlier versions, the switch to jakarta.* will remove huge tasks from my plate entirely.

How

So now that I'd convinced myself that it's a good idea, the only remaining problem was actually doing it. Fortunately, I already solved most of it in the XPages JEE project itself. Moreover, the way I'd solved things there allowed me to remove extra dependencies that kind of "double-solved" the problem in this specific app, things like making sure that javax.inject was compatible between that project and the other third-party dependencies we use.

One of the big things that was different between the XPages JEE project on its own and this app was the way this runs across multiple platforms.

Historically, the fact that Liberty was running Servlet 4 and Domino was running Servlet 2.4/2.5 didn't come much into play. The newer versions of the javax.servlet classes were entirely compatible with the old ones: XPages could consume a Servlet 4 HttpServletRequest without issue and would just ignore the new methods added. Now, though, I had to do some shimming in two directions.

Domino, since I don't control the lowest layers, would remain a Servlet 2.5-ish container natively, while for Liberty I would move to a true Servlet 5 container. Since both the XPages markup and app code must remain the same, that meant a double emulation setup:

Diagram of the Domino and Liberty Stacks

In these diagrams, "JEE 9 App" represents the app's use of Jakarta standards other than XPages: CDI, JAX-RS, JSON-B, and so forth.

The first part of this work took place in the XPages Jakarta EE project. There, I created wrapper versions of all pertinent javax.servlet/jakarta.servlet classes going both from "old to new" and "new to old". Most of these classes are just direct delegations, but some parts involve either throwing exceptions for trying to use new features on an old stack, emulating new behavior on top of the old, or quietly not supporting some capabilities. These classes get me most of the way. When I need to move in one direction or the other, I call the appropriate method from ServletUtil and it takes care of the differences. This project also handled a lot of fiddly details to do with things like the JavaMail to Jakarta Mail switch, so I could just bring that in too and not worry about it.

That handled some distinctions. The next big one was to use this to make sure XPages can survive in a Servlet 5 world. The good news here is that it was simpler than I thought it would be. XPages only has a few actual entrypoints - there's a Servlet to handle global resources (the URLs involving /xsp/.ibm* and the like), another to handle actual XPages *.xsp requests, and maybe one or two others I'm not thinking of. As long as you can handle those URLs and send a legacy javax.servlet object to the closed-source code, the stack will take it from there: things like externalContext.getRequest() will return the wrapped object you passed in in the first place, and don't try to do any weird magic to fetch the request object from the container or anything.

Previously, I had Servlets that extended the stock ones like DesignerFacesServlet directly, but this had to change. Fortunately, it's a simple matter of delegation. Instead of subclassing the original ones, now I create an instance internally and pass along incoming requests to that, appropriately dumbing down the Servlet 5 objects to 2.5-level ones. I was able to do this with fewer functional changes than one might think and put it into new versions of the XPages Runtime project.
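
In sketch form, that delegation looks something like the below, where the class name is arbitrary and the ServletUtil.newToOld(...) calls stand in for whatever the actual conversion entry points are named:

public class DelegatingXPagesServlet extends jakarta.servlet.http.HttpServlet {
    // Hold the stock legacy servlet internally instead of subclassing it
    private final javax.servlet.Servlet delegate = new com.ibm.xsp.webapp.DesignerFacesServlet();

    @Override
    protected void service(jakarta.servlet.http.HttpServletRequest req,
            jakarta.servlet.http.HttpServletResponse resp)
            throws jakarta.servlet.ServletException, java.io.IOException {
        try {
            // "Dumb down" the Servlet 5 objects to 2.5-level ones for the closed-source stack
            delegate.service(ServletUtil.newToOld(req), ServletUtil.newToOld(resp));
        } catch (javax.servlet.ServletException e) {
            // Exceptions have to hop namespaces as well
            throw new jakarta.servlet.ServletException(e);
        }
    }
}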

Beyond those big-ticket items, there were a few cases where I had to take care to appropriately handle knowing when my code was going to receive a javax.servlet object or a jakarta.servlet one, but otherwise there wasn't much to do beyond updating dependencies and a find-and-replace on class imports. None of the XSP markup changed, very little of the in-NSF code changed, and the only large in-app code changes I made were to remove workarounds that were no longer needed.

Conclusion

All in all, I'd say this went better than I'd expected. There's naturally still room for trouble (it's not in production yet, for one), but overall I feel that this bore out my intent in making the move in the first place.

It's also an interesting case study in the way my XPages Jakarta EE and Open Liberty Runtime projects conceptually interact. They both approach the same ideal from different directions: write open-standards-based applications that make use of Domino data. With this move to JEE 9, the addition of new specs to the XPages JEE project, and the shedding of (some) old limitations, they're converging all the more. More code can be directly shared between the two app types, and the code that isn't shared unmodified can at least be written all the more similarly. JAX-RS is the in-common way to do REST services; CDI is the in-common way to do managed beans; OpenAPI annotations are an in-common way to document services. These are technologies that have wide support with multiple implementations and, crucially, are open standards. XPages was one step towards a non-proprietary stack, and this is several more.

My 2021 Open-Source Year

Dec 31, 2021, 4:34 PM

For the last few weeks, I had a minor flurry of work in a couple of the open-source projects I maintain, and I figured this would be as good a time as any to give an overview of my active work in these projects and how they relate.

Overview

I had a few minor contributions and picked-up projects through the year, but most of my currently-public work went towards four main projects:

  • Domino Open Liberty Runtime
  • XPages Jakarta EE Support
  • NSF ODP Tooling
  • XPages Runtime

I do find it interesting to consider how these relate. Some aspects are easy: they're all Domino-related for sure, and they all at one time or another have played a significant infrastructural role in my client work. Beyond that, though, they form a nebulous message: though I don't know for sure what to do with all the XSP markup we have, I know it can't be the status quo and I'm fairly confident that Jakarta EE is the best route forward.

Domino Open Liberty Runtime

This project allows you to run instances of Open Liberty as a spawned process from Domino, which in turn means both that you can readily(-ish) access Domino data and also that you can deploy these apps in an NSF-based way to your servers, without having to have particular mastery of Liberty administration as such.

The big-ticket news this year was my addition of a Domino-hosted reverse proxy and arbitrary JVM selection. With these additions, the project ended up being a particularly-compelling way to glom modern apps on to Domino without even necessarily worrying about pointing to a different port. I also added in a standalone proxy in front of both the apps and Domino - which gains you Web Sockets and HTTP/2 - another nice way to get better app toolkits without having to bother an admin.

XPages Jakarta EE Support

This one saw a burst of activity in just this past month. For a while, it had sat receiving only minor tweaks: I use it for EL, CDI, and JAX-RS in my client project, and the changes I made were generally just to add features or fix bugs needed there.

This month saw the big switch from Java/Jakarta EE 8 (javax.* packages) to Jakarta EE 9 (jakarta.* packages). This was a very-interesting prospect: though it on paper just involved switching class names around, it necessitated adding some Servlet 5 shims around Domino's irresponsibly-old Servlet 2.4/2.5 hybrid layer. While this didn't bring full Servlet 5 features, it does mean I'm suddenly much less bound by the strictures of the older version: a lot of Servlet-based software casually depends on at least Servlet 3, even if just for convenience methods (like getting a ServletContext from a ServletRequest).

I also took the opportunity to go back and add some features I've long wanted - JSP and MVC - to the NSF side. These have less immediate call in my client work (which primarily involves additions on the OSGi servlet layer and less in the NSF), but suddenly created a surprisingly-compelling update to in-NSF development. It's stymied by, naturally, a lack of support in Designer, but the idea of writing something that approaches a true modern Jakarta app inside an NSF is intriguing indeed.

NSF ODP Tooling

The NSF ODP Tooling has proven to be my workhorse. The ODP-to-NSF compilation alone has saved me countless hours from the previous laborious task of syncing two dozen NSFs with their ODPs and the fault-prone process of trying to get clean NTF copies of them for each build. Now, the former is done with a single script I can run in the background and the latter happens automatically every single push to our Git repository. Delicious.

It also provides an invaluable part of my normal development process for this client. Alongside the next project, it lets me do my XPages development outside of Designer, meaning I only need to schlep my way back to that IDE to look at legacy elements in context or to troubleshoot something with the Notes or Domino OSGi view of the world.

The work in this project this year has primarily been around edge cases, bug fixes, and scrambling through the rocky shoals of the ever-changing macOS Notes client. It's been a tough time here and there: certain parts of the NSF that I use less frequently have their own edge-case needs (like SSJS sort of existing in two places, and the CD storage being surprisingly difficult to work with). I also had some fun combat with filesystems and charsets, which was fortunately even-more enlightening than it was annoying.

XPages Runtime

The XPages Runtime project admittedly had a slow year, but it's nonetheless a critical component in my CI/CD workflow, and gets periodic fixes for trouble I run into. The good news there is that it generally does what it promises: I run XPages outside of Domino constantly with this thing. Though it still requires more coordination on the app side than I'd like, it's gradually approaching a state where it feels like a peer to other server-side toolkits that one can bring into a WAR file, and that's nice.

It will likely have some work coming up in the near future, though: if I'm to move my client's app over to the jakarta.* namespace, that will require at least some level of cooperation with this project. While I can't change the source of XPages to accept these coordinates itself, it should be doable to do much like what I did with the XPages Jakarta EE Support project and use my shims to translate back and forth between old and new classes. The main difference here will be that the surrounding container will speak the new form natively, but that should be fine.

I expect a certain amount of annoying trouble with things like XPages-internal expectations about JAX-B and JavaMail, and it's certainly possible that such dependencies will end up proving to be debilitating, but I'm optimistic. If I'm successful, it'll be one more way that I'm crafting a whole workflow where modern technologies are the primary target and XPages can remain a component in the lineup.

Miscellaneous Grab Bag

Beyond those big ones, I had a handful of other contributions here and there. I'm sure there were a few others, but I'll close on two that I found pleasing.

The other week, I got a Pull Request merged into the Eclipse Krazo project - while not a huge deal, it does always give me a little thrill when my code goes into a project where I'm not the primary or sole contributor.

I also adopted POI4XPages, which was for more-practical reasons. I've used POI4XPages for a couple clients for a while, but it was certainly showing its age (sitting at 3.x since 2017). Moreover, Notes 11's corruption of its classpath with POI 4.x made working with it annoying beyond just being out-of-date and lacking some breaking changes in the mean time. Since I had moved one of my clients to POI 5.0 a bit ago, I decided to break that code out and adapt it into POI4XPages. Then, of course, along came Log4Shell and I scrambled out three subsequent patch versions just to update log4j. So it goes.

JSP and MVC Support in the XPages JEE Project

Dec 20, 2021, 11:20 AM

Tags: jakartaee java
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?

Over the weekend, I wrapped up the transition to jakarta.* for my XPages JEE Support project and uploaded it to OpenNTF.

With that in the bag, I decided to investigate adding some other things that I had been itching to get working for a while now: JSP and MVC.

JSP? Isn't That, Like, A Billion Years Old?

Okay, first: shut up.

Expanding on that point, it is indeed pretty old - arriving in 1999 - and its early form was pretty bad. It was designed as an answer to things like PHP and ASP and bore all those scars: it used actual Java syntax on the page to control output, looping, conditionals, and the like. It even had special directives to import Java classes for the page! All that stuff is still in there, too, which isn't great.

However, JSP used judiciously - focusing on JSTL tags for control/looping and EL references to CDI beans for data access - is a splendid little thing, and it has the advantage that it remains part of the JEE spec.

Domino flirted with JSP for a long time. It's what Garnet was all about and was part of how OpenNTF got off the ground. IBM did eventually ship the custom tags, and they ship with Domino to this day, sitting in the data/domino/java directory, gathering dust. Domino also inherited JSP from WebSphere as part of XPages... kind of. It has hooks for using JSP files in Expeditor-container webapps, but the implementation is conspicuously missing - present only in Notes, presumably for some sort of Social nonsense reason.

For better or for worse, none of that matters now anyway: it's all crusty and old and, critically, uses javax.*. I had to go a different route.

JSP Implementation

From what I gather, there's basically only one real open-source JSP implementation: Jasper, which is a part of Tomcat. Basically everyone just uses that, and that works well enough. There are various re-bundlings of it to remove the Tomcat dependencies, and I went with the GlassFish one, since it was pretty clean.

Diving into it, there were a few things that were potential and actual problems.

First, JSP files aren't evaluated directly. Instead, they're compiled into Servlet class implementations, either on the fly or ahead of time. This process is basically the same as how XPages work: the JSP is translated into a Java file, which is then compiled into a class, which is then reused by the runtime for subsequent requests. Jasper has a dependency on Eclipse JDT, which worried me: when I looked into this in the past, I found that JDT (at least how it was used for JSP) makes a lot of assumptions about working with the actual filesystem. I lucked out here, though: Jasper actually uses the JavaCompiler API, which is more flexible. The JDT dependency seems like either a vestige of an older version or a fallback option.

However, despite the fact that JavaCompiler can work purely in memory, Jasper does do a lot of filesystem-bound work when it comes to loading tag libraries, such as JSTL. I ended up having to deploy a bunch of stuff to the filesystem. Ideally, I'll find a better way around this.

Hooking It Up To Domino

Having a JSP interpreter is one thing, but having it respond to URLs like "http://example.com/foo.nsf/bar.jsp" is another, especially if that should also participate in the XPages class space of the NSF.

I originally considered an HttpService implementation that would accept incoming *.jsp URLs. This could work, but it would be less than ideal: the HttpService, while working in the XPages OSGi layer, wouldn't know about the internal layout of the NSF. I'd have to either reinvent it or wrangle my way over to the active NSFService (the one that runs XPages), find or load the NSF's module, and root around in there. Possible, but not ideal.

Fortunately, I lucked out tremendously: the NSFService class has an addHandledExtensions static method that I can just call to tell it that incoming ".jsp" requests should go to the XPages runtime. This looks like it was added for more Social-nonsense reasons, but I'm happy it's there regardless. Better still, when the runtime finds a URL it was told to handle, it queries IServletFactory implementations like those you can use in an NSF for custom servlets. I already had one in place for JAX-RS, so I made another one (refactored since that commit) for JSP.
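
For a sense of the shape of such a factory, here's a minimal sketch. The adapter API signatures are from memory, and the createJspServlet() part - where Jasper would actually get wired up - is elided and hypothetical:

import com.ibm.designer.runtime.domino.adapter.ComponentModule;
import com.ibm.designer.runtime.domino.adapter.IServletFactory;
import com.ibm.designer.runtime.domino.adapter.ServletMatch;

public class JspServletFactory implements IServletFactory {
    private ComponentModule module;

    @Override
    public void init(ComponentModule module) {
        this.module = module;
    }

    @Override
    public ServletMatch getServletMatch(String contextPath, String path) throws javax.servlet.ServletException {
        int jspIndex = path.indexOf(".jsp");
        if (jspIndex > -1) {
            // e.g. "/bar.jsp/extra" -> servletPath="/bar.jsp", pathInfo="/extra"
            String servletPath = path.substring(0, jspIndex + 4);
            String pathInfo = path.substring(jspIndex + 4);
            return new ServletMatch(createJspServlet(module), servletPath, pathInfo);
        }
        return null;
    }

    private javax.servlet.Servlet createJspServlet(ComponentModule module) throws javax.servlet.ServletException {
        // Hand back a servlet that routes the request through Jasper; elided here
        throw new UnsupportedOperationException();
    }
}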

Once that was in place (plus some other fiddly details), I got to what I wanted: writing JSPs inside an NSF and having them share the XPages class space:

Screenshot of Designer and a browser showing an in-NSF JSP

Next Up: MVC

Once I had JSP in place (and after some troublesome fiddling with JSF), I decided to take a swing at adding my beloved MVC to the stack.

This had its own complications, this time for the inverse problem as JSP. While Jasper is a creature of the early 2000s and uses older, less-flexible Java APIs that I had to write around, MVC is the opposite. It's a pure child of the modern, CDI-based world and thus does everything through CDI and ServiceLoaders. However, though I've had CDI support in the project for a long time, actually tying together anything to do with CDI or ServiceLoaders in OSGi is eternally difficult, especially on Domino.

Service Loading

I had to wrangle for this for a while, but I eventually came up with a functional-but-odd workaround: I made use of my own custom ServiceParticipant extension capability that lets me have an object perform pre/post behavior around each JAX-RS request in order to futz with the ClassLoader. I had trouble where the NSF ClassLoader didn't find classes from the MVC implementation, though it should have, so I ended up overriding the ClassLoader to first look explicitly there. It's not pretty, but it works and at least it doesn't require filesystem stuff.

Servlets and Request Dispatchers

Another aspect of being a more-modern child than Jasper is that Krazo makes ready use of Servlet capabilities that have been there for a while but which don't exist on Domino.

For example, Krazo uses a ServletContainerInitializer instance to do pre-research in the app to find classes that should get MVC behavior. Without this scan, MVC won't be applied. This is a Servlet 3.0 feature dating to 2009 and Domino doesn't support it - or any kind of annotation-based classpath scanning, for that matter.

Fortunately, I didn't really need to fully support this concept - I really just needed to make sure this ran whenever the JAX-RS support was being loaded for an NSF. So I made it possible to contribute these via an extension point and added my own scanning implementation to gather the applicable types. Essentially, a backport of this feature to apply in an NSF. With that in place, I was able to register the initializer and have it do its work.

My next hurdle was to do with the way Krazo delegates to JSPs. Specifically, it queries the ServletContext (essentially, the app container) for Servlet registrations that can handle the desired extensions (".jsp" and ".jspx" here) and routes to that using a RequestDispatcher. Well, Domino supports none of this. Trying to get a RequestDispatcher is hard-coded to throw an exception saying "Domino doesn't support this" and the bit about getting ServletRegistrations was new in 3.0. Originally, I stubbed these out, but I decided to give a swing at backporting this as well.

While an NSF doesn't have "Servlet registrations" as such, it does have a list of the aforementioned IServletFactory instances, so I decided to write my own. I wrote a getRequestDispatcher implementation that queries the current module's Servlet factories for a match and, when one is found, returns a basic implementation. Then, I wrote a custom subtype of IServletFactory to provide additional information and made use of that to emulate the Servlet 3+ behavior, at least well enough to let Krazo do what it needs.

Seeing It Together

Once I figured out all these hurdles, I got to what I wanted: I can make a JAX-RS service in an NSF that acts as an MVC controller:

Screenshot of Designer and a terminal showing an MVC controller in an NSF

Neat! There are still some rough edges to clean, but it's great to see in action.

Conclusion and Next Steps

So why is this good? Well, there's a certain amount of box-checking going on: the more JEE specs I can get going, the better.

But beyond that, this is helping to crystallize some of my thinking about what Domino (web) developers are even supposed to freaking do nowadays. This remains an extremely-vexing problem, but I know the answer isn't XPages as it exists now. Maybe the answer is to move XPages to a better container or maybe it's to add a better container to Domino (or both of those, I guess). This is another option, one that preserves the "just fire up Designer and edit some code" niceties of the XPages experience while gaining better, more modern capabilities. I could see writing an app with this, doing all my work in CDI beans and using JSP as the front end - pure open-source solutions with active developers - all inside the NSF. Is it the real best answer? I don't know. Maybe. It's something, though, and specifically something worth trying.

Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue

Dec 14, 2021, 4:41 PM

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?

I think it's been a little while since I talked about the XPages Jakarta EE Support project of mine. The goal of that is sort of the inverse of the XPages Runtime project: rather than bringing XPages to a proper modern app server, the JEE Support project brings a handful of current Jakarta EE specs to XPages. It started out a few years ago as a sort of proof-of-concept, but I've since been using it for client work to do things like use newer Jakarta REST (née JAX-RS), CDI, and newer EL in XPages and OSGi bundles.

The Specification Move

Originally, I targeted a set of specifications from Java/Jakarta EE 8. Some of these were new to Domino outright, while some (such as JAX-RS) existed in the XPages stack already but in very old forms. I implemented those and for a good while just used the project as a source of parts for client work, tweaking it here and there as needed.

However, the long-prophesied package-name switch from javax.* to jakarta.* came to fruition in Jakarta EE 9, released a bit over a year ago. In the intervening year, most implementations of the specs made the switch, and the versions I was using started to show their age (for example, I was using RESTEasy 3, which was already old when I adopted it, and it's going to 6 now). Beyond just the philosophical sadness of my project being behind, I started to grow specific needs to upgrade components: we switched to JSON-B a while ago, but then some new bug fixes in Yasson were coming only to post-jakarta.* builds.

The Initial Work

I first gave a shot to this in August, initially planning to move only JSON-P and JSON-B over to the new namespace. However, I quickly hit the limits of that, since a lot of these specs are interdependent: JAX-RS uses JSON-P and JSON-B to emit JSON content, Yasson has some ties to CDI, and so forth. I realized that it was going to have to be all-or-nothing.

So I rolled up my sleeves and assessed the task ahead of me. At a basic level, there was the job of updating my dependencies, which immediately had some good aspects and bad aspects:

  • Previously, I was using a hodgepodge of spec packages like the JBoss bundling of JAX-RS in order to get something that would work and be license-friendly. Now that it was all over at Eclipse, I could switch to the nice, clean official versions and have no license worries.
  • I also used to have all sorts of OSGi rule overrides to account for Domino-specific oddities like ancient versions of various specs being supplied by the default classpath or other, conflicting bundles, all with no versioning. Once I was looking for e.g. jakarta.annotation instead of javax.annotation, I was no longer bound to that particular nightmare.
  • Not all of my dependencies were ready. When I first started, RESTEasy (my JAX-RS provider of choice) had not yet uploaded a JEE-9-compatible version. My main choices were to try using Eclipse Transformer, which would add a whole new layer to the task, or to switch to another provider.

Then there's the elephant in the room: the freaking Servlet API, which much of this depends on. Since the Servlet API is the job of the web container, I can't realistically upgrade it. Fortunately, that's only half true: I can't give it new capabilities (like Web Sockets), but I can wrap the old stuff with the new. And, like the other specs, the switch of the package name was a tremendous blessing, allowing me to deploy the official Servlet 5 API unchanged. Then, I did the tedious work of writing a slew of adapter classes, half wrapping a javax.servlet component and pretending it's jakarta.servlet and half going the other direction. Since the methods are either direct analogs, optional features, or can be emulated, this was actually much easier than I thought it would be. And there: Servlet 5 on Domino! Kind of!
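
As one small example of the pattern, here's roughly what one such wrapper might look like, going "old to new" for RequestDispatcher. The class name is illustrative, and ServletUtil.newToOld(...) stands in for the companion wrappers heading the other direction:

public class OldToNewRequestDispatcher implements jakarta.servlet.RequestDispatcher {
    private final javax.servlet.RequestDispatcher delegate;

    public OldToNewRequestDispatcher(javax.servlet.RequestDispatcher delegate) {
        this.delegate = delegate;
    }

    @Override
    public void forward(jakarta.servlet.ServletRequest req, jakarta.servlet.ServletResponse resp)
            throws jakarta.servlet.ServletException, java.io.IOException {
        try {
            // Convert the jakarta.* objects back to javax.* for the legacy delegate
            delegate.forward(ServletUtil.newToOld(req), ServletUtil.newToOld(resp));
        } catch (javax.servlet.ServletException e) {
            // Exceptions have to hop namespaces too
            throw new jakarta.servlet.ServletException(e);
        }
    }

    @Override
    public void include(jakarta.servlet.ServletRequest req, jakarta.servlet.ServletResponse resp)
            throws jakarta.servlet.ServletException, java.io.IOException {
        try {
            delegate.include(ServletUtil.newToOld(req), ServletUtil.newToOld(resp));
        } catch (javax.servlet.ServletException e) {
            throw new jakarta.servlet.ServletException(e);
        }
    }
}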

The Showstopper

However, I soon hit what seemed to be a show-stopper: a LinkageError problem when using CDI that didn't show up previously. My search for the topic found only one hit: an issue in Open Liberty referencing almost exactly the same problem. My heart sank when I read that their fix was to upgrade the Equinox runtime - something that's outside my powers on Domino (probably).

So, disheartened, I set it aside for a couple months. I figured there was a small chance that Weld (the CDI implementation at the heart of the trouble) would put out an update that fixed it - after all, an older version worked.

Resuming Work

After setting it aside, it kept eating away at the back of my mind, and two things kept pushing me to go back to it:

  • I'll need to do it eventually. I (and my client projects) can't just be stuck at the old style forever.
  • I really didn't want to admit defeat and switch back to Gson for JSON processing.

So I went back to it. My initial hope - that a new version of Weld would magically fix the problem - proved to not have come to fruition. Still, though, I wasn't sure that it was the exact same problem Liberty encountered. For one, my use of CDI studiously avoids actually telling it about OSGi, since I've had little luck making use of that with Domino's OSGi stack. That was enough cause to make me think I could work around it.

And work around it I did! The trouble turned out to be, unsurprisingly, a bit esoteric, but boiled down to the runtime re-registering proxy classes for the same core components. My guess is that, somewhere along the line, Weld changed some sort of internal cache in a way that would break when using a bunch of ephemeral per-NSF containers as I do. I implemented my own (since it's an intended extension point) and added a bit of a cache, and I was back to the races.

As a convenient blessing, RESTEasy released 6.0.0.Beta1 just days before I got back to it, a major release targeted at JEE 9. That meant that I could save a ton of work by not having to re-work everything for another JAX-RS implementation. I had been looking into Jersey, which I'm sure would have done the job, but it's fiddly work trying to put all these pieces together on Domino, and I was all the happier to not have to re-do it all.

JavaMail

But then I hit a new problem: the javax.mail API, now jakarta.mail. The first part of this was easy enough: bring in the new spec bundle and everything will point to it. Great! Then I hit an immediate problem, one I had been dreading dealing with: though the spec changed package names, the implementation didn't. That brought me face-to-face again with an old nemesis of mine, sitting there in Domino's classpath, corrupting it:

A screenshot of Domino's ndext directory

The way the Mail API works is that there's a file, called "mailcap", that lists implementations for common data types, like:

text/plain;;		x-java-content-handler=com.sun.mail.handlers.text_plain
text/html;;		x-java-content-handler=com.sun.mail.handlers.text_html
text/xml;;		x-java-content-handler=com.sun.mail.handlers.text_xml
multipart/*;;		x-java-content-handler=com.sun.mail.handlers.multipart_mixed; x-java-fallback-entry=true

So, while all the entrypoint classes are jakarta.mail.* now, the implementations remain com.sun.mail.*, all with the same class names. And, since this little jerk of a JAR is sitting in the system classpath, it has a way of showing up all the time, complaining that com.sun.mail.handlers.text_plain is incompatible with jakarta.activation.DataContentHandler.

This is extremely fiddly to deal with, potentially involving writing a special ClassLoader implementation that blocks calls down to the lower-level JAR. That might work, but I'm not sure it could be done in a way that would be practical for normal use in an app.

And so, with a heavy heart, I forked the thing and added an "org.openntf" in front of all the package names. And that... works! It works just fine. It means that I'm on the hook for manually integrating any upstream changes, but at least it works without having to worry about any conflicts.

That wasn't the end of my trouble with this spec, though. The spec package itself, in jakarta.mail.Session, uses ServiceLoader to look for services, and it uses the form that looks them up with the current thread's ClassLoader. Because I'm working in OSGi, that ClassLoader - the XPage app's loader - won't know about the implementation classes directly, and this call fails. And, while there's a whole sub-spec in OSGi for dealing with this, I've never had success actually getting it working on Domino.

So I forked that freaking thing too and modified the calls to use its own ClassLoader, which could find the implementation by way of it being a fragment bundle attached to it.
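
The distinction comes down to which ClassLoader the lookup consults. Roughly, using jakarta.mail.Provider as a stand-in for the services being looked up:

import java.util.ServiceLoader;

import jakarta.mail.Provider;
import jakarta.mail.Session;

public class ServiceLookupExample {
  public static void main(String[] args) {
    // The one-argument form consults the current thread's context
    // ClassLoader, which in an XPages app can't see the implementation:
    ServiceLoader<Provider> contextLookup = ServiceLoader.load(Provider.class);

    // Using the API class's own loader finds implementations attached to
    // it - such as an OSGi fragment bundle:
    ServiceLoader<Provider> apiLookup =
      ServiceLoader.load(Provider.class, Session.class.getClassLoader());

    apiLookup.forEach(p -> System.out.println(p.getClassName()));
  }
}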

And, with that, finally, I had Jakarta Mail properly hooked up and working without having to jump through too many hoops. I'd still prefer to not have forked the source, but it was the best of a bad lot of choices.

The Final Tally

That brings the specs updated/added in this project to:

  • Expression Language 4.0
  • Contexts and Dependency Injection 3.0
    • Annotations 2.0
    • Interceptors 2.0
    • Dependency Injection 2.0
  • RESTful Web Services (JAX-RS) 3.0
  • Bean Validation 3.0
  • JSON Processing 2.0
  • JSON Binding 2.0
  • XML Binding 3.0
  • Mail 2.1
    • Activation 2.1

Not too shabby, if I say so myself. Technically, Servlet 5.0 is in there too, but it doesn't actually bring any newer-than-2.4 powers to the Servlet container, so it's really just an infrastructural detail.

Now I'll just have the work of updating my client project and finally getting to use whatever that Yasson bug fix was that prompted this in the first place.

Java's Shakier Old APIs

Dec 10, 2021, 11:24 AM

Tags: java

In my last post, I sang the praises of InputStream and OutputStream: two classes from Java 1 that, while not perfect, remain tremendously useful and used everywhere.

Then, a tweet by John Curtis got me thinking about the opposite cases: APIs from the early Java days that are still with us, are still used relatively frequently, but which are best avoided or used very sparingly.

There are a handful of APIs from the early days that may or may not still exist, but which aren't regularly encountered in most of our work: the Applet API, for example, was only recently actually removed, and it was clear for a long time that it wasn't something to use. Some other APIs are more insidious, though. They're right there alongside newer counterparts, and they're not marked as @Deprecated, so you just have to kind of magically know why you shouldn't use them.

Old Collections

One of these troublesome holdovers is a "freebie" for Domino developers: java.util.Vector. This is paired with other "first revision" collection classes like Hashtable, classes that predate the Collections Framework in 1.2 and which were retrofitted into it.

These classes aren't incorrect as such: they do what they're supposed to do and function as working implementations of List and Map. The trouble comes in that they're sub-optimal compared to other options. In particular, they're very-heavily synchronized in a way that hurts performance in the normal cases and isn't even really ideal in the complex multi-threaded case.
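
In code that doesn't hand the object to an API demanding the old type, the modern equivalents are drop-in replacements. A small illustrative comparison:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Vector;

public class CollectionsExample {
  public static void main(String[] args) {
    // Legacy: every call synchronizes on the Vector, needed or not
    List<String> legacy = new Vector<>();

    // Modern default: unsynchronized, faster in the common case
    List<String> modern = new ArrayList<>();

    // When you genuinely want coarse locking, say so explicitly
    List<String> synced = Collections.synchronizedList(new ArrayList<>());

    legacy.add("foo");
    modern.add("foo");
    synced.add("foo");
  }
}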

Unfortunately, since these classes aren't deprecated, an IDE would only warn you about it if it's using some stylistic validation above normal compilation. Such classes are identified best by looking for a warning paragraph like this at the bottom of their Javadoc:

Javadoc 'old class' warning for Hashtable

java.util.Date

The java.util.Date class has a simple concept: represent a point in time. However, it's a neverending font of limitations and caveats:

  • It's essentially a wrapper for a Unix timestamp with millisecond precision, and it doesn't get more precise than that
  • It's not immutable even though it'd make sense to be. Effective Java includes repeated examples of why this is bad
  • Though it's called "Date", it's always a single timestamp, and can't represent a day in the abstract
  • In Java 1, it also was responsible for parsing date strings, and this functionality remains (though at least deprecated)
  • As mentioned in the prompting tweet, the DateFormat classes that go with this are not thread-safe, even though one could reasonably assume they would be based on their job
  • There's no concept of time zone, though the string representation would lead one to think there might be
  • The related Calendar class is a little more structured, but in a weird way and having a lot of the same limitations

Nonetheless, Date is the obvious go-to for date/time-related operations due to its age and alluring name. And, in fact, it wasn't until Java 8 that there was a better first-party option. That's when Java basically adopted Joda Time outright and brought it in as the java.time package. This system has what's required: the notion of dates and times as separate entities, time zones both as named entities (like "America/New_York") and as plain offsets (like UTC-5:00), full immutability and thread safety, and tons else.

Unfortunately, it will take a long time for old habits to die and longer for older code to fade away, so we're stuck with Date for a while, even if only to call #toInstant on it right away.
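
The pattern I mean there is converting at the boundary and staying in java.time from then on - for example:

import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Date;

public class DateMigrationExample {
  public static void main(String[] args) {
    Date legacy = new Date(); // e.g. handed back by an old API

    // Convert once at the boundary...
    Instant instant = legacy.toInstant();

    // ...then work with real concepts like zones and abstract dates
    ZonedDateTime eastern = instant.atZone(ZoneId.of("America/New_York"));
    System.out.println(eastern.toLocalDate());
  }
}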

java.io.File

The java.io.File class is kind of similar to Date: it was created in Java 1 as a basic way to work with files on the filesystem. It still does that, and (as far as I know) it's not as outright bad as the above, but it's limited and non-optimal.

In Java 7, the NIO Path API was added, which replaces File in a more-generic and -adaptable way. Whereas File refers specifically to the filesystem, the Path API is adaptable to whatever you'd like while sharing the same semantics. It can also participate in the NIO ecosystem properly.

Much like how Date has a #toInstant method, File has a #toPath method to work with the transition. I make a habit of doing this almost all the time when I'm working with existing code that still uses File. And there is... a lot of this code. Even APIs that can take Path arguments will potentially turn them into Files internally to keep working with their older implementation.
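
That habit is a one-liner at the boundary - for example:

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileMigrationExample {
  public static void main(String[] args) throws IOException {
    File legacy = new File("/foo/bar/baz.txt"); // e.g. from an old API

    // Convert to a Path and use the NIO APIs from there on
    Path path = legacy.toPath();
    for(String line : Files.readAllLines(path)) {
      System.out.println(line);
    }
  }
}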

There are also a bunch of related APIs where the replacements exist but aren't quite as straightforward. ZipFile is a perfect example of this: it (and its child class JarFile) has constructors that take either a File or a String representing a file path, and that's alarming. However, the ZIP File System Provider that works with Paths is neat, but it's not as clear of a replacement for ZipFile as Path is for File. That's actually one of the reasons I use ZipInputStream even in a case where ZipFile would also work.

Conclusion

I'm sure there are other similar traps around, but those are the main ones I can think of off the top of my head. It's a bit of a shame that Sun/Oracle have been so historically reticent to mark classes wholesale as deprecated. While IDEs and toolchains have gotten better at providing "stylistic" recommendations like this, it's been slow going, and it's not universal. The best thing you can do for now is to just know about the newer alternatives and use them enough that the old classes immediately read as a "code smell" when you come across them.

Generating Archive Files On The Fly In Java

Dec 9, 2021, 10:30 AM

Tags: java liberty

When working on version 3.0 of the Domino Open Liberty Runtime, I had occasion to do something I've done in other situations, but it occurred to me that it'd make a good post on its own. Specifically, part of one of the new features involved creating archive data on the fly, purely in-memory, and that's something that comes in handy quite a bit.

Background: The Task

The task at hand in that project involved the way the runtime will deploy custom extension features for Liberty when creating the server. There are a few of these, all centered around adding integration with Domino in one way or another. For the previously-existing Liberty features, this was done in three parts:

  • The actual Liberty extension code, which is a Java project that produces a Liberty-compatible OSGi bundle.
  • A "subsystem" module, which is a code-less Maven project that uses esa-maven-plugin to embed the above bundle and generate a "SUBSYSTEM.MF" file to describe it. This ESA/subsystem bit is a mechanism for distributing packaged features from the OSGi spec.
  • A "deployment" module, which is a small Java project that provides an extension for the Domino-side runtime to house and deploy the above ESA file.

For 3.0, I wanted to make a feature that would provide Notes.jar and the NAPI to applications. Since those files are proprietary and non-distributable, I couldn't include them in the actual runtime distribution and would instead have to look them up from Domino's environment at runtime. Additionally, since all I wanted to do was provide the existing API and not add any new code, there was no particular need to make a code project like the first one above.

More Background: Java Streams

The way these extensions are registered to Domino is by classes that provide some metadata about the feature and then a method called getEsaData() that returns an InputStream. Though InputStreams aren't the only way to represent arbitrary binary blobs like this, they're used everywhere by virtue of them arriving with Java 1, and they're extremely adaptable.

Basically, the idea of an InputStream is that it's just a mechanism to read a sequence of bytes from somewhere. In Domino terms, they're like NotesStream, but good.

Their utility comes from their simplicity and adaptability. Because the abstract class only deals with reading bytes and a few operations for skipping around, they can be used for all sorts of things. The prototypical use is for reading a file. For example:

Path someFile = Paths.get("/foo/bar/baz.txt");
try(InputStream is = Files.newInputStream(someFile)) {
  // read file data from the stream
}

They're not limited to that, though: the JDK comes with all sorts of InputStream variants like ByteArrayInputStream, which lets you read from a byte[] in memory.

In addition to being arbitrary as to where the bytes are coming from, streams are also very composable. Many types of streams either must or may wrap an existing stream to alter it in some way. One of the more-common cases where you'd do this is when reading ZIP file data. Taking something similar to above:

Path someZipFile = Paths.get("/foo/bar/baz.zip");
try(
  InputStream is = Files.newInputStream(someZipFile);
  ZipInputStream zis = new ZipInputStream(is, StandardCharsets.UTF_8)
) {
  ZipEntry entry = zis.getNextEntry();
  // work with the ZIP entries
}

The thing to note here is that, while this happens to be coming from a ZIP file on disk, it doesn't have to be: that first is could just as easily be a stream coming from HttpURLConnection or a ByteArrayInputStream.

Along with InputStream, Java also has OutputStream. Luckily, OutputStream is similarly simple in design, and its uses are a direct mirror of everything above: there exist ByteArrayOutputStream, ZipOutputStream, and all sorts of others.

Putting It Together

Back to the original goal, my task was to create a class that would provide an InputStream containing ESA data - that is to say, a ZIP file - to the runtime, which could then deploy it as a Liberty feature. The previous extensions did this by embedding the ESA in their JAR and then returning an InputStream to that. Now, though, I wanted to do it all dynamically.

Now, I talked a big game above about how streams didn't have to have anything to do with files, and it could all be done in memory. That's still all true, but technically here I ended up using files for caching purposes. The above is still good to know, though!

So anyway, my goal was to deliver an InputStream to the runtime that represented an ESA that looks like this:

Contents of a generated ESA file

Of those entries, "corba.jar" is the CORBA API from Maven Central to make Notes.jar work on Java 9+, while "Notes.jar" comes from jvm/lib/ext and "lwpd.commons.jar" and "lwpd.domino.napi.jar" come from the OSGi framework in the running Domino server. The remaining entries - the two "MF" files and the embedded JAR - are composed on the fly.

The starting point here is that I identify a cache location within my working directory based on the current Domino build number and assign that path to a variable named out. Then, I open it up as a ZIP to fill with contents, like above:

try(
  OutputStream os = Files.newOutputStream(out, StandardOpenOption.CREATE);
  ZipOutputStream zos = new ZipOutputStream(os, StandardCharsets.UTF_8)
) {
  // work happens here
}

SUBSYSTEM.MF

The next part is to build the "SUBSYSTEM.MF" file. As implied by the extension, this file has the same syntax as "MANIFEST.MF" files, and so I can use the java.util.jar.Manifest class to handle encoding and formatting. I start out by loading a template from the current bundle's resources:

Manifest subsystem;
try(InputStream is = getClass().getResourceAsStream("/subsystem-template.mf")) { //$NON-NLS-1$
  subsystem = new Manifest(is);
}

There, I'm using the constructor from Manifest that reads from an existing stream. Often, that would be reading a "MANIFEST.MF" from an existing JAR, but it'll work with any stream.

Then, I fill it in with some details, with lines like:

Attributes attrs = subsystem.getMainAttributes();
attrs.putValue("Subsystem-Name", getShortName()); //$NON-NLS-1$
String featureName = getShortName() + "-" + getFeatureVersion(); //$NON-NLS-1$
attrs.putValue("IBM-ShortName", featureName); //$NON-NLS-1$
// etc.

Finally, I create an entry in the ZipOutputStream and write the contents. The way ZipOutputStream works is that anything written to the stream goes into whatever the most-recently-added entry is.

zos.putNextEntry(new ZipEntry("OSGI-INF/SUBSYSTEM.MF")); //$NON-NLS-1$
subsystem.write(zos);

Embedded Bundle

Alright, so far, so good. Up until now, this is the "normal" case for working with ZIP files, where you make a new entry and pour in some text data. What's neat, though, is that the encapsulation capabilities of these streams can be stacked, which is what comes up next.

Specifically, I wanted to put a ZIP file (the .jar) within this surrounding ZIP (the .esa). The way this is done is by just composing the same tools we've been working with again. Here, esa is what zos was above: the outermost package ZIP contents. I just renamed it in this method for clarity inside the code itself.

// This will be a shell bundle that in turn embeds the API JARs
esa.putNextEntry(new ZipEntry(BUNDLE_NAME + "_" + dominoVersion + ".0.0.jar")); //$NON-NLS-1$ //$NON-NLS-2$
		
// Build the embedded JAR contents
try(ZipOutputStream zos = new ZipOutputStream(esa, StandardCharsets.UTF_8)) {
  Manifest manifest;
  try(InputStream is = getClass().getResourceAsStream("/manifest-template.mf")) { //$NON-NLS-1$
    manifest = new Manifest(is);
  }
  
  // Finish manifest
  // More work here
}

So there, I'm doing basically the same thing as I did originally to make a ZipOutputStream. Since a ZipOutputStream really doesn't care what the stream it's writing to is, it works just as well when writing to another ZIP stream as when writing to a file - the cascading streams handle their own encoding and it works out in the end.

Once I write the manifest, I can make use of the Files utility class to embed each of the JARs from the filesystem:

for(Path jar : embeds) {
  zos.putNextEntry(new ZipEntry(jar.getFileName().toString()));
  Files.copy(jar, zos);
}

Finally, I download the CORBA JAR on the fly, so for that one I use a utility function to download from the remote URL:

OpenLibertyUtil.download(new URL(URL_CORBA), is -> {
  zos.putNextEntry(new ZipEntry("corba.jar")); //$NON-NLS-1$
  IOUtils.copy(is, zos);
  return null;
});

Here, I use IOUtils from Apache Commons IO because it's not copying from a filesystem path, but the idea is basically the same, and exactly the same as far as the destination ZIP is concerned.

The Final Result

Once this is all written to the cached file on the filesystem, the final result is just to return a stream from it:

return Files.newInputStream(out);

Since the job of this extension class is only to return an InputStream, the consuming code doesn't care that the extension did all this work, as opposed to the other ones that just return a stream of an embedded resource: everything else is the same.

So, all in all, this isn't a groundbreaking new technique, but that's the point: the way these lower-level JDK components work, you get a tremendous amount of flexibility from just a few common parts.

New Adventures in Administration: Docker Compose and One-Touch Setup

Dec 4, 2021, 2:23 PM

Tags: admin docker

As I do from time to time, this weekend I dipped a bit into the more server-admin-focused side of Domino development. This time, I had set out to improve the deployment experience for one of my client's apps. This is the sprawling multi-NSF-plus-OSGi one, and I've long been blessed to not have to worry about actually deploying anything. However, I knew in the back of my head the whole time that it must be fairly time-consuming between installing Domino, getting all the Java code in place, deploying the DBs, and configuring all the documents that associate them.

I had also had a chance this past week to give Docker Compose a swing. Though I'd certainly known of it for a good while, I hadn't actually used it for anything - all my Docker scripting involved really fiddly operations that I ended up using Bash scripts to launch a single container for anyway, so Compose didn't bring so much to the table. However, using it to tie together the process of launching a Postgres server with pre-populated user info and schema scripts whetted my appetite.
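
For reference, that kind of Compose file is only a handful of lines. The values here are illustrative, though the docker-entrypoint-initdb.d convention is stock behavior for the official postgres image:

services:
  db:
    image: postgres:14
    environment:
      POSTGRES_USER: exampleuser
      POSTGRES_PASSWORD: examplepass
    volumes:
      # SQL and shell scripts in this directory run automatically on first launch
      - ./schema:/docker-entrypoint-initdb.d
    ports:
      - "5432:5432"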

So today I set out to tinker with the Domino side of things.

Deploying To The Domino Data Directory

Some parts of this were the same as what I've done before: I wanted to deploy some JARs to jvm/lib/ext for legacy purposes and then drop ".java.policy" into the notes user's home directory. That was accomplished easily enough with some COPY operations in the Dockerfile:

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy

What wouldn't be accomplished so easily, though, would be getting files into the data directory: the app's NTFs and the OSGi plugins. This is because of the way the Domino Docker image works, where it deploys the contents of a ZIP to /local/notesdata on launch, in order to let you work properly with mounted volumes. Because of this, I couldn't just copy the files there in the Dockerfile, since it would conflict with the volume mount; however, I still wanted to do this in an automated way.

This was my impetus to switch away from the official Docker images on Flexnet and over to the "community-ish" Domino-on-Docker build script maintained at https://github.com/IBM/domino-docker. This script is generally more feature-rich than the official one, and one feature in particular caught my eye: the ability to add your own ZIP file (or URL, I believe) to deploy to the data directory at first launch.

So I downloaded the repo, built the image, bundled the OSGi plugins and NTFs into a ZIP, and altered my Dockerfile:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes data.zip /tmp/

Then, I set the environment variable in my "docker-compose.yaml" file: CustomNotesdataZip=/tmp/data.zip. That worked like a charm.

One-Touch Setup

Next up, I wanted to automate the initial server setup. I knew that Domino had been gaining some automated setup capabilities recently, and that they really came of age in V12. What I hadn't appreciated until today is how much capability this system has. I'd figured it would let you configure the server either as the first server in a new domain or as an additional server, and to create an admin user, but I hadn't noticed that it also has the ability to declaratively create and modify databases and documents. Looking over the sample file that Daniel Nashed put up, I realized that this would cover essentially all of my remaining needs.

The file there was most of what I needed: other than tweaking the server and user names, the main things I'd want to change in the basic config were to set HTTP_AllowAnonymous/HTTP_SSLAnonymous to "1" and also add a line to set OnBehalfOfInvokerLst to "LocalDomainAdmins" (which allows XPages to run properly).

Then, I got to the meat of the app deployment. That's all done in the $.appConfiguration.databases object near the bottom, and I set about adding entries to deploy each of the NTFs I'd copied to the data directory and adding the required documents to tie them together. This also went smoothly, as sketched below.
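
Simplified and abbreviated, an entry in that array looks something like this - treat the exact property names as illustrative, since Daniel's sample file is the authoritative reference:

{
  "appConfiguration": {
    "databases": [
      {
        "action": "create",
        "filePath": "clientapp.nsf",
        "title": "Client App",
        "templatePath": "clientapp.ntf",
        "documents": [
          {
            "action": "create",
            "computeWithForm": true,
            "items": {
              "Form": "AppConfig",
              "AppName": "clientapp"
            }
          }
        ]
      }
    ]
  }
}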

The Final Scripts

The final form of the setup is pretty clean. The Dockerfile is very similar to the above, with just an added line to copy in the config file:

FROM hclcom/domino:12.0.0

COPY program/ /opt/hcl/domino/notes/latest/linux/
COPY --chown=notes java.policy /local/notes/.java.policy
COPY --chown=notes domino-config.json /tmp/
COPY --chown=notes data.zip /tmp/

The docker-compose.yaml file is longer, but I think pretty explicable. It maps the ports, sets up some volumes for the various persistent-data portions of Domino, and configures the environment variables for setup:

services:
  clientapp:
    build: .
    ports:
      - "1352:1352"
      - "80:80"
      - "443:443"
    volumes:
      - data:/local/notesdata
      - ft:/local/ft
      - nif:/local/nif
      - translog:/local/translog
      - daos:/local/daos
    restart: always
    environment:
      - LANG=en_US.UTF-8
      - CustomNotesdataZip=/tmp/data.zip
      - SetupAutoConfigure=1
      - SetupAutoConfigureParams=/tmp/domino-config.json
volumes:
  data: {}
  ft: {}
  nif: {}
  translog: {}
  daos: {}

Miscellaneous Notes

In doing this, I came across a few things that are worth noting for anyone diving into it clean as I was:

  • Docker Compose (intentionally) doesn't rebuild images on docker compose up when they already exist. Since I was changing all sorts of stuff, I switched to docker compose build && docker compose up.
  • Errors during server autoconfig don't show up in the console output from docker compose up: if the server doesn't come up like you expect, check in "/local/notesdata/IBM_TECHNICAL_SUPPORT/autoconfigure.log". It's easy for a problem to gum up the whole works, such as when using "computeWithForm": true on a created document throws an exception.
  • Daniel's example autoconf file above places the admin user ID in "/local/notesdata/domino/html/admin.id", so it will be accessible via http://servername/admin.id after the server comes up. Alternatively, you could snag it by copying it via Docker commands.
  • This really drives home the desperate need for a full web-based admin app for Domino.

All in all, this was a delight to work with. Next, I should be able to make a script that generates the config JSON for me based on all the app's NTFs, and then include that whole thing as part of the Maven build in a distribution ZIP. That will be pretty neat.

Version 3.0 of the Domino Open Liberty Runtime

Dec 2, 2021, 4:46 PM

When I last talked about my Domino Open Liberty Runtime project, I mentioned the various main tasks I'd been doing in gearing up for a 3.0 release.

Well, it's been a while since then, but so it goes with open-source projects sometimes. Fortunately, I've had some time here and there to dust it off some more and take care of a lot of lingering tasks I had assigned to the 3.0 release, enough to make it to the finish line. Some of these I found interesting to note, and so I'll note them here.

Java Runtime Shifts

One of the neat tricks the project does is to auto-download a specified Java runtime for you: you can pick your version (e.g. 8, 11, or 17) and your flavor (HotSpot, OpenJ9, or GraalVM CE) and the tooling will do the work of downloading it on your behalf.

Originally, this used just AdoptOpenJDK, the previous go-to home of open-source Java builds. When I added GraalVM, I generalized the code to handle those downloads as well. Like AdoptOpenJDK's binaries, they're hosted on GitHub, but under a different organization and using a different naming scheme.

That refactoring paid off now: AdoptOpenJDK moved to the Eclipse Foundation and became the Adoptium project: much the same idea, but a new home. During that move, though, IBM stopped pushing OpenJ9 builds to that organization and instead now hosts them itself, under the name Semeru. I assume this was all due to some wrangling over the use of "JDK" and other political scuffling, but either way I had to deal with it.

Fortunately, though the Semeru home page is standard IBM fare, the binaries themselves are still hosted on GitHub, and the organization still mirrors the original AdoptOpenJDK/Adoptium style. So I made another class for that, but the logic is largely the same.

Notes API Feature

Since it's extremely likely that apps deployed adjacent to Domino this way will want to access Domino data, I originally set up deployment to copy the server's Notes.jar into the global "extra JARs" directory for deployed Liberty instances. This would allow apps to make use of lotus.domino classes without having to package Notes.jar inside the WAR.

This works well enough, but it always felt a little wrong: I don't like polluting the server's classpath that way, and it got all the worse because I needed to also bring in an open-source copy of CORBA classes for cases where the JVM in use is newer than 8.

So I decided to take another swing at it, this time packaging it up as a feature to be loaded as desired in Liberty. The project already has a few of these, generally consisting of the Liberty-specific OSGi bundle, the ESA feature file to define it in Liberty, and then an extension on Domino to deploy that to new servers.

In this case, I'd want to do it a little differently: the other ones included custom code on my part, but this one just needs to grab JARs from the Domino installation and make them available to apps. So, for this one, I didn't need the first two components - instead, I'd build the feature JARs on the fly. I went about doing that, and the result is a class that creates a notesApi-1.0 feature automatically. It makes use of the way streams work in Java to automatically build a feature container based on the current Domino build version (in case the API changes), creating the necessary manifests programmatically and then locating Notes.jar, the NAPI, and their CORBA and IBM Commons dependencies.

This new route is nicer all around: it's explicitly opt-in, which I like, it now also provides the NAPI, and it hides the non-API dependencies from the apps.

Generic Tooling

For a while now, I've been toying with the idea of making this less Open-Liberty-specific and adding the ability to deploy different kinds of apps. After all, all Liberty is to this runtime is a forked process, and that could be anything: a different app server, a single JAR file, or something not Java-related at all. Though the tooling will still only support Open Liberty in 3.0, I've been gradually reorganizing the structure to make such a change practical to implement in, say, 4.0.

Rather than the core runtime assuming it's using Liberty, now it uses generic interfaces to represent any kind of server, and then that further drills down into servers that use a JVM, and from there down to a Liberty server.

This sort of genericizing has made the runtime much more service-based internally - it's more indirect and it takes more discipline, but I feel good about it. I picture it now as a central core coordinator (still named OpenLibertyRuntime internally) receiving notifications of changes in the admin NSF and then sending out appropriate messages to the various deployment services. The fact that there's only one such deployment service currently doesn't diminish the splendidness of my mental model.

Reverse Proxy

The reverse proxy implementations that I'd mentioned before haven't really changed much since I introduced them, but that's, I think, a good sign: they do what they're supposed to and don't necessarily need dramatic changes. The Domino-side reverse proxy - the one that lets your Liberty apps look like they're running inside Domino's normal HTTP server - has risen in my estimation since I made it, since it makes for a very-reasonable story. It's essentially like if you use the Equinox extension points to deploy a servlet or web application, except you trade the ability to run within a database context for a much-more-modern development environment.

Release

The full list of closed issues for this release is available on GitHub, and I've uploaded the distribution to OpenNTF.