Putting Eclipse Transformer To Use In Dependency Wrangling

May 24, 2022, 3:46 PM

Tags: jakartaee java

Setting code aside, the backbone of the XPages Jakarta EE Support project is its dependency pool. In it, I use my fork of the p2-maven-plugin to wrangle all the spec and implementation dependencies. Aside from just collecting them, the project's pom does a ton of work to create and reconfigure their OSGi bundle rules to get everything working on Domino.

There have been limitations, though, and some of them have to do with the Jakarta NoSQL project. Though there are side branches of that project using the jakarta.* namespace, the main master branch is still on javax.* for a couple Jakarta dependencies. Historically, I've dealt with this by running a build locally and deploying it to OpenNTF's Maven server. However, this adds a bit of randomness to the mix: if a snapshot build of NoSQL goes out to the main repository that happens to be newer, then building the dependency repository locally might pick up on that instead, since it's named the same thing.

Transformer

Fortunately, IBM wrote the solution for me: Eclipse Transformer. Transformer is a rules engine that translates files (Java classes and related resources, namely) based on configuration - and, while it's generic, it's really designed for the transition from the javax.* to jakarta.* namespaces.

It allows you to do these transformations at runtime or (as I'll be doing here) ahead of time, even if you don't have access to the original source. Though I do have access to the source, it's more useful at the moment to act like I don't.

I'd known about the tool and had seen how it's used heavily by both app servers and implementation vendors to support both old- and new-style uses, and so I've kept it in mind in case the need ever came up. It's a perfect fit for this.

p2-maven-plugin

I considered a couple ways to handle this, but realized the cleanest for now would be to integrate it into the dependency pool generator that I already have, since it fits right in with the OSGi transformations I'm doing.

So I went on over to the p2-maven-plugin fork and got to work. When defining Maven artifacts to bring in, the format looks like this:

<artifact>
    <id>jakarta.servlet:jakarta.servlet-api:4.0.4</id>
    <source>true</source>
</artifact>

Now, Servlet already has a jakarta.* version, but it'll be useful here as an example that avoids the other transformations I'm doing.

My addition is to add a transform configuration option here, with jakarta as the only value for now:

<artifact>
    <id>jakarta.servlet:jakarta.servlet-api:4.0.4</id>
    <source>true</source>
    <transform>jakarta</transform>
</artifact>

...and that'll be it! When that is specified, the code will now run the artifact and its source JAR transparently through Transformer and the version you get in your p2 repository will reflect the transition. And, well, it works perfectly in my case. The resultant NoSQL spec and dependencies are functionally equivalent to the ones in the jakarta.* source branch, but without having to actually change the source files yet. Neat.

Implementation

Though it took a bit to track down the best way to do it, it turned out that Transformer is quite easy to embed into a Java app like the Maven plugin. The majority of the code ends up being effectively Java boilerplate to provide the default values for Jakarta transformation. Truncated, it looks like this:

String inputFileName = t.getAbsolutePath(); // the artifact in ~/.m2/repository
File dest = File.createTempFile(t.getName(), ".jar"); //$NON-NLS-1$
String outputFileName = dest.getAbsolutePath();

Map<String, String> optionDefaults = JakartaTransform.getOptionDefaults();
Function<String, URL> ruleLoader = JakartaTransform.getRuleLoader();
TransformOptions options = /* build TransformOptions object that reads the above variables */

Transformer transformer = new Transformer(logger, options);
ResultCode result = transformer.run();
switch(result) {
case ARGS_ERROR_RC:
case FILE_TYPE_ERROR_RC:
case RULES_ERROR_RC:
case TRANSFORM_ERROR_RC:
	throw new IllegalStateException("Received unexpected result from transformer: " + result);
case SUCCESS_RC:
default:
	return dest;
}

There are plenty of options to specify, but that's really about it. Once given the Jakarta defaults, it will do the right thing in the normal case, both for the compiled class files as well as the source JAR.
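
Incidentally, if embedding isn't called for, the same transformation is available from Transformer's command-line distribution. Roughly - the exact jar name varies by version, so treat this as illustrative:

java -jar org.eclipse.transformer.cli-<version>.jar input.jar output.jar

With no further options, it applies the same default Jakarta renames as the code above.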

I'm not sure if I'll need it in other cases (NoSQL will move over in the main branch eventually), but it's sure handy here and should be useful in a pinch. From time to time, I've run across dependencies that would be useful to include but use old JEE specs, and this could do the trick in those cases too.

Poking Around With JavaSapi

May 19, 2022, 4:49 PM

Tags: dsapi java

Earlier this morning, Serdar Basegmez and Karsten Lehmann had a chat on Twitter regarding the desire for OAuth on Domino and their recollections of a not-quite-shipped technology from a decade ago going by the name "JSAPI".

Seeing this chat go by reminded me of some stuff I saw when I was researching the Domino HTTP Java entrypoint last year. Specifically, these guys, which have been sitting there since at least 9.0.1:

JavaSapi class files in com.ibm.domino.xsp.bridge.http

I'd made note of them at the time, since there's a lot of tantalizing stuff in there, but had put them back on the shelf when I found that they seemed to be essentially inert at runtime. For all of IBM's engineering virtues (and there are many), they were never very good at cleaning up their half-implemented experiments when it came time to ship, and I figured this was more of the same.

What This Is

Well, first and foremost, it is essentially a non-published experiment: I see no reference to these classes or how to enable them anywhere, and so everything within these packages should be considered essentially radioactive. While they're proving to be quite functional in practice, it's entirely possible - even likely - that the bridge to this side of things is full of memory leaks and potential severe bugs. Journey at your own risk and absolutely don't put this in production. I mean that even more in this case than my usual wink-and-nod "not for production" coyness.

Anyway, this is the stuff Serdar and Karsten were talking about, named "JavaSapi" in practice. It's a Java equivalent to DSAPI, the API you can hook into with native libraries to perform low-level alterations to requests. DSAPI is neat, but it's onerous to use: you have to compile down to a native library, target each architecture you plan to run on, deploy that to each server, and enable it in the web site config. There's a reason not a lot of people use it.

Our new friend JavaSapi here provides the same sorts of capabilities (rewriting URLs, intercepting requests, allowing for arbitrary user authentication (more on this later), and so forth) but in a friendlier environment. It's not just that it's Java, either: JavaSapi runs in the full OSGi environment provided by HTTP, which means it swims in the same pool as XPages and all of your custom libraries. That has implications.

How To Use It

By default, it's just a bunch of classes sitting there, but the hook down to the core level (in libhttpstack.so) remains, and it can be enabled like so:

set config HTTP_ENABLE_JAVASAPI=1

(strings is a useful tool)

Once that's enabled, you should start seeing a line like this on HTTP start:

[01C0:0002-1ADC] 05/19/2022 03:37:17 PM  HTTP Server: JavaSapi Initialized

Now, there's a notable limitation here: the JavaSapi environment isn't intended to be arbitrarily extensible, and it's hard-coded to only know about one service by default. That service is interesting - it's an OAuth 2 provider of undetermined capability - but it's not the subject of this post. The good news is that Java is quite malleable, so it's not too difficult to shim in your own handlers by writing to the services instance variable of the shared JavaSapiEnvironment instance (which you might have to construct if it's not present).
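
For illustration, such a shim could look roughly like the following. To be clear, everything here beyond the class names mentioned above is an assumption - the declared type of the services field and the way you obtain the environment instance may well differ:

// Hypothetical sketch only: JavaSapiEnvironment and JavaSapiService live in
// the com.ibm.domino.xsp.bridge.http plugin, and none of this is published API
public static void registerService(JavaSapiEnvironment env, JavaSapiService service) throws Exception {
    // "services" is the instance variable mentioned above; its type is a guess
    java.lang.reflect.Field servicesField = JavaSapiEnvironment.class.getDeclaredField("services");
    servicesField.setAccessible(true);
    @SuppressWarnings("unchecked")
    java.util.Collection<JavaSapiService> services =
        (java.util.Collection<JavaSapiService>) servicesField.get(env);
    services.add(service);
}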

Once you have that hook, it's just a matter of writing a JavaSapiService instance. This abstract class provides fairly-pleasant hooks for the triggers that DSAPI has, and nicely wraps requests and responses in Servlet-alike objects.

Unlike Servlet objects, though, you can set a bunch of stuff on these objects, subject to the same timing and pre-filtering rules you'd have in DSAPI. For example, in the #rawRequest method, you can add or overwrite headers from the incoming request before they get to any other code:

public int rawRequest(IJavaSapiHttpContextAdapter context) {
    context.getRequest().setRequestHeader("Host", "foo-bar.galaxia");
        
    return HTEXTENSION_EVENT_HANDLED;
}

If you want to, you can also handle the entire request outright:

public int rawRequest(IJavaSapiHttpContextAdapter context) {
    if(context.getRequest().getRequestURI().contains("foobar")) {
        context.getResponse().setStatus(299);
        context.getResponse().setHeader("Content-Type", "text/foobar");
        try {
            context.getResponse().getOutputStream().print("hi.");
        } catch (IOException e) {
            e.printStackTrace();
        }
        return HTEXTENSION_REQUEST_PROCESSED;
    }
    
    return HTEXTENSION_EVENT_HANDLED;
}

You probably won't want to, since we're not lacking for options when it comes to responding to web requests in Java, but it's nice to know you can.

You can even respond to tell http foo commands:

public int processConsoleCommand(String[] argv, int argc) {
    if(argc > 0) {
        if("foo".equals(argv[0])) { //$NON-NLS-1$
            System.out.println(getClass().getSimpleName() + " was told " + Arrays.toString(argv));
            return HTEXTENSION_SUCCESS;
        }
    }
    return HTEXTENSION_EVENT_DECLINED;
}

So that's neat.

The fun one, as it usually is, is the #authenticate method. One of the main reasons one might use DSAPI in the first place is to provide your own authentication mechanism. I did it years and years ago, Oracle did it for their middleware, and HCL themselves did it recently for the AppDev Pack's OAuth implementation.

So you can do the same here, like this super-secure implementation:

public int authenticate(IJavaSapiHttpContextAdapter context) {
    context.getRequest().setAuthenticatedUserName("CN=Hello From " + getClass().getName(), getClass().getSimpleName());
    return HTEXTENSION_REQUEST_AUTHENTICATED;
}

The cool thing is that this has the same characteristics as DSAPI: if you declare the request authenticated here, it will be fully trusted by the rest of HTTP. That means not just Java - all the classic stuff will trust it too:

Screenshot showing JavaSapi authentication in action

Conclusion

Again: this stuff is even further from supported than the usual components I muck around in, and you shouldn't trust any of it to work more than you can actively observe. The point here isn't that you should actually use this, but more that it's interesting what things you can find floating around the Domino stack.

Were this to be supported, though, it'd be phenomenally useful. One of Domino's stickiest limitations as an app server is the difficulty of extending its authentication schemes. It's always been possible to do so, but DSAPI is usually prohibitively difficult unless you either have a bunch of time on your hands or a strong financial incentive to use it. With something like this, you could toss Apache Shiro in there as a canonical source of user authentication, or maybe add in Soteria - the Jakarta Security implementation - to get per-app authentication.

There's also that OAuth 2 thing floating around in there, which does have a usable extension point, but I think it's fair to assume that it's unfinished.

This is all fun to tinker with, though, and sometimes that's good enough.

Upcoming Sessions With OpenNTF and at Engage

May 11, 2022, 8:48 AM

Tags: engage openntf

This month contains a few presentations and sessions of note for me, and I realized I should compile a list.

To start out with, I'll be the second presenter at next week's OpenNTF webinar, which will be about the Domino One-Touch Setup capability from V12 onward. The first leg of the presentation will feature Roberto Boccadoro covering its use in the standard case of easing the lives of administrators, while my section will cover more developer-centric needs, particularly my use of it as a tool for repeatable integration-test suites of Domino code. This webinar will take place next week, on May 19th, and you can register for it at https://attendee.gotowebinar.com/register/2937214894267353356.

While I won't personally be attending Engage this year, it's shaping up to be a good conference and OpenNTF and Jakarta EE will be well represented (in chronological order):

  • Ro05. Happy Birthday, OpenNTF! Now What? - on Tuesday, Serdar Basegmez (and possibly other OpenNTF directors) will run our traditional community roundtable, where you can hear about what OpenNTF has been up to and provide ideas about where you'd like us to go.
  • De20. Domino apps CI with Docker - also on Tuesday, Martin Pradny will discuss the use of the NSF ODP Tooling project within a Docker container to automate NSF building within a CI infrastructure in a smooth and reliable way.
  • De17. Domino + Jakarta EE = AppDev In Heaven - on Wednesday morning, Daniele Vistalli will cover the XPages Jakarta EE Support project, going over the myriad Jakarta and MicroProfile specifications that it brings to Domino development and showing how you can bring them to bear to fix a lot of long-standing Domino dev limitations.

If you're attending Engage, I definitely suggest you check out all of those.

So Why Jakarta?

Apr 28, 2022, 4:10 PM

Tags: jakartaee java
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?

I've spent a lot of time over the last while tinkering with the XPages Jakarta EE Support project in particular and Jakarta technologies in general, and I figured it'd be worth discussing a bit why I like this stack and why I think it's worth putting work into.

There are a couple facets to this, I think. Why is it good on its own? Why is it good as a complement or replacement for XPages? And why is it good compared to the other roads offered for Domino developers?

Quick Aside: Spring and Others

Before I get much further, I should mention early on that this isn't so much about Jakarta as opposed to technologies like Spring. Spring is good! It's similar in concept, both because it started from a JEE-aligned mindset and now because Jakarta and MicroProfile have been adopting a lot of the best concepts. It's kind of a "D&D and Pathfinder" situation. While there are some philosophical differences, and Jakarta is (now) run by an open-source organization as opposed to an individual company, the distinction for our purposes isn't important.

This also goes for some other technologies that could potentially be slotted in for server-based app dev, like Vert.x. Vert.x, for its part, often serves different purposes, and so that discussion is also separate.

Technical Reasons

Going into all the specific things that I think are good about JEE technologies would be quite an ordeal, so I'll stick to summarizing some overarching themes that I appreciate.

Presumably as a sign of my own ever-increasing age, I appreciate the staid nature of many aspects of it. While some of that comes from the near-stagnation the stack suffered from towards the end of Oracle's sole stewardship of it, it's good that things like Servlet have remained consistent in important ways since the very beginning. Some aspects have come and some will soon go, but the main aspects have remained pleasantly consistent because they were designed to be simple and largely adaptable. Servlet has its limitations, but they're limitations that don't generally show up for normal use.

I also quite appreciate how annotation-based most of the specs are. This was a good way of moving away from the original "pile of XML" configuration process of the early versions of Java EE while still retaining introspection abilities. What I mean by that is the ability of programs (like a server or an IDE) to look at a Jakarta app and glean important information without having to actually execute the code. As a point of comparison, take this hypothetical version of a REST server, where you declare endpoints programmatically:

public void initServer(ServerConfig config) {
    config.addHandler("/foo", new FooHandler());
    config.addHandler("/foo/bar", new BarHandler());
}

...and then compare that to the annotation-based way of doing it:

@Path("/foo")
public class FooHandler {
	/* snip */

	@GET
	@Path("bar")
	public Object getBar() { /* ... */ }
}

Both could be functionally the same at runtime, but the latter allows tools to inspect the classes statically to provide summaries and capabilities in the UI in a way that would be technically possible but much more difficult otherwise. This is certainly not unique to Jakarta, but it's an important feature of it nonetheless.

Moreover, I think that the stack is morphing itself nicely into a cleaner, modern form. It's been a rocky process, but a lot of the individual specs are either adapting themselves onto CDI or using it as the baseline. As much as I sang the praises of Servlet in the earlier paragraph, you can write a thoroughly-capable app using CDI and JAX-RS without ever caring about much else beyond a data layer.
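
To illustrate the scale of that, a complete working slice of such an app can be as small as the following - the bean and resource here are hypothetical examples, but the annotations are the standard CDI and JAX-RS APIs:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// A CDI bean holding the "business logic"
@ApplicationScoped
class GreetingService {
    String greet() {
        return "hello from CDI";
    }
}

// A JAX-RS resource that has the bean injected into it
@Path("/greetings")
public class GreetingResource {
    @Inject
    GreetingService service;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String get() {
        return service.greet();
    }
}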

This adaptability is also paying off with newer-era work like Quarkus. Quarkus is an intriguing project that combines slices of Jakarta, MicroProfile, and others with the native-compilation capabilities of GraalVM to provide a toolchain that lets you write quite-efficient compiled apps, targeted primarily for Kubernetes deployments where the startup and response time of a single node is very important. This is really solving a lot of problems I don't have, but it's interesting to watch, and to see how these goals feed back into Jakarta with things like CDI Lite.

Jakarta As An XPages Extension Or Successor

XPages was (and is) a fork of a subset of Java EE, with the split happening somewhere just before 2006's Java EE 5. It's a small subset, but you can look down that list of technologies and see a few that remain to this day: Servlet 2.4/2.5, JSF 1.1/1.2 by way of the XPages fork, JavaMail, and a few other miscellaneous packages. DAS and the Extension Library brought in JAX-RS 1.1, so you can add a dash of 2009's Java EE 6 to the pot.

The XPages Jakarta EE Support project started as a mechanism for bringing in a newer JAX-RS version, followed by CDI to replace managed beans and EL 3 to replace XPages's primordial Expression Language support - essentially, as a slowly-growing platform update. In its current form, it brings in a wide slate of technologies, but the fact that it was starting as an extension of an existing ancient Java EE fork made it possible to do this gradually, piece by piece. Really, up until the move to the jakarta.* namespace, it was a process of just glomming compatible parts onto the existing Servlet baseline.

Even after that switch, the historical alignment with the older parts of the stack makes building on it comparatively straightforward. That applies both to me as the person adapting the spec implementations and (hopefully) to a developer actually using them. While XPages predated the annotation-heavy push in JEE as well as CDI entirely, a lot of the core concepts are in common, and I expect that it'd be an easier transition from XPages alone to Jakarta EE than, say, classic Domino web dev to XPages was. It certainly was for me, anyway.

Jakarta As A Cultural Match

This topic covers both my general appreciation for a thoroughly-open-source platform and why I specifically like it in relation to other roads open to Domino developers.

Java EE had for a long time been kind of open source: though Sun and then Oracle held the reins, the specs grew to be free to implement, and over time there flourished a slew of compatible servers, many of which are now or have always been fully open-source.

I like this for a lot of reasons. For one, it's just good as a programmer to have source access. Normally, you can just go by the spec, but having full access to the server's source lets you debug thorny problems when you hit an edge case. While closed-source software certainly has its place, there's just a layer of "all else being equal, source access is better".

Beyond being able to see and debug the source, it's valuable that the platform is open source and the implementations I use are as well, and the whole thing is guided by the extremely-established Eclipse Foundation. While a company handing something over to an open-source organization can sometimes just be a way to usher it to a plausibly-deniable death, the activity around Jakarta EE shows that isn't the case here. While Oracle and IBM still tend to naturally top the charts, it has a diverse pool of contributors, and its fate isn't tied to the interests of a single company. As with closed-source software, sometimes being shepherded by a single company has advantages, but it leaves you more exposed to the winds of their financial incentives.

This all contributes to a platform where I can be comfortable writing a bunch of code with the knowledge that, while it may not be good forever, the path will be at least clear. While there's something of an industry for modernizing old Java EE applications (one our old friend Niklas Heidloff is involved in), it's a task shared by a lot of companies and, indeed, a lot of the work is "replace old vendor-specific code with standards-based code". While nothing can truly prevent you from having a pile of obsolete code other than not writing it in the first place, following a path like this that's shared with a broad slice of the industry is a good way to mitigate the trouble.

And I think that paragraph is what a lot of it comes down to for me. As much as I can be, I want to be out of the business of writing code that doesn't have a built-in "plan B". If Domino magically stopped existing tomorrow, code written in this way wouldn't necessarily directly work elsewhere, but it'd be a much shorter journey than for the Notes-client and classic Domino web code I wrote in my early career. And, really, some of it would directly work elsewhere. The stuff that's just describing a REST entrypoint or a page layout? The stuff that describes the interaction between those and an abstracted data layer? That code doesn't care what your server is, and there's a whole ecosystem of servers ready to do the job. That, there, is what makes this worthwhile for me.

Structure of the Domino Web App Container

Feb 25, 2022, 3:01 PM

A while back, I talked about the uses of HttpService in Domino. In that post, I talked about how the various HttpService implementations take a look at incoming URLs, see if they're something that should be handled on the Java layer, and then either handle them or pass them back to the legacy NHTTP code to do its thing. My fiddling with the XPages Jakarta EE project in recent days has gotten me thinking about this layer again, and I think it'll be interesting to expand on how this whole layer works (at least as I understand it).

Along with this post, it might be useful to peruse the slide deck for AD105 from Lotusphere 2011. There's a lot of good stuff in there, and basically nothing has changed in the intervening 11 years.

The Stack (Conceptually)

The Domino HTTP stack looks conceptually something like this:

Diagram of the Domino HTTP stack

This is, setting aside the specifics, pretty similar to how other app servers of various kinds are laid out. There's some bottom layer that handles the actual network connection, some part just above that that handles interpreting the requests as HTTP (optionally bypassed in some cases), and then an orchestrator that manages the actual apps sitting on the top layer and routes requests as appropriate.

The Stacks (Java-wise)

Before I continue, I think it will be useful to have some stack traces to reference back to, to see what they share in common and where they diverge. These three examples - from an XPage request, an Equinox-registered Servlet, and an OSGi-packaged webapp - all cover the part of the stack from the bottom up until where user code comes into play.

XPages (DesignerFacesServlet is what handles serving an XPage):

     at com.ibm.xsp.webapp.DesignerFacesServlet.service(DesignerFacesServlet.java:103)
     at com.ibm.designer.runtime.domino.adapter.ComponentModule.invokeServlet(ComponentModule.java:600)
     at com.ibm.domino.xsp.module.nsf.NSFComponentModule.invokeServlet(NSFComponentModule.java:1352)
     at com.ibm.designer.runtime.domino.adapter.ComponentModule$AdapterInvoker.invokeServlet(ComponentModule.java:877)
     at com.ibm.designer.runtime.domino.adapter.ComponentModule$ServletInvoker.doService(ComponentModule.java:820)
     at com.ibm.designer.runtime.domino.adapter.ComponentModule.doService(ComponentModule.java:589)
     at com.ibm.domino.xsp.module.nsf.NSFComponentModule.doService(NSFComponentModule.java:1336)
     at com.ibm.domino.xsp.module.nsf.NSFService.doServiceInternal(NSFService.java:725)
     at com.ibm.domino.xsp.module.nsf.NSFService.doService(NSFService.java:515)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:363)
     at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:319)
     at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)


Equinox Servlet (the org.eclipse.equinox.http.registry.servlets extension point):

	at com.example.SomeServlet.service(SomeServlet.java:104)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
	at org.eclipse.equinox.http.registry.internal.ServletManager$ServletWrapper.service(ServletManager.java:180)
	at org.eclipse.equinox.http.servlet.internal.ServletRegistration.handleRequest(ServletRegistration.java:90)
	at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:111)
	at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:67)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
	at com.ibm.domino.xsp.adapter.osgi.OSGIModule.invokeServlet(OSGIModule.java:167)
	at com.ibm.domino.xsp.adapter.osgi.OSGIModule.access$0(OSGIModule.java:153)
	at com.ibm.domino.xsp.adapter.osgi.OSGIModule$1.invokeServlet(OSGIModule.java:134)
	at com.ibm.domino.xsp.adapter.osgi.AbstractOSGIModule.invokeServletWithNotesContext(AbstractOSGIModule.java:181)
	at com.ibm.domino.xsp.adapter.osgi.OSGIModule.doService(OSGIModule.java:128)
	at com.ibm.domino.xsp.adapter.osgi.OSGIService.doService(OSGIService.java:418)
	at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:363)
	at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:319)
	at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)


Web Container (the com.ibm.pvc.webcontainer.application extension point):

	at com.example.ExampleServlet.doGet(ExampleServlet.java:18)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:693)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
	at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1661)
	at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:937)
	at com.ibm.pvc.internal.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:85)
	at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:500)
	at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3810)
	at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:276)
	at com.ibm.pvc.internal.webcontainer.VirtualHost.handleRequest(VirtualHost.java:143)
	at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:931)
	at com.ibm.pvc.internal.webcontainer.WebContainerBridge.handleRequest(WebContainerBridge.java:25)
	at com.ibm.domino.osgi.core.webContainer.WebApplicationsTracker.doService(WebApplicationsTracker.java:141)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.ibm.domino.xsp.adapter.osgi.webContainer.OSGIWebContainerModule.invokeWebAppContainerService(OSGIWebContainerModule.java:207)
	at com.ibm.domino.xsp.adapter.osgi.webContainer.OSGIWebContainerModule.doService(OSGIWebContainerModule.java:178)
	at com.ibm.domino.xsp.adapter.osgi.OSGIService.doService(OSGIService.java:418)
	at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:363)
	at com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:319)
	at com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)

Each one of these has some intriguing lines, but we'll end up focusing on the bottom 5-6 in each.

Anyway, back to the details.

Orchestrator

The bottom three lines of all three stacks are identical, and they show the entrypoint from the C side to Java.

The job of XspCmdManager is to take a bunch of handles and flags given to it by the C side, wrap them into something a little more suitable for polite company, and pass that off immediately to LCDEnvironment. At this level, while the code is clearly focused on getting to the point of handling Servlets, the classes don't implement javax.servlet interfaces - they're all a little more abstract than that. XspCmdManager has a few other responsibilities, but it's best thought of as just the glue layer between the native side and the Java stack.

From there, it passes the request along to LCDEnvironment, as the sole implementation of LCDRequestHandler. Things get a little meatier here. This is the part that loads up all of the HttpService implementations we've discussed before. It uses these services to answer calls to isXspUrl (which basically means "should Java handle this?") and then to handle the incoming requests. The first HttpService (sorted by its getPriority() value) that will handle the incoming request gets it, and here's our first point of divergence above. NSFService is the in-NSF XPages-and-stuff handler, taking care of requests with ".nsf" and either "xsp" or a registered extension in the path. OSGIService, for its part, handles both Equinox-registered servlets and Expeditor webapps, albeit in different ways.

ComponentModules

The next layer is interesting, and it's a part I didn't really talk about in the previous post. While an HttpService can just handle an incoming request directly (as the proxy service in the Domino Open Liberty Runtime project does), the idiom in the Domino stack is to use ComponentModule implementations to do it. These correspond conceptually to web apps deployed from WAR files in a standard app server: each is a cordoned-off blob of user code, with its own ClassLoader and notions for how to access resources.

For an NSF, this is NSFComponentModule. These objects are spawned by NSFService as needed when a matching request comes in for an NSF the first time and create a weird sort of webapp out of the NSF contents (with special handling for Single-Copy XPage Design). It doesn't go as far in that direction as the full web container, but it's enough to run and retain the XPages application. This type of module also opts in to the IServletFactory extension system, where contributors can add Servlets to the module either internally or via an OSGi extension. All ComponentModule types have the ability to opt in to this, but NSFComponentModule is the only one that does in practice.

Though both the Equinox Servlet and the PVC web container app go through OSGIService, here is where they split apart into OSGIModule (for standalone Servlets) and OSGIWebContainerModule.

OSGIModule uses some of the parts of the Equinox Servlet Container support to run an individual Servlet. It's the smallest of the three and mostly just creates the ServletRequest et al implementations, does a little wrapping in Equinox garb, and passes them on to your HttpServlet class.

OSGIWebContainerModule is fancier, since its job is to run (most of) a JEE-style web.xml-based app contained in an OSGi bundle. To do this, it uses a chopped-down fork of WebSphere - indeed, many of the classes in the stack still exist in much the same form in Liberty, but here are intermingled with Domino-specific stuff and some Expeditor PvC detritus. This isn't quite as capable as a full Servlet 2.5 container, lacking things like Filters and various listeners, but it gets the job done.

Module Adaptability

Though this system hasn't in practice grown beyond the built-in implementations to my knowledge, it's a neat little structure. There's some unfinished stuff in there in a mashupmaker package that I guess must have been to do with Lotus Mashups (which is apparently a product that existed at some point), but that's about it as far as extending it.

This is an intriguing layer, though, since it's also the one where the "adapter" Servlet objects from above are actually turned into instances of javax.servlet classes, specifically using classes like LCDAdapterHttpServletResponse. At this layer, you have a good amount of Java scaffolding, but being before the full conversion to javax.servlet classes means that a lot of the limitations in Domino's Servlet support only really come in in the upper echelons. Other than network- and HTTP-layer technologies like WebSocket and HTTP/2 (which are handled at the native layer), it'd be entirely possible to plug into this system with more-modern technologies while still participating cleanly in the environment. For example, you could write an HttpService that declares a higher priority than NSFService and use it to treat an NSF as an entirely-different sort of app, intercepting all URLs prefixed with it. I don't know that it would be a good idea to do so, but it's possible, and it's fun to think about.
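
To sketch what such a service would look like - with the caveat that I'm recalling these signatures from the adapter classes, so treat them as approximate rather than gospel:

import java.io.IOException;
import java.util.List;

import javax.servlet.ServletException;

import com.ibm.designer.runtime.domino.adapter.ComponentModule;
import com.ibm.designer.runtime.domino.adapter.HttpService;
import com.ibm.designer.runtime.domino.adapter.LCDEnvironment;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpServletRequestAdapter;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpServletResponseAdapter;
import com.ibm.designer.runtime.domino.bootstrap.adapter.HttpSessionAdapter;

// Hypothetical service that claims .nsf URLs before NSFService sees them
public class NewfangledNsfService extends HttpService {
    public NewfangledNsfService(LCDEnvironment env) {
        super(env);
    }

    @Override
    public int getPriority() {
        return 1; // whatever value sorts this ahead of NSFService
    }

    @Override
    public void getModules(List<ComponentModule> modules) {
        // No ComponentModules to contribute in this sketch
    }

    @Override
    public boolean doService(String contextPath, String path, HttpSessionAdapter httpSession,
            HttpServletRequestAdapter httpRequest, HttpServletResponseAdapter httpResponse)
            throws ServletException, IOException {
        if (path != null && path.toLowerCase().contains(".nsf")) {
            // Treat the NSF as an entirely-different sort of app here
            return true; // handled; don't fall through to NSFService
        }
        return false; // decline and let the other services take it
    }
}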

What To Do With All This XSP Markup? Redux

Feb 17, 2022, 2:53 PM

Tags: jsf xpages

About a year ago, I wrote a post discussing what I saw as potential future options for all the XPages code we've collectively written over the years. That post was written with the assumption that the future life of an XPages app isn't XPages as it exists today - while that's still a possibility, it's a lot less fun to speculate about.

As it has been for years, this remains on my mind, but my recent addition of JSF to the XPages Jakarta EE project caused it to bubble back to the top of my musings.

Conceptual Tools

Before getting into some speculative specifics, I'd like to first take a step back and assess the concepts we're working with.

In that post, and as I frequently do, I referred back to an old post of mine discussing how an XPage is best thought of as a tree of components, one most-commonly described by way of the XSP markup language. The critical concept there is that there's nothing that inherently ties an XPage to the specific thing that sends HTML to the browser, and all the more so when you're talking about the XML. While "XPages" as an entity on Domino encompasses an entire stack that starts just barely above the bottom of nHTTP, "an XPage" as such is almost always just XML that describes what you want to happen on the page. There are parts of it that are more specific than others, for sure: <xp:inputText/> has equivalents in a million other languages, but value="#{javascript: ...}" starts getting into some gnarly implementation specifics. Still, the point is that anything that can successfully interpret XSP markup can be considered functionally "XPages" for most uses.

The next handy concept to keep in mind is related, and that's that code is data. This is most apparent with an XML-based language like XSP, but it applies to everything. SSJS code is absolutely data: it's just strings that happen to be parsed out at runtime as ASTs and then executed. For SSJS, there's no explicit guarantee that facesContext refers to an instance of javax.faces.context.FacesContext or one of its com.ibm.xsp subclasses. Heck, there's no specific requirement that even referring to the class by name would reference the same thing. It's just data and can be interpreted as the runtime sees fit.

And that goes further: Java bytecode is just data too. OSGi has a mechanism to alter classes at load time that already exists and works great on Domino. Also pertinently, Eclipse Transformer is a tool that translates code (source or compiled) that references javax.* JEE classes to jakarta.* classes, and is used commonly now by webapp runtimes to support both legacy and new apps with the same codebase. Fancy stuff, that.

The Musing

So back to what I've been pondering. I think I did mostly a good job covering the bases in my original post, but time and experience have given me further perspective.

JSF Driver For XSP

When I first mentioned it, I brushed the concept of writing a driver for JSF off a bit - not fully, but I did give it shorter shrift than it deserved.

To begin with, this concept exists in JSF: the view declaration language. Now, it's not actually used for this sort of "transplant" thing, mind you: in practice, it's a concept that was used to transition from JSP as the language to write JSF pages to Facelets and not much else. Still, there's no inherent reason why one couldn't write a VDL driver for JSF that would interpret XSP markup and create components based on it. This is especially true because XSP itself doesn't really contain any real curveballs: it's a pretty basic mapping of tags to component names and attributes to bean properties, with the main hiccup being <xp:this.action>-type complex properties.
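
As a sketch of the core idea - and only that - the view-building half of such a driver might walk the XSP DOM and ask the Application to instantiate matching components. Everything here (the tag map, the attribute handling, the class itself) is illustrative, not a real driver; a real one would implement jakarta.faces.view.ViewDeclarationLanguage and do this work inside buildView:

import java.util.Map;

import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

import jakarta.faces.application.Application;
import jakarta.faces.component.UIComponent;
import jakarta.faces.context.FacesContext;

public class XspTreeBuilder {
    // Hypothetical mapping from XSP tag names to stock JSF component types
    private static final Map<String, String> COMPONENT_TYPES = Map.of(
        "inputText", "jakarta.faces.HtmlInputText",
        "button", "jakarta.faces.HtmlCommandButton"
    );

    public UIComponent buildComponent(FacesContext context, Element xspTag) {
        Application app = context.getApplication();
        String componentType = COMPONENT_TYPES.getOrDefault(
            xspTag.getLocalName(), "jakarta.faces.Panel");
        UIComponent component = app.createComponent(componentType);
        // Simple attributes map over directly; complex properties like
        // <xp:this.action> would need real handling
        for (int i = 0; i < xspTag.getAttributes().getLength(); i++) {
            Node attr = xspTag.getAttributes().item(i);
            component.getAttributes().put(attr.getNodeName(), attr.getNodeValue());
        }
        // Recurse into child tags to build the component tree
        NodeList children = xspTag.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child instanceof Element) {
                component.getChildren().add(buildComponent(context, (Element) child));
            }
        }
        return component;
    }
}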

Components

I've also given some thought to that tag-to-component mapping. One way to do that would be to make an interpreter that sees <xp:inputText/> and, instead of translating it to com.ibm.xsp.component.UIInputText, translates it to a newer stock component. At the time, I brushed this off as being likely too complex to be workable, as XPages's nature as a hard fork of JSF meant that things like the javax.faces.component.UIComponent#_xspGetStateId method would prove intractable.

I'm less sure of this difficulty now, though. The switch from javax.faces to jakarta.faces means there's room for coexistence on the same classpath, as I now have in the JEE project. This means there's a lot of room for adapting older code unchanged by way of using wrapper objects and proxies.

For the former, what I mean is that, while a JSF 4.x implementation like MyFaces 4 now has no knowledge of what a javax.faces.component.UIInput is, it doesn't have to: all it needs is a subclass of jakarta.faces.component.UIComponent that can fit into a tree and respond to normal JSF calls like processUpdates. The "real" JSF stack in front could pass in incoming form data from the client just the same as it does now, and a wrapped legacy XPages class could handle it exactly as it does now. Things get more complicated than that, but not impossibly so.
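
A minimal sketch of that wrapper idea, assuming a hypothetical toLegacyContext translation layer (the legacy and modern class names are real; the wiring is purely illustrative):

import jakarta.faces.component.UIComponentBase;
import jakarta.faces.context.FacesContext;

public class LegacyComponentWrapper extends UIComponentBase {
    private final javax.faces.component.UIInput delegate;

    public LegacyComponentWrapper(javax.faces.component.UIInput delegate) {
        this.delegate = delegate;
    }

    @Override
    public String getFamily() {
        return "legacy.wrapper"; // illustrative
    }

    @Override
    public void processUpdates(FacesContext context) {
        // The legacy component handles the incoming data exactly as it does now
        delegate.processUpdates(toLegacyContext(context));
    }

    private javax.faces.context.FacesContext toLegacyContext(FacesContext context) {
        // Hypothetical: wrap the new-style context in a javax.faces subclass
        throw new UnsupportedOperationException("left as an exercise");
    }
}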

This is similar to all the Servlet-object wrapping and unwrapping I do in the JEE support project. These wrappers interpret and route equivalent method calls to the delegates they're wrapping, and so Servlet 5 code doesn't care that it's working with a Servlet 2.4 request, and similarly the Servlet 2.4 code can continue to know nothing at all about Servlet 5. The glue code handles the simple wrapper-based translations between the layers.

So, in this way, the XPages fork of JSF could remain essentially untouched. As long as the runtime does things like inserting an appropriate old-style FacesContext object into javax.faces.context.FacesContext and the like, no one needs to be the wiser.

And this is where Java object proxies could come into play. Where I've above said things like "an instance of javax.faces.context.FacesContext" or "an... object", that doesn't even need to be a real class that you construct with new Foo(). While Java's built-in proxies only work with interfaces, libraries like Javassist can generate proxy objects for true classes as well. One could have a proxy object that checks to see if an incoming method is compatible with the new style and use that one, or otherwise route to a wrapped class, and yet remain class-cast compatible with old code. I'm not sure this would be required if wrapping was done fully, but it's good to know as an option.
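
With Javassist, that kind of class-based proxy looks roughly like this sketch - the routing rule (try the modern object, fall back to the legacy one) is illustrative only, and a real implementation would have to account for diverging parameter and return types:

import javassist.util.proxy.MethodHandler;
import javassist.util.proxy.ProxyFactory;

public class FacesContextProxyFactory {
    public static javax.faces.context.FacesContext createProxy(
            javax.faces.context.FacesContext legacy,
            jakarta.faces.context.FacesContext modern) throws Exception {
        ProxyFactory factory = new ProxyFactory();
        // Subclass the legacy abstract class so callers get a class-cast-
        // compatible object
        factory.setSuperclass(javax.faces.context.FacesContext.class);
        MethodHandler handler = (self, thisMethod, proceed, args) -> {
            try {
                // Route to the modern object when an equivalent method exists
                return modern.getClass()
                    .getMethod(thisMethod.getName(), thisMethod.getParameterTypes())
                    .invoke(modern, args);
            } catch (NoSuchMethodException e) {
                // Otherwise fall back to the wrapped legacy implementation
                return thisMethod.invoke(legacy, args);
            }
        };
        return (javax.faces.context.FacesContext) factory.create(
            new Class<?>[0], new Object[0], handler);
    }
}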

Code

Then there's user code. I was going to have a bunch of stuff to say in this section, but it's actually all largely covered by the steps that would be necessary for keeping runtime code compatible, if that were the route to take. If user code calls javax.faces.context.FacesContext#getCurrentInstance, then it'd get a legacy object; if it calls jakarta.faces.context.FacesContext#getCurrentInstance, then it'd get the new-era one. That'd be the same as the work necessary to keep the old parts chugging along.

Now, if the old parts were to change - were the task to be to make it so that the com.ibm.xsp classes are based on JSF 4.x - then there'd be some fiddling to do for user code. But, again, it'd be largely the same idea. You could either translate the code as it's loaded (for Java) or executed (for SSJS) or you could pass wrapper objects around. So all the same ideas apply.

Coexistence

I've also been thinking a lot about the notion of an "XPages 2" or the like existing alongside XPages as it is now. For example, you could imagine a variant of "XPages" that is indeed a JSF view definition language that interprets XSP markup, but which doesn't attempt to guarantee pure compatibility with existing code. Maybe it'd get 80% of the way there - it'd interpret components in largely the same way, but wouldn't guarantee that every little bit of code that dives down into the runtime implementation would work identically, and wouldn't guarantee that every fiddly detail of the XPages lifecycle and its optimizations would execute in the exact same way.

In such a scenario, there wouldn't be any particular need to remove or change the old stuff. This is the scenario that JEE app servers all faced with the jakarta.* move: you can't realistically expect every app to be transformed (automatically or otherwise) to work with the new versions, especially because old apps may target even-older specs that have since had other breaking changes.

In Domino, there are a few ways one could go about accomplishing this. One way would be to do kind of like what I do for JSP and JSF: tell NSFService to handle a given extension and then write a factory to handle it. In this way, you could declare a new extension, say ".xspx" (note: please do not use this extension), and then route incoming requests to a handler that's like normal XPages but not 100% compatible.
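
For reference, the extension-registration half of that is tiny; in the JEE project it amounts to a call along these lines (an internal, unsupported method, so the usual caveats apply):

// Teach NSFService to route the new extension to Java instead of legacy HTTP;
// a matching servlet factory then has to actually handle the requests
com.ibm.domino.xsp.module.nsf.NSFService.addHandledExtensions(".xspx");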

Alternatively, you could do something I've pondered for a while, which is to make another HttpService implementation that would check for a flag in the database referenced in incoming .nsf-containing URLs to see if it's set to "new mode" and, if so, interpret the request however which way it would like. Such a service could disregard all old rules for URL handling if it wanted, and could worry much less about the potential of cross-contamination between request types. It'd be more work, but is an intriguing possibility.

Conclusion

So is any of this the right way to go about dealing with our dear old friend XPages? I don't know - maybe. This is all pretty off-the-cuff, and I haven't necessarily thought through all the implications, but I do feel like there's more wiggle room here than I'd originally assumed. It's interesting to think about, at the very least.

JSF in the XPages Jakarta EE Support Project

Feb 11, 2022, 2:31 PM

Tags: jakartaee jsf
  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?

When I talked about adding Jakarta NoSQL to the JEE support project, I mentioned how that in a way completed the pitch for this project as a full development mechanism. With data access and MVC+JSP in place, now it provides tools to build REST-based or server-rendered UIs backed cleanly by Domino data. Neat!

But, much like I had previously danced around the question of data access, there was still a spectre lurking in the background, a long-standing part of the Java/Jakarta EE platform: JSF, now named Jakarta Server Faces. This has unsurprisingly been gnawing at the back of my mind, since "upgrade JSF" has been one of the longest-running requests for XPages, starting basically right after JSF 1.2 came out with a smattering of technologies XPages didn't get.

While the notion of un-forking XPages is intriguing, it's its own big pile of work, and not in this project's bailiwick to address. What this project can do, though, is bring current JSF to an NSF alongside XPages. This proposition has a lot going on, so I think it'll be worth discussing both the practical elements and the implications.

How Is JSF Doing Nowadays, Anyway?

To start out with, it will be instructive to look at how JSF has fared since XPages split off from it in the JSF 1.1 era. We'll use the version history from Wikipedia as a starting point:

  1. 2004 - JSF 1.0
  2. 2004 - JSF 1.1 (minor release)
  3. 2006 - JSF 1.2 (feature release, included in Java EE)
  4. 2009 - JSF 2.0 (major release with significant improvements)
  5. 2010 - JSF 2.1 (minor release)
  6. 2013 - JSF 2.2 (feature release)
  7. 2017 - JSF 2.3 (feature release)

So... this is a real mixed bag, huh? Since XPages, JSF received a few major feature releases, progressing to 2.3. But, uh, the gaps in the years, though - that's ominous. And 2017 was the last feature update? Hrm.

But wait - when I tweeted about it, the screenshot says "JSF 4.0". What's going on there?

Well, like all active JEE specs, Faces received a major-version bump for semantic-versioning purposes when moving to the jakarta.* namespace. Version 3.0 came out in 2020, but was functionally identical to 2.3 from 2017, just with the API package names changed.

After that point, though, the future is in the Eclipse Foundation's hands, and each Jakarta spec is deciding its fate for future releases. Some specs, like JSP, are focusing on non-breaking releases that include just a handful of fixes and clarifications. Faces, though, seems to have been the focus of some pent-up desires for change that are coming to fruition now that there's headroom for it.

Faces 4.0 gets another major-version bump because it involves some breaking changes that go beyond just the package names, including outright removing old deprecated features in favor of better modern options. Beyond that, it gains some outright new features - not the sort of features that would change a developer's life, but useful stuff.

Does this mean that JSF is in the prime of its life? Well, time will tell, and merely having developers settle what was presumably very old business doesn't guarantee a vibrant future, but it's a nice sign. In any event, it's not dead, so it has that going for it. Plus, the repositories for "ecosystem" libraries like PrimeFaces and Tobago are quite active, which is important.

So Is It Good? Is This How We Should Write Apps?

Maybe! I'm not sure. I mean, there's no getting around the general popularity of client-JS-first app dev, and the server side of that is covered well by JAX-RS. That said, current popularity isn't everything when it comes to development: it's all about the problems it solves. For the type of development usually done for Domino, server-side apps fit quite well, and even an exceedingly-adept developer can often get results quicker in an integrated stack of this type.

In any event, I expect I'll give it a shot for a bit and see how it shakes out.

How Does This Even Work? Doesn't It Conflict With XPages?

I'm glad you asked! Part of the problem I'd tangled with in this project from the start was that the aging versions of the various JEE specs - and the XPages JSF fork - are essentially unmovable obstacles. If I want to have a project that enhances Domino in place rather than replacing it in whole or in part, I have to deal with the stuff that's there.

Fortunately, like with pseudo-updating the core Servlet spec, the javax.*-to-jakarta.* namespace move is my savior. Honestly, being a trademark stickler may be the best thing Oracle's ever done from my perspective. Thanks to this switch, all the concerns API-side about distinguishing versions disappeared. While I might know that javax.faces.context.FacesContext and jakarta.faces.context.FacesContext are competing versions of the same spec, Java doesn't know - they're completely unrelated.

However, as with some other components, the API isn't the whole story. While the API packages all changed for Jakarta EE 9, the implementations generally kept their original packages. XPages is based on the original JSF implementation, which is now Mojarra over at Eclipse. This implementation uses the com.sun.faces namespace. While most of the XPages-specific stuff is under com.ibm.xsp, all those Sun-branded classes are still floating around, like com.sun.faces.RIConstants and com.sun.faces.context.FacesContextImpl. Moreover, XPages still uses com.sun.faces prefixes for stored application properties. For example, there's an ApplicationAssociate object that Sun-JSF and XPages use to keep track of app-specific information, and it stashes this in the conceptual NSF application. Using Mojarra, these sorts of classes and properties conflict, and things got hairy, with me trying to stash the com.sun stuff to the side per-request, with only partial success.

Fortunately, Mojarra isn't the only game in town: Apache MyFaces is alive and well, including with an actively-developed 4.0 branch. This implementation does not derive from the original, and doesn't use the com.sun.faces namespace. As an intriguing tidbit, Notes (but not Domino) actually includes MyFaces 1.1; I'm guessing it's another thing in there to support some kind of "Social" foofaraw.

Anyway, MyFaces was my ticket to success. Not only did it remove the problem of needing to walk on eggshells with package and attribute names, but the fact that it's a wholly-different implementation removed some of the other weird problems I was dealing with from the table.

Required Infrastructure Improvements

Unlike some of the more staid specs (like JSP) or cutting-edge ones (like MVC and NoSQL), both JSF implementations lean heavily on Servlet-spec features that showed up after Domino's 2.4 implementation. While they fortunately don't require filters, they do heavily use listeners of many stripes.

So, like when I had to hack together RequestDispatcher support to implement MVC, I had to do similarly here to support attribute listeners and manually kick off lifecycle event notifications.
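
A hedged sketch of what "manually kicking off lifecycle notifications" can look like when the container won't do it for you - the listener registry here is hypothetical, but the event and listener types are standard Servlet API:

import java.util.ArrayList;
import java.util.List;

import jakarta.servlet.ServletContext;
import jakarta.servlet.ServletRequestEvent;
import jakarta.servlet.ServletRequestListener;
import jakarta.servlet.http.HttpServletRequest;

public class RequestLifecycleNotifier {
    private final List<ServletRequestListener> listeners = new ArrayList<>();

    public void addListener(ServletRequestListener listener) {
        listeners.add(listener);
    }

    public void fireRequestInitialized(ServletContext context, HttpServletRequest request) {
        ServletRequestEvent event = new ServletRequestEvent(context, request);
        for (ServletRequestListener listener : listeners) {
            listener.requestInitialized(event);
        }
    }

    public void fireRequestDestroyed(ServletContext context, HttpServletRequest request) {
        ServletRequestEvent event = new ServletRequestEvent(context, request);
        // Destruction events fire in reverse registration order
        for (int i = listeners.size() - 1; i >= 0; i--) {
            listeners.get(i).requestDestroyed(event);
        }
    }
}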

I realize I'm kind of backing my way into making a full Servlet 5 container on top of Domino's rickety old one. There's way more work to make that an actual thing, but it's intriguing seeing it take shape just as incidental work from implementing other parts.

Next Steps

For the next steps, I'm not quite sure. For what it is, I think it's all working pretty well. However, while JSF on its own provides a lot of functionality, it's stuff like the component libraries from PrimeFaces that makes it a full toolkit. In a normal case, the server itself wouldn't provide component packs like PrimeFaces or Tobago - the app itself would bring them in as Maven/etc. dependencies and they would be included as part of the WAR. This would probably work in an NSF, but the experience of developing in an NSF when you have JARs in the classpath is... not great. Accordingly, I'm pondering including one or both of those in the project. It'd have tradeoffs, but it might make sense, or maybe it'd make sense to have associated projects that package them as distinct libraries to include. We'll see.

Video Series On The XPages Jakarta EE Project

Feb 7, 2022, 3:54 PM

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?

Over the last two weeks, Graham Acres and I recorded a video series for OpenNTF about my XPages Jakarta EE Support project, which has seen a flurry of development in the last few months. The 15-part series is up on YouTube:

The project itself saw the release of version 2.3.0 today, which is the first release with the Jakarta NoSQL driver I blogged about recently.

I think the project has turned into a pretty-interesting "platform update" for XPages, and I hope the video series captures that a bit. I'm still mulling over a sort of "thesis statement" about the whole thing, but for now describing the various new capabilities and how they interact will have to suffice.

Implementing a Basic JNoSQL Driver for Domino

Jan 25, 2022, 1:36 PM

  1. Updating The XPages JEE Support Project To Jakarta EE 9, A Travelogue
  2. JSP and MVC Support in the XPages JEE Project
  3. Migrating a Large XPages App to Jakarta EE 9
  4. XPages Jakarta EE Support 2.2.0
  5. DQL, QueryResultsProcessor, and JNoSQL
  6. Implementing a Basic JNoSQL Driver for Domino
  7. Video Series On The XPages Jakarta EE Project
  8. JSF in the XPages Jakarta EE Support Project
  9. So Why Jakarta?

A few weeks back, I talked about my use of DQL and QRP in writing a JNoSQL driver for Domino. In that, I left the specifics of the JNoSQL side out and focused on the Domino side, but that former part certainly warrants some expansion as well.

Background

As a quick overview, Jakarta NoSQL is an approaching-finalization spec for working with NoSQL databases of various stripes in a Jakarta EE app. This is as opposed to the venerable JPA, which is a long-standing API for working with RDBMSes in JEE.

JNoSQL is the implementation of the Jakarta NoSQL spec, and is also an Eclipse project. As a historical note, the individual components of the implementation used to have Greek-mythological names, which is why older drivers like my Darwino driver or original Domino driver are sprinkled with references to "Diana" and "Artemis". The "JNoSQL" name also pre-dates its reification into a Jakarta spec - normally, spec names and implementations aren't quite so similarly named.

The specification is broken up into two main categories. The README for the implementation describes this well, but the summary is:

  1. "Communication" handles interpreting JNoSQL CRUD operations and actually applying them to the database.
  2. "Mapping" handles what the app developer interacts with: annotating classes to relate them to the back-end database and querying object repositories.

An individual driver may include code for both sides of this, but only the Communication side is obligatory to implement. A driver would contribute to the Mapping side as well if it wants to provide database-specific higher-level concepts. For example, the Darwino driver does this to provide explicit annotations for its full-text search, stored-cursor, and JSQL capabilities. I may do similarly in the Domino driver to expose FT search, view operations, or DQL queries directly.
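
To ground the Mapping side a bit, app code using any compliant driver looks something like this - the entity class is a hypothetical example, while the annotations are from the mapping API as it existed around this time:

import jakarta.nosql.mapping.Column;
import jakarta.nosql.mapping.Entity;
import jakarta.nosql.mapping.Id;

@Entity("Person") // in the Domino driver, maps to documents with Form = "Person"
public class Person {
    @Id
    private String id; // the document's UNID in the Domino driver

    @Column("FirstName")
    private String firstName;

    // getters/setters omitted
}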

Jakarta NoSQL handles Key-Value, Column, Graph, and Document data stores, but we only care about the last category for now.

Implementation Overview

Now, on to the actual implementation in question. The handful of classes in the implementation fall into a few categories, which the following sections cover: the core collection manager, entity conversion, and query conversion.

Implementation Details

The core entrypoint for data operations is DefaultDominoDocumentCollectionManager, and JNoSQL specifies a few main operations to implement, basically CRUD plus total count (sketched in code after this list):

  • insert and its overrides handle taking an abstract DocumentEntity from JNoSQL and turning it into a new lotus.domino.Document in the target database.
  • update does similarly, but with the assumption that the incoming entity represents a modification to an existing document.
  • delete takes an incoming abstract query and deletes all documents matching it.
  • select takes an incoming abstract query, finds matching documents, converts them to a neutral format, and returns them to JNoSQL. This is what my earlier post was all about.
  • count retrieves a count for all documents in a "collection". "Collection" here is a MongoDB-ism and the most-practical Domino equivalent is "documents with a specific Form name".
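In code form, the shape is something like the sketch below. The signatures are simplified from the jakarta.nosql.document.DocumentCollectionManager interface, which also includes overloads such as bulk and TTL variants:

import java.util.stream.Stream;
import jakarta.nosql.document.DocumentDeleteQuery;
import jakarta.nosql.document.DocumentEntity;
import jakarta.nosql.document.DocumentQuery;

// A structural sketch only - the real class implements the full
// DocumentCollectionManager interface and holds a database connection
public class DominoCollectionManagerSketch {
    public DocumentEntity insert(DocumentEntity entity) {
        // Create a new lotus.domino.Document and write the entity's pairs as items
        throw new UnsupportedOperationException("sketch");
    }

    public DocumentEntity update(DocumentEntity entity) {
        // Like insert, but finds and modifies the existing back-end document
        throw new UnsupportedOperationException("sketch");
    }

    public void delete(DocumentDeleteQuery query) {
        // Convert the query to a document selection, then remove the matches
        throw new UnsupportedOperationException("sketch");
    }

    public Stream<DocumentEntity> select(DocumentQuery query) {
        // Convert the query, execute it (DQL/QRP), and map the results to entities
        throw new UnsupportedOperationException("sketch");
    }

    public long count(String documentCollection) {
        // Count documents whose Form item matches the collection name
        throw new UnsupportedOperationException("sketch");
    }
}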

Entity Conversion

The insert, update, and select methods have as part of their jobs the task of translating between Domino's storage and JNoSQL's intermediate representation, and this happens in EntityConverter.

Now, this point of the code has some... nomenclature-based issues. There's lotus.domino.Document, our legacy representation of a Domino document handle. Then, there's jakarta.nosql.document.Document: this oddly-named interface actually represents a single key-value pair within the conceptual document - roughly, this corresponds to lotus.domino.Item. Finally, there's jakarta.nosql.document.DocumentEntity, which is the higher-level representation of a conceptual document on the JNoSQL side, and this contains many jakarta.nosql.document.Documents. This all works out in practice, but it's important to know about when you look into the implementation code.
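A tiny example makes the naming more apparent. This uses the JNoSQL API directly, though app code would normally go through the Mapping layer:

import jakarta.nosql.document.Document;
import jakarta.nosql.document.DocumentEntity;

// The DocumentEntity is the whole conceptual document; each Document
// inside it is a single key-value pair, akin to a lotus.domino.Item
DocumentEntity entity = DocumentEntity.of("Person");
entity.add(Document.of("FirstName", "Foo"));
entity.add(Document.of("LastName", "Fooson"));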

The first couple methods in this utility class handle converting query results of different types: QRP result JSON, QRP result views, and generic DocumentCollections. Strictly speaking, I could remove the first one now that it's unused, but there's a non-zero chance that I'll return to it if it ends up being efficient down the line.

Each of those methods will eventually call to toDocuments, which converts a lotus.domino.Document object to an equivalent List of JNoSQL Documents (i.e. the individual key-value pairs). Due to the way JNoSQL works, this method has no way to know which fields the higher level will actually want, so it attempts to convert all items in the document to more-common Java types. There's much more work to do here, some of it based on just needing to support other types (like improving rich text handling) and some of it based on needing a better Notes API (like proper conversion from Notes times to java.time).
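As a highly-simplified sketch of that direction (the real method handles many more value types and does more error guarding than shown here):

import java.util.ArrayList;
import java.util.List;
import jakarta.nosql.document.Document;
import lotus.domino.Item;
import lotus.domino.NotesException;

// Sketch: convert each item on the Domino document to a JNoSQL Document
static List<Document> toDocuments(lotus.domino.Document doc) throws NotesException {
    List<Document> result = new ArrayList<>();
    for(Object itemObj : doc.getItems()) {
        Item item = (Item)itemObj;
        List<?> values = item.getValues();
        if(values == null || values.isEmpty()) { continue; }
        // Unwrap single values; keep multi-value items as lists
        result.add(Document.of(item.getName(), values.size() == 1 ? values.get(0) : values));
    }
    return result;
}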

In the other direction, there's the method that converts from a JNoSQL DocumentEntity to a Domino document, which is used by the insert and update methods. This takes the known common incoming types and converts them to Domino item values. Like the earlier methods, this could use some work in translating types, but that's also something that a better Notes API could handle for me.
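Correspondingly, a trimmed-down sketch of that direction:

import jakarta.nosql.document.Document;
import jakarta.nosql.document.DocumentEntity;
import lotus.domino.NotesException;

// Sketch: write each JNoSQL key-value pair to the target Domino document
static void toDominoDocument(DocumentEntity entity, lotus.domino.Document target) throws NotesException {
    for(Document doc : entity.getDocuments()) {
        // The real version special-cases temporal types, collections, etc.
        target.replaceItemValue(doc.getName(), doc.get());
    }
}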

Query Conversion

The QueryConverter class has a slightly-simpler job: taking JNoSQL's concept of a query and translating it to a document selection.

The jakarta.nosql.document.DocumentQuery type does a bit of double duty: it's used both for arbitrary queries (Foo='Bar'-type stuff) as well as for selecting documents by UNID. The select method covers that, producing a QueryConverterResult object to ferry the important information back to DefaultDominoDocumentCollectionManager.

The core work of QueryConverter is in getCondition, where it performs an AST-to-DQL conversion. JNoSQL has a couple mechanisms for querying entities: explicit Java-based queries, implicit queries based on repository definition, or a SQL-like query language. Regardless of what the higher level does to query, though, it comes to the Communication driver as this tree of objects (technically, the driver can handle the last specially, but by default it arrives parsed).

Fortunately, this sort of work is a common idiom. You start with the top node of the tree, handle it based on its type and, as needed, recurse down into the next node. So if, for example, the top node is an EQUALS, all this converter needs to do is return the DQL representation of "this field equals that value", and so forth for other comparison operators. If it encounters AND, OR, or NOT, then the job changes to making a composite query of that operator plus the results of converting whatever the operator is applied to - which is where the recursion back into the same method comes in.
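Abbreviated heavily, the idiom looks like this. This sketch builds DQL as plain strings and skips value escaping, whereas the real converter covers more operators and builds its output more carefully:

import java.util.List;
import java.util.stream.Collectors;
import jakarta.nosql.TypeReference;
import jakarta.nosql.document.Document;
import jakarta.nosql.document.DocumentCondition;

static String toDql(DocumentCondition condition) {
    Document doc = condition.getDocument();
    switch(condition.getCondition()) {
    case EQUALS:
        // Leaf node: emit the comparison directly, e.g. Foo = 'Bar'
        return doc.getName() + " = '" + doc.get(String.class) + "'";
    case AND: {
        // Composite node: the "value" is a list of sub-conditions to recurse into
        List<DocumentCondition> children = doc.get(new TypeReference<List<DocumentCondition>>() {});
        return children.stream()
            .map(c -> toDql(c))
            .collect(Collectors.joining(" and ", "(", ")"));
    }
    case NOT:
        // NOT wraps a single sub-condition
        return "not (" + toDql(doc.get(DocumentCondition.class)) + ")";
    default:
        throw new UnsupportedOperationException("Unhandled operator: " + condition.getCondition());
    }
}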

Future Work

The main immediate work to do here is enhancing the data conversion: handling more outgoing Domino item types and incoming Java object types. A good deal of this can be done as-is, but doing some other parts reliably will be best done by changing out the specific Notes API in use. I used lotus.domino because it's present already, but it's a placeholder for sure.

There are also a bunch of efficiency tweaks I can make: more lazy loading in conversion, optimizing data fetching for specific queries, and logging DQL explain results for developers.

Beyond that, I'll have to consider if it's worth adding extensions to the mapping side. As I mentioned, the Darwino driver has some extensions for its JSQL language and similar concepts, and it's possible that it'd be worth adding similar things for Domino, in particular direct FT searching. That said, DQL does a pretty good job being the all-consuming target, and so translating JNoSQL queries to DQL may suffice to extract what performance Domino can provide.

So we'll see. A lot of this will be based on what I need when I actually put this into real use, since right now it's partly hypothetical. In any event, I'm looking forward to finding places where I can use this instead of explicitly coding to Notes API objects.

Building a Full Domino Image for JUnit Tests

Jan 23, 2022, 3:57 PM

Tags: docker domino
  1. Tinkering With Testcontainers for Domino-based Web Apps
  2. Adding Selenium Browser Tests to My Testcontainers Setup
  3. Building a Full Domino Image for JUnit Tests

Last year, I wrote about how I built images to use Testcontainers to run tests against a Liberty app that uses a Domino runtime. In that situation, I used the Domino Docker image from Flexnet but then pulled out the program files and stock data, mixed with pre-configured server support files from the repository.

Recently, though, I've had a need to have a similar setup, but altered in four ways:

  1. It should fully run Domino, not just use the data and program files for the runtime in Liberty
  2. It should not require pre-populating a server ID, notes.ini, and names.nsf from outside - that is, it should be self-configuring
  3. It should also have an extra component installed, one that must be installed after Domino is configured but before the image is fully built
  4. On the next launch, I need a post-install agent to run for final configuration, and the tests need to wait on that

Additionally, it should still be runnable with the same basic tools - the image should be built and the container started and destroyed by automated tests. This stricture comes into play in weird ways.

One-Touch Setup

The second requirement - that the container be self-configuring - is handled adroitly by One-Touch Setup, the feature in Domino 12 and above that lets you specify a configuration JSON file. I used this here in basically the same way as I described in that post: the script sets up and certifies a throwaway domain with a known admin username + password, and also deploys a few NTFs on first proper launch. Since this server intentionally doesn't communicate with the outside world, I don't need to provide any external files like a certificate authority or server ID.
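For reference, the config is shaped along these lines. This is a heavily-truncated sketch with the field names written from memory, so treat it as illustrative rather than as a known-good One-Touch file:

{
    "serverSetup": {
        "server": {
            "type": "first",
            "name": "testserver/TestOrg",
            "domainName": "TestDomain"
        },
        "org": {
            "orgName": "TestOrg",
            "certifierPassword": "..."
        },
        "admin": {
            "firstName": "Test",
            "lastName": "Admin",
            "password": "..."
        }
    },
    "autoConfigPreferences": {
        "startServerAfterSetup": false
    }
}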

Switching the Baseline

Initially, I continued to use the official image from Flexnet. However, I had run into some trouble with multi-stage builds using this earlier. I lack enough Docker knowledge to be sure, but my suspicion is that it declares /local/notesdata as an expected externally-provided volume. This on its own is fine, but it interacts oddly with automated container building. The trouble comes in when you do something like this:

FROM domino-docker:V1201_11222021prod
ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/domino-config.json"
COPY --chown=notes:wheel domino-config.json /local/
RUN /local/start.sh

RUN /some/script/that/uses/notesdata

When building with the automated mechanisms, /local/notesdata is depopulated by the time it hits that second RUN line. I'm not sure why this is different from a command-line docker build, but it is what it is.

Fortunately, the "community" version of the image builder from https://github.com/IBM/domino-docker doesn't exhibit this behavior. I had been considering switching over already, so this made the decision all the easier.

Breaking Up Setup Into Stages

With this change in hand, I was able to more-properly break up the setup into multiple stages. This is important both for requirement #3 above and because it allows Docker to cache intermediate results. Though I want the server to auto-configure itself when building, I don't need the results of that to be different every run, and thus I can save tons of time in subsequent launches if I handle these caches well.

So my Dockerfile started to look something like this:

FROM hclcom/domino:12.0.1
# Relay Domino output to the container logs
ENV DOMINO_DOCKER_STDOUT "yes"
ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/DominoAutoConfig.json"

COPY --chown=notes:wheel domino-config.json /local/DominoAutoConfig.json
COPY --chown=notes:wheel notesdata/* /local/notesdatatemp/

RUN /domino_docker_entrypoint.sh

# Copy the app executable and support scripts
USER root
COPY appinstall /local/appinstall
RUN chmod +x /local/appinstall/install.sh && /local/appinstall/install.sh

# Back to notes user for the Domino entrypoint
USER notes

But here I hit another distinction between docker build and the automated mechanisms: that RUN /domino_docker_entrypoint.sh line would execute and get to the point where it emits "Application configuration completed successfully", but then would not actually exit properly. Again, I'm not sure why: the JSON file tells the server to not launch after configuration, and indeed it doesn't, but the script just doesn't relinquish control - though it does when built from the command line.

So I rolled up my sleeves and wrote a wrapper script to exit once the setup process is known to be done:

#!/usr/bin/env bash
# Run the one-touch setup in the background, capturing its output
/domino_docker_entrypoint.sh > /tmp/domsetup &
# Block until the known completion message appears; grep -q exits on first match
until tail -f /tmp/domsetup | grep -q "Application configuration completed successfully"; do sleep 1; done

This runs the setup process, redirecting the output to a temp file, in the background. Then, it watches that file for the known ending text and exits when observed. A little janky in multiple ways for multiple reasons, but it works. That allows image build to progress normally to the next step in all environments, caching the results of the initial server setup. I replaced the RUN /domino_docker_entrypoint.sh line above with copying in and executing this script, and all was well.

Post-Install Agent

After the "appinstall" step, I have the peculiar need to then run code in a Notes context to fiddle with some components that aren't configured earlier. For now, I've settled on writing an agent that runs on server start, then signing and enabling it in the domino-config.json file:

{
    "action": "create",
    "filePath": "postinstall.nsf",
    "title": "Post Install",
    "templatePath": "/local/notesdatatemp/postinstall.ntf",
    "signUsingAdminp": true,
    "agents": [
        {
            "name": "PostInstall",
            "action": "sign"
        },
        {
            "name": "PostInstall",
            "action": "enable"
        }
    ]
}

Originally, I had this agent emit the text "postinstall done" when it finished, so that the Testcontainers runtime could look for that to know when it's safe to execute tests. However, this added a good while to the launch stage: at this point, launching the container has to wait on final post-install tasks from Domino, then signing the DB with adminp, then actually executing the agent. This added about a minute to the test pre-run time, and thus was a prime target for further caching.

So I altered the agent to check whether it actually needs to do its work and, if so, to shut down the server when it's done:

// Within a standard Java agent class extending lotus.domino.AgentBase;
// needToDoWork() and doWork() stand in for the real post-install logic
Session session = getSession();

if(needToDoWork()) {
    doWork();
    // Quit the server so the Dockerfile's RUN step can complete
    session.sendConsoleCommand(session.getServerName(), "q");
}

Then, I altered my Dockerfile to amend the end bit:

# ...snip
# Back to notes user for the Domino entrypoint
USER notes

# Run the server once to wait for postinstall to execute, then shut down
WORKDIR /local/notesdata
RUN /opt/hcl/domino/bin/server

# Now let the entrypoint launch again, without requiring further configuration

Again: janky, but it works. Now, all of the setup stages are cached on subsequent runs.

Building the Image in Testcontainers

In my original post, I showed using dockerfile-maven-plugin to build the image just before executing the tests. This works, but it complicated the pom.xml a bit and meant that running the tests in an IDE meant first running a Maven build. Not the end of the world, but not ideal.

Fortunately, Testcontainers can also build images. That meant that, rather than building the image in Maven and then re-using the same-named container in Java, I could do it all Java-side. To do this, I created a subclass of GenericContainer to centralize the configuration of the container:

package example;

import java.io.IOException;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.HttpWaitStrategy;
import org.testcontainers.images.builder.ImageFromDockerfile;

public class DominoAppContainer extends GenericContainer<DominoAppContainer> {
    public static class DominoAppImage extends ImageFromDockerfile {
        public DominoAppImage() {
            // Name the image and keep it around (deleteOnExit=false) so that
            // the cached build applies on subsequent runs
            super("example-app-it-testcontainers:1.0.0", false);
            // Each resource for the image build context is enumerated explicitly
            withFileFromClasspath("Dockerfile", "/docker/Dockerfile");
            withFileFromClasspath("domino-config.json", "/docker/domino-config.json");
            withFileFromClasspath("firstrun.sh", "/docker/firstrun.sh");
            withFileFromClasspath("notesdata/exampledata.ntf", "/docker/notesdata/exampledata.ntf");
            withFileFromClasspath("notesdata/postinstall.ntf", "/docker/notesdata/postinstall.ntf");
            withFileFromClasspath("appinstall/install.sh", "/docker/appinstall/install.sh");
            withFileFromClasspath("appinstall/appinstall.jar", "/docker/appinstall/appinstall.jar");
        }
    }

    public DominoAppContainer() {
        super(new DominoAppImage());

        // The image is built locally, so never try to pull it from a registry
        withImagePullPolicy(imageName -> false);
        withExposedPorts(80);
        // Consider the container ready once HTTP responds without a server error
        waitingFor(
            new HttpWaitStrategy()
            .forPort(80)
            .forStatusCodeMatching(code -> code >= 200 && code < 400)
        );
        // Relay the container's output to the test runner's console
        withLogConsumer(frame -> {
            switch (frame.getType()) {
                case STDERR:
                    try {
                        System.err.write(frame.getBytes());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    break;
                case STDOUT:
                    try {
                        System.out.write(frame.getBytes());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    break;
                default:
                case END:
                    break;
            }
        });
    }
}

It's a little fiddlier in that now I need to enumerate each classpath resource to copy in, but that also makes it all the more portable. I removed the dockerfile-maven-plugin execution from the pom.xml and switched to this instead. Since I name the image and tell it to not auto-delete on completion, this retained the desired caching behavior.
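From there, a test class can lean on Testcontainers' JUnit 5 integration to manage the container lifecycle. A hypothetical usage sketch:

import org.junit.jupiter.api.Test;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class ExampleAppIT {
    // Static, so the container is started once and shared by the class's tests
    @Container
    static final DominoAppContainer container = new DominoAppContainer();

    @Test
    void testServerResponds() {
        // getMappedPort resolves the host-side port forwarded to container port 80
        String baseUrl = "http://" + container.getHost() + ":" + container.getMappedPort(80);
        // ...exercise the app over HTTP from here
    }
}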

Conclusion

Overall, this whole process brought the test pre-launch time (after the first run) down from 3-5 minutes to about 20 seconds while also reducing the split between the Maven config and Java. Much more bearable when making small tweaks and re-running the suite, and it makes what's going on a bit easier to follow.