Lessons From Fiddling With RunJava

Mar 3, 2020, 9:49 AM

Tags: java websphere

The other day, Paul Withers wrote a blog post about RunJava, which is a very-old and very-undocumented mechanism for running arbitrary Java tasks in a manner similar to a C-based addin. I had vaguely known this was there for a long time, but for some reason I had never looked into it. So, for both my sake and general knowledge, I'll frame it in a timeline.

History

I'm guessing that RunJava was added in the R5 era, presumably to allow IBM to use existing Java code or programmers for writing server addins (with ISpy being the main known one), and possibly as a side effect of the early push for "Java everywhere" in Domino that fell prey to strategy tax.

Years later, David Taib made the JAVADDIN project as a "grown up" version of this sort of thing, bringing the structure of OSGi to the idea. Eventually, that morphed into DOTS, which became more-or-less supported in the "Social Edition" days before meeting a quiet death in Domino 11.

The main distinction between RunJava and DOTS (other than RunJava still shipping with Domino) is the thickness of the layer above C. DOTS loads an Equinox OSGi runtime very similar to the XPages environment, bringing in all of the framework support and dependencies, as well as services of its own for scheduled tasks and other options. RunJava, on the other hand, is an extremely-thin layer over what writing an addin in C is like: you use the public static void main structure from runnable Java classes and you're given a runNotes method, which are directly equivalent to the main and AddinMain functions used by C/C++ addins.

Utility

Reading back up on RunJava got my brain ticking, and it primarily made me realize that this could be a perfect fit for the Open Liberty Runtime project. That project uses the XPages runtime's HttpService class to load immediately at HTTP start and remain resident for the duration of the lifecycle, but it's really a parasite: other than an authentication-helper servlet, the fact that it's running in nHTTP is just because that's the easiest way to run complicated, long-running Java code. For a while, I considered DOTS for this task, but it was never a high priority and has aged out of usefulness.

So I decided to roll up my sleeves and give RunJava a shot. Fortunately, I was pretty well-prepared: I've been doing a lot of C-level stuff lately, so the concepts and functions are familiar. The main run loop uses a message queue, for which Notes.jar provides an extremely-thin wrapper in the form of lotus.notes.internal.MessageQueue. And, as Paul reminded me, I had actually done basically this same thing before, years ago, when I wrote a RunJava addin to maintain a Minecraft server alongside Domino. I'd forgotten about that thing.

Lessons

Getting to the thrust of this post, I think it's worth sharing some of the steps I took and lessons I learned writing this, since RunJava is in a lot of ways much more hostile a place for code than the cozy embrace of Equinox.

#1: Don't Do This

The main lesson to learn is that you probably don't want to write a RunJava task. It was already the case that DOTS was too esoteric to use except for those with particular talent and needs, and that one at least had the advantage of being kind-of documented and kind-of open source. RunJava gives you almost no affordances and imposes severe restrictions, so it's really just meant for a situation where you were otherwise going to write an addin in C but don't want to have to set up a half-dozen compiler toolchains.

#2: Lower Your Dependencies Dramatically

The first big general thing to keep in mind is that RunJava tasks, if they're not just a single Java class file, are deployed right to the main Domino JRE, either in jvm/lib/ext or in ndext. That means that any class you include in your package will be present in absolutely everything Java-related on Domino, so you're in a minefield if you want to bring in any logging packages or third-party frameworks that could conflict with something present in the XPages stack or in your own higher-level Java code.

This is a fiddlier problem than you'd think. A release or so ago, IBM or HCL added a version of Guava to the ndext folder and it wreaked havoc on the version my client's app was using (which I think came along for the ride from ODA). You can easily get into situations where one class for a library is loaded from XPages-level code and another is loaded from this low level, and you'll end up with mysterious errors.

Ideally, you want no possible class conflicts at all. I took the approach of outright white-labeling some (compatibly-licensed) code from Apache and IBM Commons to avoid any possibility of butting heads with other code on the server. I was also originally going to use the Darwino NAPI or Domino JNA for a nicer Message Queue implementation, but scuttled that idea for this reason. It's Notes.jar or bust for safe API access, unfortunately.

#3: Use the maven-shade-plugin

This goes along with the above, but it's more a good tool than a dire warning. The maven-shade-plugin is a standard plugin for a Maven build that lets you blend together the contents of multiple JARs into one, so you don't have to have a big pool of JARs to copy around. That on its own is handy for deployment, but the plugin also lets you rename classes and aggregate and transform resources, which can be indispensable capabilities when making a safe project.
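
To give a sense of the mechanics, here's a rough sketch of a shade configuration that relocates an embedded dependency into a project-specific package; the version and package names are illustrative, not the exact ones from my project:

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-shade-plugin</artifactId>
	<version>3.2.2</version>
	<executions>
		<execution>
			<phase>package</phase>
			<goals>
				<goal>shade</goal>
			</goals>
			<configuration>
				<relocations>
					<!-- Move the embedded copy into a project-specific package so it
					     can't collide with another copy elsewhere on the server -->
					<relocation>
						<pattern>org.apache.commons.lang3</pattern>
						<shadedPattern>com.example.runjava.commons.lang3</shadedPattern>
					</relocation>
				</relocations>
			</configuration>
		</execution>
	</executions>
</plugin>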

#4: Make Sure Static Initializers and Constructors are Clean

What I mean by this one is that you should make sure that your JavaServerAddin subclass does very little during class loading and instantiation. The reason I say this is that, until your class is actually loaded and running, the only diagnostic information you'll get is that RunJava will say that it can't find your class by name - a message indistinguishable from the case of your class not even being on the server at all. So if, for example, your class references another class that's missing or unresolvable at load time (say, pointing at a class that implements org.osgi.framework.BundleActivator, to pick one I hit), RunJava will act like your code isn't even there. That can make it extremely difficult to tell what you're doing wrong. So I found it best to make very little static other than JVM-provided classes and to delay creation/lookup of other objects and resources (say, translation bundles) until the code is inside the runNotes method. Once execution reaches that point, you'll be able to get stack traces on failure, so debugging becomes okay again.
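
As a minimal sketch of that shape (assuming the lotus.notes.addins.JavaServerAddin base class; the bundle name is hypothetical), the idea is to keep the class skeleton inert until runNotes:

import java.util.ResourceBundle;

import lotus.notes.addins.JavaServerAddin;

public class ExampleAddin extends JavaServerAddin {
  // Static members stick to JVM-provided types only; anything that can't
  // resolve at class-load time makes RunJava claim the class doesn't exist
  private static final String ADDIN_NAME = "ExampleAddin";

  // Loaded lazily inside runNotes rather than statically
  private ResourceBundle translations;

  public ExampleAddin() {
    // Keep the constructor essentially empty for the same reason
  }

  @Override
  public void runNotes() {
    // From this point on, failures produce visible stack traces,
    // so it's safe to look up resources and other classes
    this.translations = ResourceBundle.getBundle("messages"); // hypothetical bundle name

    // ... main message-queue loop goes here ...
  }
}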

#5: Take Care With Threads When Terminating

The Open Liberty runtime makes good use of java.util.concurrent.ExecutorServices to run NotesThread code asynchronously, and I'll periodically execute even a synchronous task in there to make sure I'm working with a properly-initialized thread.

However, when terminating, these services will start to shut down and reject new tasks. So if, for example, you have code that executes on a separate thread and might be run during shutdown, it will likely fail silently and can cause your addin to choke the server.

#6: That Said, It's a Good Idea to Use Threads

A habit I picked up from writing Darwino's cluster replicator is to make your addin's main Message Queue loop very simple and to send messages off to a worker thread to handle. Doing this means that, for complex operations, the server console and the user won't sit waiting on a reply while your code churns through an individual message.

In my case, I created a single-thread ExecutorService and have my main loop immediately pass along all incoming commands to it. That way, the command runner is itself essentially synchronous, but your queue watcher can resume polling immediately. This keeps things responsive and avoids the potential case of the message queue filling up if there's a very-long-running task (though that's less likely here than if you're drinking from the EM fire hose).
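
Here's a sketch of that handoff; the class and method names are hypothetical, and the real loop reads its commands from lotus.notes.internal.MessageQueue:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CommandDispatcher {
  private final ExecutorService commandRunner = Executors.newSingleThreadExecutor();

  // Called from the main MessageQueue loop with each incoming command
  public void dispatch(String command) {
    // Hand the work off so the queue loop can get back to polling immediately;
    // the single-thread executor processes commands in order
    commandRunner.submit(() -> handleCommand(command));
  }

  // Called when the addin is told to quit
  public void shutdown() {
    commandRunner.shutdown();
  }

  // Hypothetical stand-in for the actual command-processing logic
  private void handleCommand(String command) {
    System.out.println("Processing command: " + command);
  }
}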

#7: Really, Don't Do This

My final tip is that you should scroll back up and heed my advice from #1: it's almost definitely not worth writing a RunJava addin. This is a special case because a) the goal of the project is to essentially be a server addin anyway and b) I was curious, but normally it's best to use the HttpService route if you need a persistent task.

It's kind of fun, though.

Targeting Domino for Webapps Incidentally

Feb 11, 2020, 5:26 PM

Tags: java maven

I recently had occasion to break ground on a new web project that uses a Notes runtime and has a web front end, and I figured it would be a perfect opportunity to structure it in a way that is clean, portable, and, while it will run on Domino, doesn't have to use Tycho.

I ended up coming up with a setup that I'm pretty happy with, and so I put up an example on GitHub for anyone else to use as a reference for similar cases.

What Is This, Specifically?

This is an application that consists of a couple main concepts:

  • Maven for project structure and dependencies
  • Core "plain Java" module that contains code that's intended to be portable and doesn't even know it's in a web app
  • JAX-RS-based REST API
  • Client JS web UI written in Stencil and transpiled with Node
  • Standard webapp project for JEE containers such as Liberty
  • Domino project to wrap the app up as an OSGi bundle

What this is specifically not is an XPages project. And, while it can use a Notes runtime and access NSFs, it's also not something that will be stashed inside an NSF, and the "Notes" part is optional and really only included here to show it's possible. The idea is that this is a standard web app first and a Domino thing second.

Project Structure

The project is organized as a Maven module tree like so:

  • domino-webapp: The parent container project just for configuration
    • core
      • webapp-core: This is the main place for UI-independent business logic
    • web
      • webapp-api-jaxrs: This contains the JAX-RS-based REST API, which exposes the core business logic to the web
      • webapp-webui: This contains a Stencil-based JavaScript app. It doesn't need to be Stencil specifically, or even NPM-based at all, but I find Stencil to be a pretty good choice for this
      • webapp-jee: This is the JEE-container web app, containing very little code of its own and just intended to output a WAR
    • domino
      • webapp-domino: This is the Domino equivalent to the previous project, but contains a chunk of adapter code to get things working, plus some Maven configuration to generate an appropriate OSGi bundle
      • webapp-dist-domino: This is a distribution project that pulls in the Domino OSGi bundle and creates a p2 repository, and then a "site.xml" file for the benefit of importing into an NSF Update Site

How the OSGi Part Works

In going deeper into what's going on, I'm going to start at the end: how to go from a normal web app to a Domino-friendly OSGi bundle. If you're not familiar with what I mean by "web app" in general and in a Domino plugin in particular, it's the sort of thing that Sven Hasselbach wrote a series about a few years back: a Java/Jakarta EE Servlet application using the "WebContainer" extension point in the Domino HTTP runtime.

Traditionally, these projects are built as plain-old Eclipse projects, where you drop a bunch of JARs for your framework of choice into a plug-in project and write your code in there, using Eclipse's Plug-in Development Environment. This works well enough as far as it goes, but it puts constraints on how you do development, in particular pretty much requiring Tycho if you transition to a Maven structure, which would then have massive penalties for the rest of your project.

Fortunately, the thing about an OSGi bundle is that it's really just a JAR file with special metadata, and so it doesn't actually have to be created with a toolchain that has full knowledge of OSGi. As long as the required files end up in the right places inside the JAR (which is in turn just a ZIP file), you're good to go.

In this case, I used the maven-bundle-plugin to decorate the "MANIFEST.MF" file with appropriate OSGi metadata and, importantly, to embed all the compile-scoped project dependencies for me. That second part means that Maven will handle the job of steps 7-10 in Sven's example: it'll bring in the dependencies from Maven, copy them into the right place in the final JAR, and set up the Bundle-ClassPath header to point to them.
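
The relevant chunk of the pom looks something like this sketch (names and versions are illustrative, and it assumes "bundle" packaging); Embed-Dependency and Embed-Transitive are what do the copying and Bundle-ClassPath generation:

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<version>4.2.1</version>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>com.example.webapp.domino;singleton:=true</Bundle-SymbolicName>
			<!-- Copy compile-scoped dependencies into the bundle and build Bundle-ClassPath -->
			<Embed-Dependency>*;scope=compile</Embed-Dependency>
			<Embed-Transitive>true</Embed-Transitive>
			<Embed-Directory>lib</Embed-Directory>
		</instructions>
	</configuration>
</plugin>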

It's important to note the "compile-scoped" qualifier there. The Maven projects themselves also depend on a couple things that I know will be present on Domino already, namely IBM Commons, Apache Wink, the Web Container adapter, and Notes.jar. Though it'd probably work if I copied those into the JAR, that would be asking for trouble unnecessarily, so I mark them as "provided" in Maven, and then the bundling process knows to skip over them.

The other OSGi-specific element is the "plugin.xml" file, used by Domino's Equinox framework to identify that the bundle provides a web app. In this case, I put that file in "src/main/resources", where it ends up being copied to the root of the JAR. One downside here is that you have to know ahead of time what the syntax for this file is: since Eclipse won't know this is a plug-in project, you won't get the GUI shown in Sven's example.
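
For reference, the general shape of that file is something like the following; I'm going from memory of the extension point's element names here, so double-check against a known-good example like Sven's before relying on it:

<?xml version="1.0" encoding="UTF-8"?>
<plugin>
	<extension point="com.ibm.pvc.webcontainer.application">
		<contextRoot>/exampleapp</contextRoot>
		<contentLocation>WebContent</contentLocation>
	</extension>
</plugin>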

There are some other Domino-specific considerations, but I'll return to them later. For now, those parts will cover the OSGi "bridge".

Core: Using the Notes API

The core project doesn't have a lot going on, and that's intentional. It does, though, demonstrate how you can use the JSON-B API for JSON serialization and the Notes API for accessing NSFs and other Notes stuff.

The important parts happen in the project dependencies. The first one is simple: I want to use the JSON-B API, but I want to declare that it will be provided one way or another by the environment. The second one includes Notes.jar by way of my P2 Repository Provider, since it's still not available as a normal Maven dependency.

This project contains a single class, which just gathers a bit of information about the runtime environment to be shown as a JSON object. The important part here is my use of NotesThread when calling the Notes API. Since this project can run on non-Domino containers, I can't assume that all threads will already be Notes-friendly, so I use that route. You can also call NotesThread.sinitThread() or go other ways, but I like containing the calls into a separate thread outright in simple cases.
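
A minimal sketch of that isolate-it-on-its-own-thread pattern, using the NotesThread constructor that takes a Runnable (the class and the session call here are illustrative, not the project's actual code):

import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class ServerNameFetcher {
  public String fetchServerName() throws InterruptedException {
    String[] result = new String[1];
    // NotesThread handles thread init/term, so this works even on
    // containers that know nothing about the Notes runtime
    NotesThread t = new NotesThread(() -> {
      try {
        Session session = NotesFactory.createSession();
        try {
          result[0] = session.getUserName();
        } finally {
          session.recycle();
        }
      } catch(Exception e) {
        throw new RuntimeException(e);
      }
    });
    t.start();
    t.join();
    return result[0];
  }
}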

JAX-RS

The JAX-RS project is intended to contain JAX-RS configuration and resource classes, and the immediate part to note is once again the dependency set. Here, I targeted specifically JAX-RS 1.1, which is quite old, but is provided by Apache Wink on all Domino installations. I could theoretically bring in RESTEasy for a newer spec version, but 1.1 is capable enough for now and it keeps things simpler.

In the Application implementation class, I enumerate all of the resource classes used in the app. This is equivalent to the text-file-based method common in Wink apps, but it's portable across JAX-RS implementations and has the side benefit of being compiler-checked. Though it's a step up from the old Wink way, it's a big step down from the modern JAX-RS way: in newer containers, you can just let the container find your resources automatically by looking for annotated classes. That doesn't fly on Domino, though, and, while you can hack in something roughly equivalent, it's simpler for now to just enumerate the classes explicitly and remember to add them to this list.

There are only two resources here: a Hello World resource and one to ferry the ServerInfo object out using the JAX-RS environment's JSON serializer (more on that in a bit).
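
A sketch of what that enumeration looks like - the resource class names here are placeholders standing in for the real ones:

import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.core.Application;

public class ExampleApplication extends Application {
  @Override
  public Set<Class<?>> getClasses() {
    Set<Class<?>> classes = new HashSet<>();
    // Every resource has to be listed here by hand, but at least
    // the compiler will catch a missing or renamed class
    classes.add(HelloResource.class);      // hypothetical resource class
    classes.add(ServerInfoResource.class); // hypothetical resource class
    return classes;
  }
}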

The Web UI

The web UI project is complicated, but mostly because NPM-based JavaScript development is complicated. This example uses Stencil, which I quite like, but you can use whatever you'd like: React, Angular, just plain ol' HTML, or whatever.

The important parts here are the use of frontend-maven-plugin to create a Node+NPM environment and build the app and the specific configuration to put the output into "src/main/resources/META-INF/resources". Doing this means that, when this project is wrapped up into a Java-less JAR file, the web resources will be in the "META-INF/resources" directory, which is special on Servlet 3 and above. Any files in there in dependency JARs like this will be visible as if they were in the main web content of your web app.
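
A trimmed-down sketch of that plugin configuration follows; the versions are illustrative, and the actual output path is set in the Stencil build config rather than here:

<plugin>
	<groupId>com.github.eirslett</groupId>
	<artifactId>frontend-maven-plugin</artifactId>
	<version>1.9.1</version>
	<configuration>
		<nodeVersion>v12.14.0</nodeVersion>
	</configuration>
	<executions>
		<execution>
			<id>install-node-and-npm</id>
			<goals>
				<goal>install-node-and-npm</goal>
			</goals>
		</execution>
		<execution>
			<id>npm-install</id>
			<goals>
				<goal>npm</goal>
			</goals>
			<configuration>
				<arguments>install</arguments>
			</configuration>
		</execution>
		<execution>
			<id>npm-build</id>
			<goals>
				<goal>npm</goal>
			</goals>
			<configuration>
				<arguments>run build</arguments>
			</configuration>
		</execution>
	</executions>
</plugin>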

JEE App

The Jakarta EE app is the simplest of the bunch, and the only actual class in there only exists for example purposes.

The work, such as it is, all happens in the Maven configuration. I declare it to be war-packaged, to not complain if there's no "web.xml" file, to bring in the project dependencies, and to specifically include IBM Commons. It also brings in Notes.jar as a compile-time dependency.

The Domino Shims

Back in the Domino module, it's time to talk about the non-OSGi parts. I've mentioned a few things above that require no configuration in a modern web container, but which will require a bit of legwork in Domino. These are generally related to the fact that Domino's servlet container is version 2.4 and it has no idea about newer standards.

  • I bring in an Eclipse Yasson dependency to provide JSON-B support.
    • To bind that to JAX-RS, I wrote a Provider class that knows how to turn any Java object into JSON when a resource says it wants to output JSON (a rough sketch of one follows this list).
    • To register that provider (since it can't be picked up automatically), I subclass the Application class to include it specifically.
  • The ResourcesServlet servlet mimics the Servlet 3 behavior of serving resources out of "META-INF/resources". This specific implementation isn't the best, since it doesn't provide any caching, but it gets the job done and means that the web UI JAR will work the same way on both targets.
  • The RootServlet servlet extends the Wink default REST servlet to shim the ClassLoader around, which avoids a lot of trouble with threads used for web app requests that had previously been used for XPages requests (it's annoying, trust me).
  • I have to include an explicit reference to Wink's JAX-RS provider for some reason to do with bundle class loading.
  • Unlike in the normal web app project, I have to include a "web.xml" file, and this one registers the two servlets above.
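
Here's a rough sketch of what that Yasson-backed provider looks like - simplified and with illustrative naming, so treat it as the general shape rather than the project's exact code:

import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;

import javax.json.bind.Jsonb;
import javax.json.bind.JsonbBuilder;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class JsonBindingProvider implements MessageBodyWriter<Object> {
  private final Jsonb jsonb = JsonbBuilder.create();

  @Override
  public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
    // Handle any object when the resource says it wants JSON output
    return MediaType.APPLICATION_JSON_TYPE.isCompatible(mediaType);
  }

  @Override
  public long getSize(Object t, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
    // Unknown ahead of time; -1 tells the runtime to figure it out
    return -1;
  }

  @Override
  public void writeTo(Object t, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType,
      MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream) throws IOException, WebApplicationException {
    jsonb.toJson(t, entityStream);
  }
}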

Domino Update Site

The second part of the Domino target is the distribution project, which uses the p2-maven-plugin to create a P2 repository. That plugin is a splendid tool for your toolbox and has a lot of capabilities for auto-OSGi-ifying otherwise-non-OSGi projects. In this case, I just want to include the Domino project from the previous step, but I also want to generate an Eclipse feature for it so that it can be imported into an NSF Update Site with some proper metadata.

I also use the p2sitexml-maven-plugin, which takes the newer-style P2 site generated by the previous step and adds a "site.xml" file, which is needed by the NSF Update Site import process if you want to include categories, which I think are nice.
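
A pared-down sketch of the distribution project's p2-maven-plugin configuration looks something like this (coordinates illustrative; the feature definition and the p2sitexml step are omitted here):

<plugin>
	<groupId>org.reficio</groupId>
	<artifactId>p2-maven-plugin</artifactId>
	<version>1.3.0</version>
	<executions>
		<execution>
			<id>generate-p2-site</id>
			<phase>package</phase>
			<goals>
				<goal>site</goal>
			</goals>
			<configuration>
				<artifacts>
					<!-- Pull in the Domino bundle built in the previous module -->
					<artifact>
						<id>com.example:webapp-domino:1.0.0-SNAPSHOT</id>
					</artifact>
				</artifacts>
			</configuration>
		</execution>
	</executions>
</plugin>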

Seeing It In Action

To run the app on Domino, you can do a Maven install on the root, install the update site from the distribution project onto Domino, and then visit "/exampleapp/". You'll be greeted by a vision of beauty like this:

Example Webapp Screenshot

Placeholder garishness aside, it shows the Stencil app loading, using the custom favicon, and making a call to the System Info service. That, in turn, shows using the Notes runtime to get the server's distinguished name. It's left as an exercise for the reader to then put in the thousands of hours of work to make a world-class application.

Caveats!

Since this is a Domino thing, there are important caveats.

The first is one I mentioned earlier: because we're restricted to Servlet 2.4/2.5ish, a lot of things just won't work. Indeed, not even all of the 2.4 spec works, as Filters aren't implemented for some reason. Additionally, outside of Servlet and JAX-RS 1.1, you're pretty much in "BYOB" territory when it comes to other JEE specs. In this example, I brought in Yasson for JSON-P and JSON-B and that was pretty simple, but others (say, CDI) would require a lot more fiddly work.

There's also an extra-special caveat when it comes to JSP. Domino's web container knows about JSP, but requires what it calls a "JSP compiler bridge": a special extension that allows for interpreting JSPs inside the special environment it creates. However, it doesn't actually ship with such a bridge. Notes does (and MyFaces too) for what I assume are "social" reasons, but Domino doesn't. You could probably nab the JSP stuff from Notes and drop it onto Domino, but you'd be getting into weird territory. I tried dropping Jasper into the app, but it ran into ClassLoader-casting trouble... hence the bridge, I guess.

Usefulness

Phew! Admittedly, it's a long walk to get to the point where you can just run a web app, and there are quicker ways to get there. However, I do think this is worth it. With this setup, I have a set of Maven projects that work swimmingly in Eclipse and any other Java IDE, an NPM project that acts like any other, and a JEE container front-end for rapid development. No Designer, no NSF syncing, no Plug-in Development Environment, no Tycho. And, though I don't have the full breadth of JEE available to me, JAX-RS is the main one you need for a client-JS app anyway. It's not an appropriate setup for every app, but it's really nice when it fits.

Domino 11's Java Switch Fallout

Jan 7, 2020, 10:50 AM

Tags: java

In Notes and Domino 11, HCL switched from using IBM's J9 Java distribution to using the OpenJ9 variant of AdoptOpenJDK. This is a lateral move technically - it's still Java 8 - and it's one presumably made in the short term to avoid licensing costs from IBM and in the long term to align better with AdoptOpenJDK.

However, OpenJ9 is not the same as J9, and AdoptOpenJDK is not the same distribution as the previous one, so there are some minor gotchas to look out for.

BASE64 and Other Internal Classes

A couple months back, I wrote a post describing this situation: namely, that some XPages and agents grew to depend on the presence of JVM-internal classes in the com.ibm namespace, particularly com.ibm.misc.BASE64Encoder and its decoder sibling.

The true fix for this is to ferret out uses of these classes in your code base, but that can be difficult. If you have to maintain legacy code, I made a small shim Jar you can drop on your server to map the two BASE64 classes to their sun.misc versions. I intentionally use those classes, even though they're also not for public use, both because they have the same semantics as the IBM ones and to reinforce that the best solution is to use the vendor-independent java.util.Base64 class.
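
For what it's worth, the vendor-neutral replacement is usually a one- or two-line change:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Instead of com.ibm.misc.BASE64Encoder/Decoder or the sun.misc equivalents:
String encoded = Base64.getEncoder().encodeToString("hello".getBytes(StandardCharsets.UTF_8));
byte[] decoded = Base64.getDecoder().decode(encoded);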

java.pol

It's been fairly-common practice for a little while now to create a file named "java.pol" in the Java installation directory to loosen the security policy and get around Domino's bizarrely-strict interpretation of the rules. This came into vogue in favor of editing "java.policy" because this file was (usually) not overwritten during Notes/Domino version upgrades.

However, as Per Lausten discovered, AdoptOpenJDK's distribution does not reference this file, and so its policy changes won't take effect. The upshot of this is that there are three main options to loosen the policy:

  • As Per mentions (via Daniele Vistalli), you can create a file named ".java.policy" in the home directory of the user running Domino and it will be honored.
  • You can go back to editing the "java.policy" file, and re-editing it with each new release
  • You can modify "java.security" to reference "java.pol" again. This is kind of a wash, though, since you'll need to re-edit "java.security" every update anyway
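
Whichever route you take, the contents are the same sort of thing people have been putting in "java.pol" all along - most commonly (and bluntly) something like:

grant {
	permission java.security.AllPermission;
};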

Different Implementation Jars

This last one is much more limited in scope, and may actually be limited in effect to just the NSF ODP Tooling project. In that project, in order to create a Domino-compatible runtime environment for local compilation, I included a couple expected Jars from the Notes/Domino installation in the runtime's classpath. One of these was "ibmpkcs.jar", which covers some security stuff as well as the aforementioned BASE64 classes.

The fix in my case was to just make the resolution of that Jar optional, which should work for the normal case, but it'll be something to keep an eye on in the future.

Winter Project #2.5: XML Schemas for XPages

Jan 2, 2020, 10:59 AM

Tags: xml xpages

After I did my initial port of my XSP completion assistant to LSP4XML, I got to thinking about improvements I could make to it. One of these was the notion of creating XML Schemas for a project's XPages, similar to how the DXL contributor just passes along the schema files that ship with Notes and Domino.

Doing the same with XSP isn't nearly so easy, and comes with a good number of gotchas that make it not only impossible to fully validate an XPage with schemas, but also difficult-to-impractical to generate such schemas on the fly outside of Designer. But first, two asides!

Aside #1: Mechanisms of Content Assistance

At its core, doing content assistance in a text editor is essentially a matter of the editor saying to the assistance plugin "I have a file here, with this content, at this location, and the user's cursor is in this spot. What should I suggest as the next part to type? And, once they've typed it, can you tell me if it is valid?".

In the simplest case, this could be something like a dictionary of words when writing prose. A content assistance plugin for English-the-language could merely consist of a list of known words, and it would take a prompt from the editor of "ca" and suggest "car", "cat", and so forth. There's an upper limit to what that sort of thing can and should do, and just doing that would probably suffice.

Programming languages are more complicated, and often the best route for autocomplete is to basically also be a whole compiler with full knowledge of the language and the structure of not only the current file, but also of other files in the project and all the dependencies. That way, an IDE can suggest types, method names, parameters, and all sorts of complicated external notions.

XML/HTML content assistance is often somewhere in between there. In both the case of my original XSP implementation and the LSP4XML port, the route I took was to let the in-between layer handle knowledge of raw XML mechanics like tags and attributes, but then basically provide a dictionary of known words when asked. So, when the editor said "the user typed xp:v", my code searched through all of its known components and responded with "maybe xp:view or xp:viewPanel".

I decided over the last couple days to take another route, based on the fact that XML is designed to be a toolkit for making fully-described and -validated markup.

Aside #2: XML Validation

XML itself is something of a meta-language: it has its rules, but it's intended to be a format used to describe other, more-specific grammars. On its own, it has the notion of whether or not a document is well-formed: this means specifically that it follows all the syntax rules of XML, like the proper use of brackets, attribute quotes, element hierarchy, and all that. Well-formedness is comparatively easy to enforce, and basically any XML editor does, but it doesn't say anything about whether a given XML document is a valid example of its kind.

That job is left up to a secondary definition, usually done via either a Document Type Definition file or an XML Schema, though there are more ways than that. These are the things that say, for example, that in XHTML the root element must be <html>, and that element contains zero or one <head> element and one <body> element, and so forth.

With the aid of a document schema, an XML processor (such as a structured text editor) can verify first that a document is well-formed XML and second that all of the elements that it can match up with the schema are valid. Moreover, it can itself maintain the list of potential elements, attributes, and values, and display them in a clean and fast way without the content-assistance plugin having to worry about parsing and substring matching.

That "...elements that it can match up..." caveat comes in to play because XML allows mixing grammars in a single file and the processor may or may not require that all of those grammars be strictly defined. What identifies these grammars in a file is the use of an XML "namespace", which is a URI that may or may not actually go anywhere. For example, the MathML namespace is "http://www.w3.org/1998/Math/MathML" and the SVG one is "http://www.w3.org/2000/svg". Both of those are URLs that resolve to pages, but that's just because the W3C is being nice; the only requirement is that they are unique URIs. In an individual document, these namespaces may be matched up to prefixes, and one can be defined as the base namespace for elements that don't have prefixes.

XSP as an XML Grammar

Which brings us to XPages. XSP-the-markup-language is XML-based and so every XSP document must be well-formed. Designer won't even give you the time of day if, say, you leave off the closing </xp:view> tag at the end of a file. Additionally, XSP has many of the trappings of a fully-validated XML language. It uses namespaces as its prime identifier to identify tags and you can't just write any old tag in one of those namespaces or give an existing tag some random new attribute. Moreover, many attributes and elements have special rules about their content: you can't put text inside <xp:this.data/>, and <xp:text disableOutputTag="foo"/> is illegal. These are all specialities of XML Schema.

XSP does not, though, have any schema. Most of the validation happens at build time (including of XML well-formedness, apparently), and some is even delayed until runtime (like that invalid boolean above). There are some good reasons for this. First and foremost is the fact that XSP really describes Java objects that are themselves contributed programmatically, by way of XSP library classes. Once your path to validity runs through arbitrary Java code, it means that you don't have the option to statically compare the file to a schema - you have to run that code inside your environment. Additionally, the XPages namespace ("http://www.ibm.com/xsp/core") isn't even defined in a single place, and has controls declared across several plugins. Same goes for the ExtLib's "http://www.ibm.com/xsp/coreex" namespace, and then there's the Custom Control namespace, "http://www.ibm.com/xsp/custom", which can't even be determined at a global level and has to be synthesized repeatedly on a project-by-project basis. And then, beyond all of that, XSP has rules that just can't be expressed in XML Schema at all, so Designer would have to have a secondary validator anyway, dampening the benefits of codifying a schema.

But Could It Work, Though?

Still, I figured that, if I could craft a schema that's good enough for basic use, I could get some extra completion and suggestion assistance while hopefully marking everything as vague enough to not run into trouble with the flexible nature of XSP.

My biggest ally here is that, while the core and ExtLib namespaces are technically open at any point, it would be such bad form for, for example, a company-specific library to declare its components as part of it that I can ignore that possibility entirely. In effect, for a given Domino release, those namespaces are sealed and fully describable ahead of time.

So I set out to see if I could make XML schemas to contribute to LSP4XML to give it some more knowledge of what it's working with. Like when I generated the JSON used by the original content assistance, the tack I took was to write a servlet that runs in an XPages context and emits the files I want based on a stock runtime. My initial version of this was too clever: XML Schema allows for subclassing, inheritance, and references, and I originally set out to have the schemas match the structure of the underlying component trees. I got pretty close on this, but ended up spending all my time fighting namespace collisions, and in particular the really-subtle one where the roots of the tree are actually defined in the "http://www.ibm.com/xsp/jsf/core" namespace, which is an IBM fork of JSF's "http://java.sun.com/jsf/core". As a side note, there are no concrete component definitions there, so you can't get any use out of it in an XSP file; it's just an interesting implementation detail.

The Current Results

When I took a step back and gave it another whack, I ended up coming up with something that pretty much works. I'm still not sure if it'll actually be the way I keep going, though. For one, there are currently a bevy of open bugs, some of which may end up being showstoppers. Beyond that, though, since XSP still requires a secondary processor to check validity, it may end up being the most reasonable route to just go back to offering completion ideas and maybe some post-processing to check.

Still, this is one of those types of projects that's worth it just as a learning experience. I hadn't even really looked at XML Schemas since around when they came out, and this sure was a good way to get a crash course. And, in the meantime, I think that the generated schema files are pretty-interesting artifacts.

Winter Project #2: Maven P2 Repository Resolver

Dec 28, 2019, 2:12 PM

The second project I took on this past week was related to the first, and also relates to my ongoing struggles with Tycho.

While working on the NSF ODP Tooling, I figured that it could be a good candidate to move away from Tycho and to maven-bundle-plugin or bnd directly. Since I've been Mavenizing the Domino OSGi bundles for a good while now, and the tooling doesn't have any OSGi-dependent tests in it, it seemed like it could go smoothly. Unlike historical precedent, though, my enemy in this endeavor wasn't Domino, but rather Eclipse.

Repository Layouts

One of the big things that makes working with OSGi bundles - at least specifically ones in the Eclipse style - difficult with Maven is that they're generally provided using a repository layout called "p2". This is the evolution of the "site.xml" Update Site style and shares a lot of characteristics. In fact, a p2 repository will often have a "site.xml" file alongside its "artifacts.xml" and "contents.xml" (often Jar'd up) to provide backwards compatibility. It's how we package up XPages plugins and how once upon a time IBM provided the XPages artifacts for Tycho use. As a live example, Eclipse 2019-12 is distributed via such a repository.

Maven has its own repository layout, variously called "Maven", "Maven2", "m2", or just "default". This serves a similar purpose, but is structured differently: whereas p2 just has the repo and its "features" and "plugins" directories (and, potentially, composite repositories), Maven's repository system is organized like a conceptual folder tree based on translating a Group ID (like, say, "org.openntf.maven") into a successive series of subdirectories (like "org/openntf/maven"), followed by a directory for an Artifact ID, which in turn contains directories for each version, and finally within there are any of the actual files that make up a given named "artifact". As a live example, Maven Central is browsable in this manner.

Translating Between Them

Though the two layouts share a common core job - hosting Jar files (mainly) - they diverge enough in how the tools expect the metadata to be laid out that they're difficult to mesh. Tools like bnd can often work with whatever, and even Tycho can try to find OSGi bundles via Maven dependencies, but it's not smooth.

Over time, a pseudo-standard of adding p2 repositories to Maven has emerged, but it's only actually used as a marker to pass along to true Eclipse tools. The most common in our sphere is this construct, seen in projects like ODA:

<repository>
	<id>notes</id>
	<layout>p2</layout>
	<url>${notes-platform}</url>
</repository>

There, you reference the XPages update site somewhere, and then Tycho can use that to resolve dependencies for things like Require-Bundle: com.ibm.xsp.core. However, it's used only for Tycho's specific OSGi needs. You can't bring in the Tycho plugins and then have a non-Tycho module in your tree declare a dependency like that. Tycho has an implementation class for this, but it's intentionally stubbed out.

My Needs

The reason why tools that can work with both are so heavily slanted to the specific task of generating a true OSGi environment is that that's usually what you want. If you're designing, say, an Eclipse plugin or Eclipse-derived product, you want all of your tooling to know about OSGi from top to bottom, and that's where Tycho excels. It makes sure that all of your dependencies are correct and everything is OSGi-friendly.

This is as opposed to something like maven-bundle-plugin, which is most typically used to put an OSGi coat of paint on a project that isn't primarily geared towards OSGi.

I kind of want a middle ground, though. A project like the NSF ODP Tooling has grown into a sprawling hydra, with heads for Maven, Eclipse, Domino, and now Visual Studio Code. While that works, putting Tycho at the front of it ends up feeling needlessly proscriptive, and I'd love to toss it aside. However, I was blocked in my desire by a small thing: though Eclipse publishes their core bundles on Maven nowadays, Wild Web Developer is currently p2-only.

The Project

So I set out to solve this problem for myself and learn something in the process. As indicated by the <layout>p2</layout> option on the repository above, Maven's repository system is intended to be extensible. Unfortunately, it seems like it hasn't been extended particularly often in practice, and most of what I could find about it was that it's possible to do, but only via references to people saying that one could.

Fortunately, it turns out that it's actually not too difficult to implement after all, and I did just that.

What this Maven plugin does is allow you to specify p2 repositories in any old Maven project, using the ID of the repository you add as the Group ID of dependencies and then the Symbolic Name as the artifact ID. Now, with the plugin added to the project, I'm able to reference the p2-only Wild Web Developer artifact I need:

<repositories>
	<repository>
		<id>org.eclipse.wildwebdeveloper</id>
		<url>https://download.eclipse.org/wildwebdeveloper/releases/0.8.0/</url>
		<layout>p2</layout>
	</repository>
</repositories>

<dependencies>
	<dependency>
		<groupId>org.eclipse.wildwebdeveloper</groupId>
		<artifactId>org.eclipse.wildwebdeveloper.xml</artifactId>
		<version>[0.5.0,)</version>
	</dependency>
</dependencies>

And, just like that, the bundle and its explicit dependencies show up in my Maven Dependencies group in Eclipse:

p2 Maven Dependencies in Eclipse

As a side bonus, this obviates the need for the mavenizeBundles half of the generate-domino-update-site project, since now I can just reference the generated site directly and get the dependencies, including with better behavior for embedded Jars than I had there:

com.ibm.xsp.core Dependency in Eclipse

The Tiny Details

I think that this plugin is about where it needs to be to suit my needs, but there is still an array of tiny details I've yet to contend with. It'll never be quite a perfect match for an arbitrary OSGi bundle (though some also contain useful Maven metadata), and so there will always be rough edges with something like this, but I think that it will solve a lot of headaches I'd otherwise have to deal with down the line.

If you think it'd be useful for your projects, take a look and let me know if you run into any trouble.

Winter Project #1: XPages LSP4XML Extension

Dec 27, 2019, 4:27 PM

Tags: nsfodp xml xpages

The last couple weeks of the year are always a good time to work on some side projects or small utilities to scratch an itch, and this year definitely ended up that way, seeing me work on a couple interesting things over this Christmas week.

The Project

The first of these is an enhancement to the NSF ODP Tooling project. Among the various components that I've put in there over time is an Eclipse content assistance plugin that provides some autocomplete capabilities when working with XPages and Custom Controls within an ODP. Specifically, it knows about the stock and ExtLib controls that ship with Domino, as well as any Custom Controls within the same project, and it allows Eclipse to provide completion suggestions. The code in there is actually hairier than you might think, and that's because it's building on top of Eclipse's generic text completion system, with only a little assistance from an existing implementation I cribbed. Still, it does the job.

The Problem

However, most of my XPages development nowadays takes place first outside Domino and then only ends up back on Domino when I want to make sure it works.

The trouble there is that the content assistance plugin I wrote is tied to both the ODP nature (an Eclipse-ism meaning that it knows a given project is associated with a set of capabilities) and the specific layout of an on-disk project.

The Quick Route

I originally set out to just loosen up that association a bit - take the existing plugin and allow it to work with any .xsp file and to try to find custom controls in a more webapp-type layout of a project.

However, I figured this was a good opportunity to look beyond that. Though the plugin does indeed do what I want, the trouble is that it's thoroughly tied to Eclipse specifically. All of the classes use Eclipse core and XML tooling classes extensively and none of that would be portable to any other IDE. I figured this would be a perfect time to jump into the world of the Language Server Protocol.

Aside: What is LSP?

The Language Server Protocol is a standard for a way to provide IDE-type support in a way that's not dependent on any specific IDE. It grew out of Visual Studio Code and is gradually seeping its way across the whole development landscape.

It allows for the specifics of handling a language - checking validity, resolving classes and other entities, identifying keywords, and so forth - to be separated from the IDE used for editing it. Using LSP, if an IDE wants to support editing, for example, JavaScript, its creator no longer needs to create and maintain a tool to handle all of the intricacies and changing rules of the language - instead, it can bring in the LSP implementation for it and then focus just on the specifics of what the IDE does differently from others.

Additionally, this decoupling means that the LSP implementation doesn't even have to be written in the same language as the IDE using it, and is often just written in the same language that it implements. If you look across the list of implementations, you can see that the Swift server is written in Swift, the Ruby one in Ruby, and the Java one is actually Eclipse's Java tooling extracted from the IDE.

Wild Web Developer

Since Eclipse long predates LSP, it's historically had its own implementations for any languages it supports. Though it's had enough clout to support a lot of languages, some of them have trailed behind. Like, really far behind. Prime among these have been its web-language-related editors, which do an okay-enough job editing basic HTML, CSS, and JavaScript, but pretty much missed the boat on newer features, transpiled languages like TypeScript, and project structures like NPM. While there have long been plugins for Angular and TypeScript, they never fully kept up and the whole thing ended up falling far, far behind other IDEs like VS Code.

Enter Wild Web Developer, the whimsically-named project to bring the fruits of the LSP development to bolster Eclipse's web-tech support. Though it's named for its web language implementations, what it really is is a combination of two things: a generic text editor backed by a small array of LSP implementations and a syntax-coloring system derived from TextMate, which itself became a pseudo-standard for syntax coloring.

LSP4XML

That brings us to the piece that ties together LSPs and my immediate desires: LSP4XML, the most-popular XML Language Server implementation, which is used by both VS Code and Eclipse, and just so happens to be written in Java and is designed to be extended.

Since LSP4XML is so smoothly extensible and Wild Web Developer just added a way to contribute these extensions in Eclipse, that meant I could accomplish what I want without having to worry about writing a whole LSP implementation just to support XSP and DXL.

XSP Completion Participant

Contributing to an LSP4XML server involves creating an extension class that then registers the individual capabilities you want to provide.

In this case, I contributed an ICompletionParticipant implementation. ICompletionParticipant has a delightfully-straightforward API, and all you have to do is provide tag, content, and attribute suggestions based on the context the user's cursor is in.

With this simpler API, I was able to significantly refactor down my earlier implementation, making it much more readable and focused.

DXL Schema Contributor

The other piece of XML completion that I added to the NSF ODP Tooling was to provide the (blessedly-redistributable) DXL schemas that ship with Domino to Eclipse. Unlike the XSP completion assistant, this plugin is entirely code-free, consisting solely of the schema file itself and an extension contribution in plugin.xml. The reason this works is that each DXL file declares its XML namespace at the top of the file, and so I can tell Eclipse to look for the schema file I'm providing when editing DXL.

LSP4XML also provides a way to provide schema files, but it's a little more complicated, involving a resolver class implementation. The idea is the same, though, mapping the namespace to the DTD file.

In both cases, the standardized and descriptive nature of XML schemas means that merely providing them to the IDE allows for all sorts of code assistance, even down to the level of suggesting and validating attribute values. It's pretty great.

Side Benefits

Making this switch to LSP4XML accomplished my original goal: by changing the XSP handling in the NSF ODP Tooling for Eclipse, I switched over to the Wild Web Developer editor and got editing in .xsp files anywhere (and a bit snappier to boot, since it's inherently heavily multithreaded).

But, like I mentioned, Language Servers are used across IDEs, most notably Visual Studio Code. Thanks to VS Code's equivalent LSP4XML extension mechanism, I was able to contribute the same extensions used for Eclipse there, and get the same type of results. That's a far cry from being able to get all of the NSF ODP Tooling capabilities outside of Eclipse, but it's a big start.

The Next Version

Currently, these additions are just in the develop branch of the tooling and haven't made their way to a proper release yet, but they've proven themselves so far in my use. My hope is to make a few more improvements, get the VS Code extension into shape, and make it part of a "3.0" release of the Tooling.

Small Aside: Writing Agents With Java 5+ Features

Nov 26, 2019, 10:46 AM

Tags: designer java

The topic of using Java 5+ language features in agents came up recently in the OpenNTF Slack room (use this invitation link if you haven't already joined!), and I think that it's one of those topics that's worth making a post about for posterity's sake.

For historical and compatibility reasons, the default compiler language level for newly-created agents, at least in the absence of the JavaCompilerTarget notes.ini setting, is Java 1.3:

Java Agent Compiler Properties

This is laughably out-of-date, and doesn't even support 15-year-old features like generics. Accordingly, if you take a piece of code from my last post, you'll end up with a couple errors:

Java agent compilation problems

It recognizes the Java 7+ classes, since it's still backed by a Java 8 JDK, but it complains about try-with-resources and the use of varargs.
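
For reference, the sort of code that trips it up is along these lines (paraphrasing the temp-file example from that post, not the exact code):

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public void writeTempFile() throws IOException {
  // Files.createTempFile uses varargs and try-with-resources is Java 7+,
  // both of which the 1.3 compiler level rejects
  Path tmp = Files.createTempFile("example", ".tmp");
  try(BufferedWriter w = Files.newBufferedWriter(tmp, StandardCharsets.UTF_8)) {
    w.write("hello");
  }
}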

Setting the Compiler Level

The fix for this is to change the compiler compliance level for your agent project. Depending on the type of problem, you may be given an Eclipse quick fix option if you hover over the red-underlined text:

Eclipse quick fix

If you don't have that option (for example, there's no such option for the Files.createTempFile vararg problem), you can alternatively go to the "Package Explorer" view, find the temporary project for the agent (named something like "foo.nsf.Some Agent.ja"), and go to Properties. In there, you can go to the "Java Compiler" settings, make sure the "Compiler compliance level" is the level you want, and then check "Use default compliance settings":

Project compiler at 1.8

When you do this and next save the agent, you'll be greeted with this Designer-specific message box:

Java compiler backward compatibility warning

This is good to see, since it shows that Designer picked up the change and will write it into the agent note in items named $JavaCompilerSource and $JavaCompilerTarget. That's the part that really counts, since it's what Designer uses to re-construct the Eclipse project in the future.

Note, though, that the title of the message box is a reminder about an important aspect: if you set the target compiler level to 1.7 or above, then your agent will not run on Domino servers before 9.0.1 FP8. So, if you're using one of those for some reason, take care about the language features you're using. The other breakpoints, if you're working with some really-legacy servers, are Java 1.4 being added in 7, 1.5 added in 8, and 1.6 added in 8.5.

Dealing With Errors

Depending on your Notes version (I suspect this problem cropped up in 9.0.1FP10 and seems to be gone in the V11 beta), changing your compiler level may result in "forbidden reference" errors for basic things like the lotus.domino classes.

If you hit this, the quickest fix is to modify your settings in Designer's preferences, in the "Java" → "Compiler" → "Errors/Warnings" section:

Setting to ignore forbidden references

If you set "Forbidden reference (access rules)" in the "Deprecated and restricted API" section to "Ignore", the problems should go away. I'm not actually sure why this problem crops up (perhaps something to do with the structure of Notes.jar), but it was fairly consistent for me for a while.

With all that set, though, you should be free to use up to Java 8 features in agents (and web services, probably) to your heart's content.

Writing a Java NIO Filesystem, Part 1

Nov 25, 2019, 10:09 AM

Tags: java java-nio
  1. Writing a Java NIO Filesystem, Part 1

My recent project, the NSF SFTP File Store, consists of two main parts: the SFTP server itself (powered by Apache Mina) and a Java NIO filesystem implementation. I casually mentioned the latter in my introductory post, but I think it's an interesting topic that warrants a post or two of its own.

Introduction to NIO

The term "Java NIO" refers to the non-blocking IO package added in two parts across Java 6 and 7 and can be considered a refresh of previous capabilities in a similar way to the Collections API in Java 2 replaced the handful of original collection classes.

The initial (and larger) part, added in Java 1.4, brought better capabilities for dealing with all sorts of "byte stuff": buffers of arbitrary types, smoother character-set handling, more-flexible streams, and so forth. I dealt with these constructs a while ago with the "nsfdata" package in ODA. In that case, it proved very useful for dealing with in-memory representations of Notes Composite Data, which is best dealt with as a navigable array of differently-sized structures.

The java.nio.file package was added in Java 7 and is a refresh of the older java.io.File/java.io.FileOutputStream/etc. system. It interacts well with the previous NIO stuff, but can actually be thought of as a distinct thing, despite its shared naming. This package changes around the mechanics of the original filesystem API, and I remember finding it kind of grating when I first encountered it. It's all in service of being more flexible, though, and it's one of those things where repeated exposure ended up making the older API feel weird and wrong.

Using the NIO Filesystem API

Before I get to the actual implementation of a backing filesystem, I think it'll make sense to show some examples of using the NIO filesystem API, especially since these can (and should) be used in any Java-based application today.

When the new classes were added, the older File class was augmented with a method to convert it to a java.nio.file.Path and vice-versa:

File foo = new File("/some/path/to/file");
Path fooPath = foo.toPath();
File fooFile = fooPath.toFile(); // for older-API interoperability

The Path interface is a very-lightweight and implementation-neutral representation of a path. Unlike the File class, it doesn't have methods for actually interacting with the filesystem. In fact, pretty much all you can do with it specifically is get other Paths based on it:

Path foo = Paths.get("/some/path/to/file");
Path parent = foo.getParent();   // "/some/path/to"
Path child = foo.resolve("bar"); // "/some/path/to/bar"

The primary way you use Path objects is via the Files utility class, which provides not only the methods you may be familiar with from File but also some additional ones like getting input and output streams:

Path foo = Files.createTempFile("test", ".tmp");
try(OutputStream os = Files.newOutputStream(foo, StandardOpenOption.TRUNCATE_EXISTING)) {
  os.write("hello".getBytes());
}
try(BufferedReader r = Files.newBufferedReader(foo)) {
  System.out.println("file contains " + r.readLine());
}

Here, you can see a couple things going on:

  • Files.newOutputStream replaces FileOutputStream, and so the object you assign it to should be declared as just OutputStream.
  • Many of the Files methods take zero or more open/move/etc. options, and they can be a little odd at first. The method parameters are declared as e.g. OpenOption, but the actual options are available in the StandardOpenOption enum. This is to allow for arbitrary extensions for custom filesystems. For example, you might write a custom option to force creating a backup version of the file before writing.
  • I'm using the try-with-resources syntax from Java 7 here. That isn't actually related to NIO except in that it came in the same release, but it's great and you should use it.

The Files class also contains methods for listing files and walking file trees, which operate with Streams (as of Java 8) and callbacks. For example, this bit from the NSF ODP Tooling walks a file tree using the callback method and stores it in a ZIP file:

try(OutputStream fos = Files.newOutputStream(result)) {
  try(ZipOutputStream zos = new ZipOutputStream(fos, StandardCharsets.UTF_8)) {
    zos.setLevel(Deflater.BEST_COMPRESSION);
    Files.walkFileTree(path, new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
        if(attrs.isRegularFile()) {
          Path relativePath = path.relativize(file);
          String unixPath = StreamSupport.stream(relativePath.spliterator(), false).map(String::valueOf).collect(Collectors.joining("/")); //$NON-NLS-1$
          ZipEntry entry = new ZipEntry(unixPath);
          zos.putNextEntry(entry);
          Files.copy(file, zos);
        }
        return FileVisitResult.CONTINUE;
      }
    });
  }
}

Next Up: Implementation

In my next post, I'll get into some specifics of implementing a filesystem using this framework as well as some of the implications of how flexible and straightforward it is.

Options for the Future of the Domino Open Liberty Runtime

Nov 18, 2019, 10:57 AM

Tags: java liberty

I've been thinking lately about what I can do with the Open Liberty Runtime project. If you're not familiar with it, the gist of it is that it gloms the Open Liberty Java/Jakarta EE app server onto Domino to allow deploying up-to-date WAR-based apps "to Domino", sharing the server's access to NSFs.

In its current form, it's serving me reasonably well: this blog is running on it, for example, and writing the SFTP server to target it has been a delight. I've been tracking a set of issues about it, mostly relating to improving the administration/deployment side in ways that would help my current use. There are some more-out-there ideas like running apps from NSFs, but for the most part it's been focused on "a JEE server attached to Domino".

Slightly-Tweaked Model

But what I've been thinking about lately is tweaking the model a bit to be less like "a new HTTP runtime" and more like a series of related servers, which would then be proxied to. This is in line with what Liberty has been trending towards anyway. Though it started out as essentially a cleaned-up WebSphere monolith, the fact that it's been targeted at cloud-service use has meant that it's been aggressively tuned to allow you to include only the specific features you need, with the idea that there will be many instances of Liberty running, one per app. This is also seen in their relentless push for faster startup times, something that only matters if you plan to start up instances of the server very frequently.

My Runtime project technically supports this model currently. Though it uses a single Liberty runtime download, the main Servers list is geared towards creating multiple running servers with their own app sets, which end up being distinct processes:

Open Liberty Admin Servers view

So one could hypothetically put dozens of servers in here and use it as I've been describing. Other than providing that capability, though, the tooling doesn't really help you much: in particular, it would do nothing to alleviate the problems of conflicting port mappings or of presenting all of these apps as a single front end. I did a little work making a reverse-proxy webapp, but it's really meant for the case of having one Liberty server with all of your apps, and wouldn't have any meaning in a many-server setup.

One option I've been considering here is giving the server side of this some knowledge of and control over the HTTP ports used by the individual Liberty servers, so that it could assign random ones automatically. Then, I'd also be able to add on a dynamically-configured reverse proxy that would know about these different running servers and could route requests automatically.
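
As a rough sketch of what I mean (none of this exists in the project today), the admin side could ask the OS for a free port to hand to each new Liberty server before writing out its configuration:

// Ask the OS for an unused ephemeral port (there's a small race window before the real server binds it)
int httpPort;
try(ServerSocket probe = new ServerSocket(0)) {
  httpPort = probe.getLocalPort();
}
// The chosen port would then go into that server's server.xml httpEndpoint,
// and the reverse proxy's routing table would be updated to match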

Other Runtimes

On top of that, there's nothing about the project that's really inherently tied to Open Liberty as such. It currently assumes that that's the runtime, but the only way it interacts with it is by pouring files out to the filesystem and then running some known shell scripts. There's nothing stopping such a setup from also gaining a bit of knowledge about, say, Node, Swift, or any number of other runtimes. Things would get progressively weirder the further you get from Java when you want to access the Domino runtime, but hey, that's what the C API is for!

Reinventing the Wheel

While mulling this over, though, I did realize one thing: I'm essentially describing a jankier version of Kubernetes. For the most part, the problem of running disparate apps with their own runtimes and needs managed by a central server is solved by that and Docker. This project would be a bit different in that the apps would automatically inherit a Domino runtime and would also (usefully) maintain clustered/replicated configuration by way of the Domino server backbone, so it's not exactly the same thing. And, while HCL is pushing for a Dockerized future, the pieces aren't there yet and won't fully be for a while: Domino-on-Docker is technically a thing, but "many Notes-runtime apps with many server IDs" for NRPC use is a licensing minefield, and the gRPC bindings are currently painfully limited in both capability and language support.

So I'd likely be best off letting time (which is to say, HCL) solve the fancier problems and focusing instead on the Runtime's original job of being a better Java app server for Domino. I do think it would be useful to better support the one-server-per-app approach, especially for a hypothetical case of wanting to deploy an XPages app in one Liberty server but then a JSP or JSF app in another, and that'll take a better reverse proxy.

I think it's good to mull over, though. This already provides a good path to better app dev with Domino, and smoothing that out more will make my life all the better and will hopefully be useful to others.

Quick-Tip Thursday: Avoid Future Base64 Trouble In Java

Nov 14, 2019, 9:35 AM

Tags: java sntt

TL;DR: Don't use com.ibm.misc.BASE64Encoder or com.ibm.misc.BASE64Decoder.

If you've been doing Java development for a while, you've likely run into a situation where you need to Base64-encode or -decode something. It's used commonly in MIME documents, data URIs, and various other areas typically related to data transfer.

Unfortunately, using Base64 in Java was awkward for a long time, with no natural choice in the core JDK until Java 8. The trouble over the years, though, is that earlier releases did have unofficial options lying around, and it became somewhat common in the community to use sun.misc.BASE64Encoder for this need. The problem with that is that sun.* packages aren't part of the supported Java API and can't be relied upon to exist in every Java implementation. Unfortunately, a combination of a lack of enforcement of this in the tooling and the fact that those classes are in practice pretty universal has let them creep into Java code.

On the Domino side, we have a second problem: com.ibm.misc.BASE64Encoder.

Sidebar: The Different Flavors of Java

For the most part, setting aside Android, it's safe to think of "Java" as a monolithic entity, where having a JVM of a given version number on one system will be equivalent to the same on another. There are subtle differences, though, and several vendors have long maintained their own variants of Sun/Oracle's JVM (which is named HotSpot).

IBM has maintained one such variant, called J9, and infused all of their Java-using products, Domino included, with it. J9 has since been open-sourced and donated to Eclipse and, renamed to OpenJ9, has carved out a niche for itself by being particularly lean and speedy to spin up.

The Snag We Hit With J9

For the most part, the differences in JVM flavors don't matter to developers, but IBM played a bit of a trick on us by making duplicates of some sun.* classes under the com.ibm.* package. Unlike sun.*, which at least has a little community knowledge of being internal-only (and also just looks odd compared to standard classes), com.ibm.* has no such connotation, and there's nothing in Designer or the classes themselves to indicate that com.ibm.misc.BASE64Encoder (internal and non-portable) is any different from, say, com.ibm.commons.util.io.base64.Base64 (portable via IBM Commons).

So, over the years, use of that class has crept into XPages and Java agent code - it does the job and it's always been safe to assume that Domino-the-IBM-product would use J9-the-IBM-JVM. Domino isn't an IBM product anymore, however, and, to add to potential future trouble, even OpenJ9 builds from AdoptOpenJDK don't come with those com.ibm.misc.* classes.

The upshot is that, if your Java code were to run on a JVM other than a fully-IBM-style one (whether intentionally or otherwise), you'd be liable to run into trouble with code that casually used those JVM-internal classes.

The Options

If you're running a version of Domino using Java 8 (9.0.1FP8+), which you'd darn well better be at this point, you're in luck: the built-in java.util.Base64 class has a nice API and is guaranteed to be present on every JVM. This is your best bet, since it doesn't require any other dependencies beyond the core JDK and it has some nice features like variant encoders for MIME- and URL-safe use.
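
For illustration, basic use looks something like this, including the URL-safe and MIME variant encoders:

byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
String encoded = Base64.getEncoder().encodeToString(data);
byte[] decoded = Base64.getDecoder().decode(encoded);
// Variant encoders for other situations
String urlSafe = Base64.getUrlEncoder().encodeToString(data);
String mime = Base64.getMimeEncoder().encodeToString(data);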

If you're targeting a version of Domino that still uses Java 6 (don't), you do have one other safe-enough option that's available in both XPages and Java agents: javax.xml.bind.DatatypeConverter. This class is technically part of the JAX-B specification but is included in JVMs from version 6 through version 10. This is less of a clean choice than the java.util.Base64 class both because it's not as nice of an API and because use on Java 11+ will require bringing in an external dependency. Still, if you're stuck on an old release, it'll at least get you through any potential rough patches in the near future.
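
For illustration, its use looks something like this (a bit clunkier, but it gets the job done):

String encoded = javax.xml.bind.DatatypeConverter.printBase64Binary("hello".getBytes());
byte[] decoded = javax.xml.bind.DatatypeConverter.parseBase64Binary(encoded);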

Finally, in XPages, you have the aforementioned com.ibm.commons.util.io.base64.Base64 class, which will continue to be present due to being included in a base library of the XPages stack, but which is rendered obsolete by java.util.Base64.

So I recommend taking some time to look through any running Java code you have for this potential hiccup and making the switch. It's admittedly a little awkward to do, but it's still a good idea.