Java Hiccups

  • Nov 7, 2018

To take a break from the doom-and-gloom of my last post, I figured it'd be good to dust off a post idea I've had in my drafts for a while: common hiccups that Java developers - particularly those coming from a Domino background - run into. This is sort of a grab bag of non-obvious concepts that are easy to make incorrect assumptions about, whether because of the way other languages work or because of the behavior of the lotus.domino API specifically.

So, roughly in order of complexity:

import Is Just For Cleanliness

In many languages, in particular C/C++/Objective-C, the natural equivalent of Java's import statement has a massive effect, physically grafting files into your source. In Java, though, import is really just for developer convenience. At runtime, there's no difference between having written this:

import java.util.ArrayList;
import java.util.List;

/* snip */
List<String> foo = new ArrayList<>();

...or this:

java.util.List<java.lang.String> foo = new java.util.ArrayList<>();

If you import a class but never use it in a file, it won't have any effect on the runtime behavior of the class. It's just used by the compiler to resolve what you mean when you use a bare class name without full qualification.

Incidentally, as seen here, classes within the java.lang package (but not subpackages) are auto-imported, so it's as if each Java file has an invisible import java.lang.* at the top.

Compilation Doesn't Bake In Libraries

This is related, and is also an area where Java differs from some other environments. With C et al, you have the option to statically link referenced external libraries - which is to say, grab their contents at build time and put them into your compiled result such that they may as well be part of your program. Java doesn't do this: every time you reference a class or method, it's really just storing the equivalent of the string name of that class, which is then resolved at runtime.

This is why it's very easy to run into a ClassNotFoundException: you can compile code with some classes present on the class path, but then run it in a system where they're not present. The Java runtime doesn't pre-check whether all the required classes are available when it starts running, so you only find out when it hits that line of code.

Different Java-based environments deal with this in different ways. Standard (non-Domino) web apps deal with this by including dependency jars inside the WEB-INF/lib folder during the packaging phase of building. OSGi, XPages's framework of choice, has a whole dependency mechanism where you can specify bundles or packages with version ranges, in the hope of bringing some order to the chaos, with mixed success.

Primitives Are A Thing

Though the term "primitive" generally means any built-in data type, here I'm specifically talking about things like int and double. For historical and performance reasons, Java has a conceptual and practical distinction between the objects that you deal with most of the time and the primitive types used mostly for number storage. Namely: byte, char, short, int, long, float, double, and boolean. Unlike object references, there is no concept of a "null" value with these that would cause a NullPointerException. A variable of one of these types will always contain some value if the code compiles, even if it's just the 0/false default for an object property that's not otherwise initialized.

Each of these types has a corresponding "boxed" object version, generally with the un-abbreviated name capitalized, such as Byte and Integer.
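To illustrate those defaults with a made-up class (primitive fields get zero-values, while boxed types are ordinary object references):

public class Defaults {
  int count;       // primitive: defaults to 0
  boolean active;  // primitive: defaults to false
  Integer boxed;   // boxed object reference: defaults to null
}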

The distinction used to be harsher than it is today, thanks to autoboxing. Autoboxing is a compile-time behavior that will automatically convert between the primitive types and their object holders as necessary, allowing this type of code, which would otherwise be illegal:

Object i = 3;
int j = new Integer(4);

Autoboxing is mildly inefficient, so it's good to know that it exists, but you don't normally need to lose sleep over it.
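One way it can surprise you, though: boxed values are objects, so == compares references rather than values, and the JVM caches small boxed integers. A quick demonstration:

Integer a = 127, b = 127;
System.out.println(a == b); // true - small values come from a shared cache
Integer c = 128, d = 128;
System.out.println(c == d); // false by default - distinct objects; use equals() instead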

All Object Variables Are Pointers

In a language like C++, there is a distinction between a variable that "is" an object vs. one that is a pointer to an object somewhere in memory. In Java, however, the former doesn't exist: an object variable is only ever a pointer. This has a couple of implications. For one, this code deals with only one object:

SomeClass foo = new SomeClass();
SomeClass bar = foo;
bar.setName("hi");
foo.setName("hello");
bar.getName(); // Will be "hello"

This is also why Java is picky about not referencing object variables until they've been initialized to at least something, so this generates a compile-time error:

Object foo;
foo.getClass();

Unfortunately, unlike some languages, Java has no language-level support for enforcing the distinction between a null object reference and a non-null one, which is why NullPointerExceptions are so prevalent.

Another implication of this leads into its own hiccup common to LotusScript programmers:

Strictly Speaking, All Method Arguments Are "By Value", But...

All method parameters in Java are "by value" in the LotusScript sense, but the fact that all object variables are pointers means that the "value" you're passing to the method for an object parameter is always a reference. Java has no mechanism to pass a reference to a primitive type, nor does it have a mechanism to implicitly duplicate an object when passing it to a method.

Not only is this a bit conceptually confusing at first, but it's also a potential trap for bad programming practices. It's very easy to write a method that performs modifications on objects passed in as parameters, and this is often the right thing to do. However, since the language doesn't have any syntax mechanism for broadcasting this behavior, it's up to you as the programmer to either write the method name in such a way that it's obvious what's going to happen or clearly state it in the documentation if it's something that's going to be used outside the current file.
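To make that concrete, using the hypothetical SomeClass from above: reassigning a parameter inside a method has no effect on the caller, but mutating the object it points to does:

public void rename(SomeClass obj) {
  obj.setName("modified"); // mutates the caller's object: visible outside
  obj = new SomeClass();   // reassigns only this method's copy of the reference
  obj.setName("lost");     // invisible to the caller
}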

Casting Objects Doesn't Do Anything

By "casting", I'm referring to something like this:

RichTextItem body = (RichTextItem)doc.getFirstItem("SomeItem");

The (RichTextItem) is a cast, and it's yet another area that diverges from some other languages. What casting an object in Java means is that you're going to refer to an object by a different class or interface than the one it's been previously referred to as. It has some runtime implications, but the thing to keep in mind is that it's about choosing the name for something that exists as opposed to changing an object into a different class.

So, for example, the RichTextItem idiom above exists because the Document#getFirstItem method is declared as returning an Item, but, when the item in the document is rich text, it will actually return a RichTextItem object. RichTextItem is a subclass of Item, and so it's legal to refer to such an object as RichTextItem, Item, Base (the common interface for all Notes objects), or Object (the common superclass of all objects). In a situation like this, you have to cast the object because you're going to refer to it as a more-specific type of object than the one the method says it returns.

If you do this and the object is not actually of the type you're trying to cast it to (in this case, if it's a plain text item or MIME, most commonly), you'll end up with a ClassCastException, because the cast is enforced at runtime. But, succeed or fail, the cast will not actually affect the object itself in any way - it will continue on being whatever it was already, regardless of name.
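When you're not sure what you'll get back, the usual defensive pattern is to check the type before casting - a sketch using the same lotus.domino idiom:

Item item = doc.getFirstItem("SomeItem");
if (item instanceof RichTextItem) {
  RichTextItem body = (RichTextItem)item;
  // safe to use rich-text-specific methods here
}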

Casting Primitives Does Do Something

For better or for worse, performing a cast on a primitive type does have the possibility of creating a new value. For example:

int foo = Integer.MAX_VALUE;
short bar = (short)foo;

Because int can hold more data than a short, this case creates a new value based on chopping off the highest-value bits of the internal binary representation of foo. (As a side note, because of the fun way computers deal with numbers, foo is 2147483647, while bar is -1.)

Normally, this behavior doesn't matter too much, since, if you have a method that takes, say, an int and you have a long, you can safely cast it down since it'll likely be a tiny value anyway. It's important to know that it can happen, though, and this behavior is very important when, for example, dealing with native C libraries that use unsigned values, which do not exist in Java as such.
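For example, a common idiom when a native API hands back a byte that it meant as unsigned is to mask away the sign extension:

byte raw = (byte)0xFF;     // the C side meant 255, but Java reads it as -1
int unsigned = raw & 0xFF; // masking recovers the unsigned value: 255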

Java Has Only A Limited Concept of Immutability

"Immutability" refers to the inability to change the value of an entity once it's created. It's come to the fore as a concept recently because working with immutable objects sidesteps a lot of issues with asynchronous programming. Java, unfortunately, doesn't really have any language-level support for immutable objects in the sense that, for example, Swift does.

Java throws a bit of a curve ball in this area with the final keyword, which means that a variable can't be reassigned after being first initialized. This means that you can't do things like this:

final int foo = 3;
foo = 4; // compiler error

final SomeClass bar = new SomeClass();
bar = new SomeClass(); // compiler error

This, on the other hand, is entirely legal:

final SomeClass foo = new SomeClass();
foo.setName("hi");
foo.setName("hello");

This is because the only thing blocked from changing here is the value of foo-the-reference, but the object it's referencing can be changed at will.

An object can be made effectively immutable, though, by means of making its outward-facing methods not change any of the internal state. This is used commonly for "value" classes, such as the aforementioned Integer. Though the language doesn't do anything to guarantee that the Integer class doesn't allow mutation, the class is written in such a way that it has no inlet for it.

Because of the value of immutable objects, they're used commonly in the core Java classes and in third-party libraries, particularly newer ones. However, since the language can't tell you if an object is immutable, you have to be on the lookout for whether a given method modifies the existing object in-place or returns a new object reflecting the change. This comes up frequently with Strings, which are immutable in Java. This is something I've seen commonly:

String foo = " hello ";
foo.trim();
System.out.println(foo);

That code will print " hello ", with the leading and trailing spaces (though without the quotes). This is because the String#trim method, like all "changing" methods on String, leaves the original value intact but returns a new String object reflecting the expected value.
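The fix is to capture the returned object, most often by reassigning the same variable:

String foo = " hello ";
foo = foo.trim();        // trim() returns a new String; keep the result
System.out.println(foo); // "hello"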

This is just something you have to be on the lookout for, especially since this pattern isn't even consistently applied within the core Java classes. The Date class, for example, is infamously bad in a lot of ways, and one of those ways is that it has mutation methods.

Generics In Java Are Weird

A "generic" refers in this case to a class that is declared as being associated with one or more other types that can be defined after the fact. The prototypical example of this is a collection class, like List<String> foo. In this case, the List interface is generic and lets you specify the type of object you expect to find within it, in this case String.

Generics, unfortunately, were added after Java's initial release, and they bear the marks of it. Unlike languages like C++, Java generics are largely syntactic sugar, meant to replace things like:

String someString = (String)aListIKnowHasStrings.get(0);

...with this:

String someString = aListDeclaredWithStrings.get(0);

However, under the covers, a List only ever really knows it contains Objects, and the second form just transparently shims in a (String) cast at runtime. That's why you can do something like this:

((List<Object>)(List<?>)aListDeclaredWithStrings).add(new NotAStringObject());

That line will not only compile, but it will execute without issue at runtime. It's only later, when you try to extract the value to a String variable, that you'll hit a ClassCastException.
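Continuing the hypothetical names from above, the deferred failure looks like:

String someString = aListDeclaredWithStrings.get(aListDeclaredWithStrings.size() - 1); // ClassCastException here, not at add time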

Some generic information is retained at runtime, depending on how it's used, but for the most part it's best to think of it as just a syntax nicety. This behavior is endless trouble, but something we have to live with.

Garbage Collection Is Automatic, But Resource Management Isn't Necessarily

This is one of the main things that bites Domino developers as they learn about Java. One of the early things they learn when switching from LotusScript to Java is that now you have to worry about the .recycle() method on your objects, or else you'll have trouble. This leads to two misapprehensions: that Java in general requires "recycling" for every object, and that recycling with Domino objects is about memory in the same way that a Java OutOfMemoryError is.

Unfortunately, the reason that recycle() exists at all requires delving into some nitty-gritty aspects of the Java environment, but I first want to reinforce that Java uses automatic garbage collection at all times to watch for and delete objects that are no longer used. That "no longer used" bit glosses over some complexity, but take this as an example:

public String foo() {
  String a = "hello";
  String b = " there";
  return a + b;
}
public void bar() {
  String message = foo();
  System.out.println(message);
}

There are three objects in action here, but, by the time the code reaches the System.out.println line, a and b are no longer used and will be slated for automatic garbage collection. You as a programmer do not need to worry about them.

The lotus.domino objects, though, are trouble. I think it's best to not think of recycle() in terms of "memory" but instead think of the objects as "open resources", in the same way that you might open a network connection or a stream to a file on the filesystem. Unlike object memory, network resources are not necessarily automatically closed by Java - there are some affordances with syntax and the concept of "finalizers", but, in general, the responsibility for closing a resource lies with the programmer.

There are a few grab-bag notes to do with these objects:

  • Different lotus.domino objects refer to different kinds of backing resources, which is why problems will sometimes manifest as complaints about memory (when they refer primarily to C-side structures in Domino's native memory, separate from Java) and sometimes as complaints about handles (database and document references, generally)
  • Recycling a "parent" object recycles all of its children, but the relationship is not always clear. Importantly, DateTime objects are children of the ancestor Session, even when you retrieve them from a Document, and so they can linger for a long time
  • Agents and XPages both mitigate and conceal the need for recycling by automatically closing the auto-generated Session(s) at the end of the agent execution or page request. In practice, you only really need to worry about recycling if you're, for example, looping over a large view (see the sketch below)
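As for that last point, here's the classic recycling pattern for walking a large view with the stock lotus.domino API (exception handling elided):

ViewEntryCollection entries = view.getAllEntries();
ViewEntry entry = entries.getFirstEntry();
while (entry != null) {
  // ...process the entry...
  ViewEntry next = entries.getNextEntry(entry);
  entry.recycle(); // release the backend resources before moving on
  entry = next;
}
entries.recycle();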

I may make another one of these posts in the future, and hopefully this goes a little way to clearing up some common misconceptions.

How Do You Solve a Problem Like XPages?

  • Nov 2, 2018

(Fair warning: this is a meandering one and I'm basically a wet blanket the whole way through)

Last week, HCL held the third of their Twitter-based developer Q&As, with this one focusing on XPages and Designer. The majority of the questions (including, admittedly, all of mine) were along the lines of either "can we get some improvements in the Java/XPages stack?" or "is XPages still supported?". The answer to the latter from HCL, as it would have to be, is that XPages is still alive and "fully supported".

I don't doubt at all that XPages is supported in the sense that it has been for the last couple years: if you as a customer encounter a bug in the platform, support will take your call and will most likely either have a workaround or will get a fix in. This will no doubt happen naturally as time marches on, primarily when a new version of a browser breaks something in the version of Dojo that XPages uses (currently, I believe, a couple notches down the list as 1.9.7). So, that's good, and is better than the worst-case scenario.

It's not great, though. Version 10 had essentially no changes for XPages - that makes sense with its "stanch the bleeding" market goal, but it continues the history of very little progress. Outside of the forced-for-security-reasons bump to Java 8 in 9.0.1FP8 (which is admittedly nice), the last major addition to XPages was the Bluemix tooling via the Extension Library, which, as far as I can tell, only exists because of the way funding politics works inside IBM. Before that, I'd mark it as the promotion of Bootstrap renderers from Bootstrap4XPages to the main ExtLib just shy of four years ago. Before that, it was... I guess adding the ExtLib to the main product in 9.0, which sort of counts. Before that, it was pretty much the introduction of the extension points and ExtLib in the 8.5.2 era. And sessionAsSigner, I suppose.

Not to belabor the point too much further, XPages developers are in an uncomfortable spot. For a decade or so now, XPages was the clear "this is what Domino developers should be doing" choice, especially in the face of many Domino shops wanting to at the very least get rid of the Notes client. However, though it was modern enough when it was introduced, the stack has missed the boat on a lot of evolution, both in simple terms of its Java EE common ancestor improving on its own and in larger terms of changes in the web development world.

Had XPages continued under real active development, it could have gradually improved to fit more comfortably in a world of transpiled JavaScript and CSS pre-processors, strict focus on REST APIs, and reactive and streaming APIs.

But...

But even in that "active development" alternate universe, the path would have been awkward. Though XPages has the capacity to be well-structured, Designer and IBM provided no help on this: no model framework, no internal routing, not even an indication that writing Java code was possible in an XPages app until several versions in. The "MVC" aspects of its JSF components were beaten down to match the expectations of an NSF container and of LotusScript developers. There's very good reason for that - very few Domino shops were likely to send developers to computer-science boot camps to learn about proper Java EE structure, and the only way XPages was going to work at all was if it started out as "forms but with partial refresh".

And, after that first unstructured version, it largely fell prey to the usual problems of enterprise software: IBM isn't in the business of doing things their customers aren't asking for, and the community members asking for, say, a faces-config.xml editor weren't backing those requests up with big licensing checks. I get that, too, I really do. If you spend too much time hypothesizing about and implementing what customers might want, you run the risk of throwing your money down a bottomless hole while your actual customers suffer and leave. So IBM generally swings hard in the other direction, and it's understandable if unfortunate in cases like this, especially when what your customers are strictly asking for is stasis.

So What Now?

Our current situation now is that HCL plans to have a roadmap in Q1 2019, so I suppose we'll wait and see what they say again. I've been mulling over what I think should be the way forward for XPages and its users, and I don't see any clear good solution.

The option that immediately springs to mind is "add more features to it": HTTP/2 and WebSockets, newer renderers, an IDE that encourages good development practices, a better way to bring in third-party libraries, cleaner JavaScript, and so forth. But what makes this questionable for me is the sheer amount of work, especially since they'd have to start by digging out of an immense amount of technical debt. And this would be very specialized work indeed - the XPages stack is complicated. I'd wager that just the part of the stack that handles ferrying attachments between the browser and a document is more complex than most entire Domino applications by a good margin. While some of the improvements would be handled via the core Domino server team (part of the HTTP upgrades, namely), HCL would likely have to acquire a team of Java developers and have them learn a giant stack of OSGi, JSF, Domino APIs, and a couple decades of legacy decisions before even getting going. Possible? Certainly. It's just kind of a hard sell.

Another possibility would be open-sourcing the stack and either maintaining it as a project themselves, giving it to OpenNTF, or handing it to an organization like Eclipse, which now holds the reins of Java/Jakarta EE generally. I think that open-sourcing it would have immediate benefits to XPages developers regardless of what else they do with it, but I'm skeptical of how much of a life it would have if it was converted to a community-run project. As it stands, I can only think of a handful of people who a) are aware of XPages and b) would be capable of contributing to its core code. That's not a problem if you are employing people to work on it full-time, but I don't think it has a large enough base to exist on a "side project" basis. Attracting new blood would be an uphill battle: even projects like Andmore that have a clear purpose for existing and a contingent of people desperate to keep their workflow can wither on the vine immediately. Outside of Domino developers, XPages would be viewed as "JSF, except old and restricted to a platform you thought died in the 90s".

The other main thing to do with the stack that doesn't involve killing it, I think, would be pushing it to a state that focuses on REST APIs instead of handling the UI itself, which is something that some XPages devs have been doing already, either by switching to plugins serving up JAX-RS services or via in-NSF controls. This is something that IBM kind-of-sort-of said they were aiming for a couple years ago when they put forward SmartNSF as a good option, but the effective demise of the Extension Library cut off its path into the core, at least for now. Overall, I think that this would make sense. The experience of writing REST services inside an NSF (or in an OSGi plugin) is significantly worse than JAX-RS in a normal Java EE or Spring app, but it provides a clear path for existing NSFs and code to continue being used - more or less - without having to set up a second app server. It may not be the preferred solution for Java in Domino, but it would have the advantage of improving the platform without having to worry anymore about how, say, all the core renderers use tables for layout.

For Developers

For XPages developers, my long-proffered advice remains the same: learn things that aren't XPages. Whether that means diving head-first into Java EE or Spring, focusing on client-JS development, learning Node, or any other option, you'll likely be well-served by it.

It's possible that you could continue to do XPages work or go back to the Notes client indefinitely - XPages will get patches if nothing else, and Nomad breathes undeserved life into LotusScript - but there's no guarantee there. There's no guarantee anywhere else, I suppose, but staying too tied to Domino-specific technology keeps you at immediate risk of a from-above directive to switch away.

In short: do not expect a cavalry to ride to your aid.

AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10

  • Oct 19, 2018

Since 9.0.1 FP10, and including V10 because it's largely identical for this purpose, I've encountered and seen others encountering a couple strange problems with compiling XPages projects. This is a perfect opportunity for me to spin a tale about the undergirding frameworks, but I'll start out with the immediate symptoms and their fixes.

The Symptoms

There are three broad categories of problems I've seen:

  • "AbstractCompiledPage cannot be resolved to a type"
  • Missing third-party XPages libraries, such as ODA, resulting in messages like "The import org.openntf cannot be resolved"
  • Complaints about MANIFEST.MF, like "MANIFEST.MF has no main section" and others

The first two are usually directly related and share a fix, though the second can also have other causes; the last is entirely distinct.

Fix #1: The Target Platform

The first two are based on problems in the active Target Platform, namely one or both of the standard platform components going missing. The upshot is that you want your Target Platform preferences to look something like this:

Working Target Platform

There should be a selected platform (the name doesn't matter, but "Running Platform" is the default name) with entries at least for ${eclipse_home} and for a directory inside your Notes data dir, here C:\Notes\Data\workspace\applications\eclipse. If they're missing, modify an existing platform or create a new one and add an "Installation"-type entry for ${eclipse_home} and a "Directory"-type one for the eclipse directory within your data dir.

Fix #2: Broken Plugins, Particularly ODA

Though V10 didn't change much when it comes to XPages, there are a few small differences. One in particular bit ODA: we had a dependency on the com.ibm.domino.commons plugin, which was in the standard Notes environment previously but is not as of V10 (though it's still present on the server). We fixed that one in the V10 release, and so you should update your ODA version if you hit this trouble. I don't think I've seen other plugins with this issue in the V10 transition, but it's a possibility if Fix #1 doesn't do it.

Fix #3: MANIFEST.MF

This one barely qualifies as a "fix", but it worked for me: if you see Designer complaining about MANIFEST.MF, you can usually beat it into submission by cleaning/rebuilding the project in question. The trouble is that Designer is, for some reason, skipping a step of the XPages compilation process, and cleaning usually kicks it into gear.

I've also seen others have success by deleting the error entry in the Errors view (which is actually a thing that you can do) and never seeing it again. I suspect that the real fix here is the same as above: during the next build, Designer creates the file properly and it goes away on its own.

The Causes

So what are the sources of these problems? The root reason is that Designer is a sprawling mountain of code, built on ancient frameworks and maintained by a diminished development team, but the immediate causes have to do with OSGi.

The first type of trouble - the target platform - most likely has to do with a change in the way Eclipse manages target platforms (look at the same prefs screen in 9.0.1 stock and you'll see it's quite different), and I suspect that there's a bug in the code that migrates between the two formats, possibly due to the dramatic age difference in the underlying Eclipse versions.

The second type of trouble - the MANIFEST.MF - is due to a behind-the-scenes switch in how Designer (and maybe the server) handles dependencies in XPages projects.

Target Platforms

The mechanism that OSGi projects - such as XPages applications - use for determining their dependencies at build time is the notion of a "Target Platform". The "target" refers to the notion that this is the platform that is expected to be available at runtime for what you're building - loosely equivalent to a basic Java classpath. An OSGi project is checked against this Target Platform to determine which classes are available based on their bundle names and versions.

This is distinct from the related concept of a "Running Platform". Designer, being based on Eclipse, is itself built on and runs using OSGi. Internally, it uses the same mechanisms that an XPages application does to determine what plugins it knows about and what services those plugins provide.

This distinction has historically been hidden from XPages developers due to the way the default Target Platform is set up, pointing at the same Running Platform it's using. So Designer itself has the core XPages plugins running, and it also exposes them to XPages applications as the Target. Similarly, the way we install XPages Libraries like ODA is to install them outright into the Designer Running Platform. This allows Designer to know about the library service provided, which it uses to populate the list of available plugins in the Xsp Properties editors.

However, as our trouble demonstrates, they're not inherently the same thing. In standalone OSGi development in Eclipse, it's often useful to have a Target Platform distinct from the Running Platform - such as the XPages environment for plugins - to ensure that you only depend on plugins that will be available at runtime. But when the two diverge in Designer, you end up with situations like this, where Designer-the-application knows about the XPages runtime and plugins and constructs an XPages project and translates XSP to Java using them, but then the compilation process with its empty Target Platform has no idea how to actually compile the generated code.

MANIFEST.MF

I've mentioned that an OSGi project "determines its dependencies" out of the Target Platform, but didn't mention the way it does that. The specific mechanism has changed over time (which is the source of our trouble), but the idea is that, in addition to the Java classes and resources, an OSGi bundle (or plugin) has a file that declares the names of the plugins it needs, including potentially a version range. So, for example, a plugin might say "I need org.apache.httpcomponents.httpclient at least version 4.5, but not 5.0 or higher". The compiler uses the Target Platform to find a matching plugin to compile the code, and the runtime environment (Domino in our case) does the same with its Running Platform when loading.
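Jumping ahead slightly to the MANIFEST.MF syntax covered below, that hypothetical httpclient requirement would be declared like this:

Require-Bundle: org.apache.httpcomponents.httpclient;bundle-version="[4.5.0,5.0.0)"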

(Side note: you can also specify Java packages to include from any plugin instead of specific plugin names, but Designer does not do that, so it's not important for this purpose.)

(Other side note: this distinction comes, I believe, from Eclipse's switch from its own mechanism to OSGi in its 3.0 release, but I use "OSGi" to cover the general concept here.)

The old way to do this was in a file called "plugin.xml". If you look inside any XPages application in Package Explorer, you'll see this file and the contents will look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin class="plugin.Activator"
  id="Galatea2dVCC_2fIKSG_dev5csyncagent_nsf" name="Domino Designer"
  provider="TODO" version="1.0.0">
  <requires>
    <!--AUTOGEN-START-BUILDER: Automatically generated by null. Do not modify.-->
    <import plugin="org.eclipse.ui"/>
    <import plugin="org.eclipse.core.runtime"/>
    <import optional="true" plugin="com.ibm.commons"/>
    <import optional="true" plugin="com.ibm.commons.xml"/>
    <import optional="true" plugin="com.ibm.commons.vfs"/>
    <import optional="true" plugin="com.ibm.jscript"/>
    <import optional="true" plugin="com.ibm.designer.runtime.directory"/>
    <import optional="true" plugin="com.ibm.designer.runtime"/>
    <import optional="true" plugin="com.ibm.xsp.core"/>
    <import optional="true" plugin="com.ibm.xsp.extsn"/>
    <import optional="true" plugin="com.ibm.xsp.designer"/>
    <import optional="true" plugin="com.ibm.xsp.domino"/>
    <import optional="true" plugin="com.ibm.notes.java.api"/>
    <import optional="true" plugin="com.ibm.xsp.rcp"/>
    <import optional="true" plugin="org.openntf.domino.xsp"/>
    <!--AUTOGEN-END-BUILDER: End of automatically generated section-->
  </requires>
</plugin>

You can see it here declaring a name for the pseudo-plugin that "is" the XPages application (oddly, "Domino Designer"), a couple other metadata bits, and, most importantly, the list of required plugins. This is the list that Designer historically (and maybe still; it's not clear) uses to populate the "Plug-in Dependencies" section in the Package Explorer view. It trawls through the Target Platform, finds a matching version of each of the named plugins (the latest version, since these have no specified ranges), adds it to the list, and recursively does the same for any re-exported dependencies of those plugins. "Re-exported" isn't exposed here as a concept, but it is a distinction in normal OSGi plugins.

Designer derives its starting points here from implicit required libraries in XPages applications (all those "org.eclipse" and "com.ibm" ones above) as well as through the special mechanism of XspLibrary extension contributions from plugins installed in the Running Platform. This is why a plugin like ODA has to be installed in Designer itself: in the runtime, it asks its plugins if they have any XspLibrary classes and uses those to determine the third-party plugin to load. Here, ODA declares that its library needs org.openntf.domino.xsp, so Designer adds that and its re-exported dependencies to the Plug-in Dependencies group.

With its switch to OSGi in the 3.x series circa 2005, most of the functionality of plugin.xml moved to a file called "META-INF/MANIFEST.MF". This starkly-named file is a standard part of Java, and OSGi extends it to include bundle/plugin metadata and dependency declarations. As of 9.0.1 FP10, Designer also generates one of these (or is supposed to) when assembling the XPages project. For the same project, it looks like this:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Domino Designer
Bundle-SymbolicName: Galatea2dVCC_2fIKSG_dev5csyncagent_nsf;singleton:=true
Bundle-Version: 1.0.0
Bundle-Vendor: TODO
Require-Bundle: org.eclipse.ui,
  org.eclipse.core.runtime,
  com.ibm.commons,
  com.ibm.commons.xml,
  com.ibm.commons.vfs,
  com.ibm.jscript,
  com.ibm.designer.runtime.directory,
  com.ibm.designer.runtime,
  com.ibm.xsp.core,
  com.ibm.xsp.extsn,
  com.ibm.xsp.designer,
  com.ibm.xsp.domino,
  com.ibm.notes.java.api,
  com.ibm.xsp.rcp,
  org.openntf.domino.xsp
Eclipse-LazyStart: false

You can see much of the same information (though oddly not the Activator class) here, switched to the new format. This matches what you'll work with in normal OSGi plugins. For Eclipse/Equinox-targeted plugins, like XPages libraries, plugin.xml still exists, but it's reduced to just declaring extension points and contributions, and no longer includes dependency or name information.

Eclipse had moved to full OSGi by the time of Designer's pre-FP10 basis (2008's 3.4 Ganymede), but XPages's history goes back further, so I guess that the old-style Eclipse plugin.xml route is a relic of that. For a good while, Eclipse worked with the older-style plugins without batting an eye. FP10 brought a move to 2016's Eclipse 4.6 Neon, though, and I'm guessing that Eclipse dropped the backwards compatibility somewhere in the intervening eight years, so the XPages build process had to be adapted to generate both the older plugin.xml files for backwards compatibility as well as the newer MANIFEST.MF files.

I can't tell what the cause is, but, sometimes, Designer fails to populate the contents of this file. It might have something to do with the order of the builders in the internal Eclipse project file or some inner exception that manifests as an incomplete build. Regardless, doing a project clean usually jogs Designer into doing its job.

Conclusion

The mix of layering a virtual Eclipse project over an NSF, the intricacies of OSGi, and IBM's general desire to insulate XPages developers from the black magic behind the scenes leads to any number of opportunities for bugs like these to crop up. Honestly, it's impressive that the whole thing holds together as well as it does. Even though it doesn't seem like it to look at the user-visible changes, the framework changes in FP10 were massive, and it's not at all surprising that things like this would crop up. It's just a little unfortunate that the fixes are in no way obvious unless you've been stewing in this stuff for years.

Domino 10 for Developers

  • Oct 9, 2018

So Domino 10 is upon us, marking the first time in a good while that Domino has had an honest-to-goodness version bump.

More than anything, I think V10 is about that sort of mark. Its primary role in the world is to state "Domino isn't dead" - not exactly coming from a position of strength for the platform, but it's the critical message that HCL has to sell if they're going to be viewed as anything but coroners.

Still, in addition to merely existing, V10 brings some changes that will help developers, particularly those - sadly - maintaining large legacy applications.

DQL

The addition that will have the largest immediate impact on developing codebases is, I think, DQL. I went into a bit of detail on this before and I think that that post is largely accurate, but the general gist of it is that DQL can be thought of as "database.search(...) but good", bringing practical arbitrary queries of non-FT data to Domino.

In its current form, it feels like a long-back-burnered passion project that's implemented in an effective way, bringing some of the benefits of arbitrary queries in SQL and new-era NoSQL databases without having to rewrite NIF or NSF storage.

Update: As Karsten Lehmann kindly pointed out, DQL is slated for addition to the LS/Java classes in 10.0.1, at an as-yet-unspecified time.

HTTP Methods in LotusScript

I just let out a heavy sigh after writing that header, but I get why they added these. A lot of Domino developers never left the desiccated-but-comforting embrace of LotusScript or are employed primarily to maintain Notes client apps, where using Java is possible but involves jumping over hurdles.

Network operations have been possible for a long time via OLE (on Windows) or LS2J, and I'm sure I'm not the only one who's had a "Network" LS script library sitting around for over a decade, but having baked-in methods is preferable. Moreover, neither of those mechanisms would work on iOS without a lot of additional work.

This is being billed as enabling all sorts of integrations, which I suppose is strictly true in that it's a bit easier to call HTTP methods in old code now. In practice, I think it will be mostly helpful for the little one-off situations where you have to call some web service to integrate with a product tracking app or the like.

iPad Notes Client

This definitely seems like another back-burner project that was brought to the fore in the HCL transition. They had iOS references in the Mac 64-bit C SDK years ago, and it only makes sense, since the existence of the Mac port at all meant the job was (sort of) half done. It's not out today, but it's logically tied to V10, and they've been expanding a beta over the summer.

Like the additions to LotusScript, it makes sense. I can't imagine that running existing Notes apps on an iPad will be a good experience, but it should be a cheap one, and it'll probably be good enough for at least some cases. They've intimated that there will be affordances in app design to improve the experience specifically for this client, though I don't envy the engineers who have to go in and implement those.

Node.js Support

Dubbed the "App Dev Pack", Node support will be coming in an Upgrade-Pack-like additional download, in the form of a Domino server addon to add a gRPC server combined with a domino-db Node module that I gather is designed to be familiar for Node+MongoDB stack users.

When this intention was first announced, I think that a lot of Domino developers figured it would be like XPages: a new design element or two added to the NSF, plus another runtime crammed into Domino's aging HTTP stack. The other big potential option was essentially a codification of the ExtLib DAS REST services into a wrapper package to be used in standalone Node apps.

The App Dev Pack is more the latter than the former, but the use of gRPC should make it more performant and flexible than just wrapping the existing HTTP services. I'll be curious to see how this shakes out in practice. XPages has been with us for a decade, but it still only captured a slice of the Domino development market, and it carried the advantage of being bundled right into the stack. Node is a very different beast, entirely unlike traditional Domino development, and I'm not sure how many existing Domino developers will make the transition. Ostensibly, one of the main benefits is to also attract new blood, which - well, maybe.

Having this is much better than not, and the notion of having a new RPC connection that doesn't have the local runtime requirements of NRPC is tantalizing.

Overall

Overall, this definitely feels like a very pragmatic release. Just by virtue of its existence, it covers the base of "Domino isn't dead" in a way that's much better than the older mealy-mouthed messaging of "well, we don't have specific plans to cancel it". Additionally, though, the fact that most of the developer-facing improvements are for "old world" design elements is an acknowledgement that XPages didn't capture the Domino development world (and, probably, that HCL didn't hire the XPages team). The prospect of the community crawling back into the LotusScript cradle isn't great, but there's no avoiding the fact that there are a great many developers who never had a reason to do anything different. Not many cost-cutting IT departments let their developers re-learn their entire skillset when other departments are just asking for a new button on a form.

In an alternate universe, this would have made for a fine "Domino 9.5" release, but the wringer that the 9.0.1 era put us through demanded a full major version bump. I'll be curious to see how Domino 11 and so on shape up. If the "not dead" push works and it turns Domino's fortunes in the market around at all, it would give HCL room to turn it into a real platform again. That's a big "if", since it's a lot easier to get existing Domino developers excited than it is to get IT purchasers to sign the licensing checks, but time will tell.

SNTT: Designer Target Platform

  • Sep 27, 2018

When working on an XPages project, my development environment is generally set up like so:

  1. Eclipse running in the Mac environment editing Maven-structured plugins and ODPs
  2. Domino running in a local Windows VM, set to run plugins out of the Mac Eclipse workspace
  3. Designer running in the same Windows VM to compile the ODP and work with legacy elements
  4. Firefox DE running in the Mac environment

This has proven to be a fairly comfortable setup, particularly ever since Serdar added the ability to use a remote workspace to the XPages SDK. When I make a change in the plugin, I only need to restart HTTP on Domino and all is well.

But there's been one major annoyance: if I change a method or class that I use in in-NSF Java, I have to install the updated plugins and restart Designer, which is a big disruption to my flow. Even if I was using Eclipse on Windows with the old method of running Designer out of Eclipse (if that even still works), it'd still involve a restart.

Today, though, I realized that I've been doing it the dumb way all along. The reason is that I've been conflating the two aspects of an XPages Library plugin when it comes to Designer.

The first aspect is to tell Designer-the-IDE that an XPages Library named, say, org.example.XspLibrary is available for use in XPages applications, and that it's associated with the plugin org.example.plugin. This is provided by the plugin being installed into the running Designer environment, and absolutely needs a restart when it changes. Designer uses this information to compose the OSGi pseudo-project that makes up the NSF in the workspace (by adding it to the plugin.xml historically and the MANIFEST.MF in 9.0.1 FP10+).

The second aspect is that the plugin is then available in the Target Platform. The Target Platform is an OSGi-ism basically meaning the OSGi runtime's view of the world - in Eclipse, it means the plugins that Eclipse knows about and which can be referenced by projects, such as the NSF. This doesn't have to be related at all to the running platform - and, in fact, it's common in OSGi development to have an entirely-distinct target platform to properly represent a server or other runtime environment.

The reason that these aspects are commingled is that, by default, Designer is configured to have a default Target Platform that's based on its runtime environment and any installed plugins. We, as XPages developers, thus generally don't have to think too much about it. "Install the plugin in Designer" is the single step that handles registering the library and making its classes available.

However, the platform is just a setting in the preferences, and it usually looks something like this:

Default Designer Target Platform

Since it's just a setting, and one that is commonly modified in Eclipse, there's nothing stopping us from modifying it. That's where I realized I was doing it the inefficient way all along. So I modified the active platform to point to the target/site folder of my update site Maven project:

Modified Designer Target Platform

With that change, Designer will see both the installed version as well as the latest results of the Maven build. So now, I can do a Maven build and then, in this dialog, click "Reload..." to get Designer to notice the changes. Once I do, voilà - the new methods/classes show up and I don't have to restart and lose my workspace state.

App Dev After CollabSphere 2018

  • Jul 29, 2018

In recent years, MWLUG/CollabSphere has tended to be a good time to get a lay of the land for what IBM - and now HCL - intends for their app dev strategy. Recent Connects weren’t too heavy on announcements of major import for Domino developers, and any news to come out tends to do so in the months leading up to summer.

This year, we’ve had time to digest the implications of the HCL transfer, get a feel for how they intend to handle the product, and generally get a good bead on their app-dev vision. What they’ve said so far this year is clear: LotusScript for old apps on mobile platforms and Node.js for new development (or new developers). As far as XPages, I believe that the most time that it got at the conference was in my session, which was about what to do after XPages.

LotusScript

Though I’ve certainly not hidden how painful the prospect of enhancements to LotusScript is to me, I have to admit that adding a few capabilities for REST data service access makes strategic sense for the platform. Though XPages made a significant mark on Domino app dev, it never pushed aside the classic style, and every move that IBM made for app modernization since then seemed to exist exclusively in the span of the sentence announcing it.

So HCL announced early this year that they planned to port the classic Notes client first to iOS and then later to Android and WebGL+WebAssembly. Adding any kind of Java to this plan - XPages, LS2J, etc. - would present some technical hurdles, and so it makes workload sense to focus on the languages that have runtimes in the C core.

Apps run this way won’t be good, but there’s some logic to the tack of targeting customers for whom “modernization” only really means “we want our same old apps to run offline on new OSes”. Their plan to run on phones also necessitates some more-dramatic changes to the tooling, so it’s possible that they have larger changes in mind - or at least we’ll see a return of the “hide on mobile” checkboxes in Designer.

Node.js

The big HCL push for Node.js seems to me to be a way to get a lot of bang for the buck: by positioning it as the new way to write apps, they’re both (potentially) making Domino more appealing to those not already on the platform and guiding existing developers to a platform for which IBM and HCL are not responsible. Though the domino-db driver is no small technical feat - and it looks like they’ve done a good job making it both fast and native-feeling in Node - it’s a much, much smaller footprint than XPages, which put IBM on the hook for maintaining an entire app-dev stack and UI toolkit with limited outside assistance.

I do think that it’s smart to write a Node.js DB driver - even if it doesn’t bring in an influx of new blood, it provides a legitimate app-dev story and Node is a top-notch platform. The gRPC stack also provides an entryway for future hooks and development without the assumptions of NRPC.

Java

Java development on Domino is in a weird place. Domino 10 doesn’t have anything directly for XPages/OSGi developers, though we’ll get access to DGQF via the Database class. I’ve heard whispers that they’re starting to plan more for Domino 11, but that’s largely conjecture at this point. Certainly, HCL has made it clear that their heart isn’t in it, and honestly I get why. Since XPages has been in essentially maintenance mode since 9.0.1 or earlier, it’s aged itself out of contention for modern app dev. It wouldn’t be impossible to drag it forward to something respectable, but then they’d still have another development environment exclusive to Domino to maintain.

I’m not sure what the best thing to do with the stack is. Though XPages didn’t bring all Domino developers to it, it did bring a significant chunk, and a lot of people have spent upwards of a decade of their life with the toolkit. For my part, I think it makes a lot of sense to move to “normal” Java/Jakarta EE development, which provides the possibility of salvaging Java-side code, though it leaves XSP and SSJS in the lurch. It’s hard to make a good financial case for either significantly upgrading the platform or at least undoing the tight coupling with the Domino server that it accrued over the years, though I’ll admit it’s sort of fun to think about.

DGQF and DQL as I Understand Them

  • Jul 26, 2018

At CollabSphere this year, the big information coming from HCL was detail about the Domino General Query Facility (DGQF) and its associated language, Domino Query Language (DQL). They originally announced this a few weeks ago, but it was good to have had some time to let the dust settle and to see the specifics.

Because it was discussed alongside the domino-db Node.js package and because it's one of the first real new ways we'll interact with data in a Domino DB in a while, it's a bit difficult to identify just what it is and what it is not. Here's how I understood it:

What DGQF Is

DGQF is, at least conceptually, a "meta" layer on top of the existing NIF indexing facility. It doesn't provide a core change to the actual storage of documents, but instead treats existing view indexes as (roughly) analogous to both SQL table indexes and SQL views. It trawls through the design elements of a database to analyze their selection formulae and columns, using applicable ones as implicit indexes and also allowing access to arbitrary collections within queries.

Implicit Indexes

Other than the design collection and the "optimize document table" option in a DB, an NSF doesn't really have much in the way of indexing note contents by default. So, if you have a query asking for all documents where FirstName is Bob, a program has no choice but to look through every document for that key/value match. If, however, you create a view that has a column showing the FirstName field, you now have a much-faster index you can use. It's this sort of view that DGQF picks up on implicitly and uses to accelerate queries: views showing all documents, with either a default-sort or "click to sort" column that explicitly shows a field (and not a formula).

Access to Arbitrary Collection Data

For these qualifying views, plus others, you can reference a view by name or alias in a query and compare against a column value by its programmatic name (often the field name for simple columns, or something like $4 by default for formula columns).

"In" clauses

Additionally, you can use view (and folder, I think) names to refine queries for documents that are in one or more of these collections, equivalent to an "in" subquery or view reference in SQL.

What DQL Is

In short, DQL is the human-readable query language used to access DGQF. It's reasonably SQL-like (though it is not SQL) and tends to look like FirstName='Bob' and in all ('Managers', 'Active Users'). This is the language you will use, and so "DGQF" and "DQL" will generally refer to the same thing in practice.

In practice, this is implemented as a new method on the Database class in each high-level language supported by Domino, plus a Node-styled variant in domino-db.

What DGQF and DQL Are Not

Since DGQF sits on top of NIF (and probably the FT index eventually), it's not a core change to data storage. Fundamentally, the same abilities and limits of Domino remain as they are.

Additionally, DQL is, I believe, a query language only: it does not provide a mechanism for creating, modifying, or deleting documents. Instead, it is essentially a super-powered and much-smarter version of database.search(…): you can use it to find documents, and the processing of them is up to your program.

That last point was a bit muddied by its pairing with the domino-db Node.js package: the Node.js package provides bulk operations that are paired with DQL queries, but that is a function of that library specifically, not DQL or DGQF.

Why It's Cool

Though it's not a reworking of the core NSF, what DGQF does do is abstract away a lot of the manual looping and lookups that we've always had to do, and it allows the system to optimize and do things more efficiently than when written out procedurally. So, while there's theoretically nothing that DGQF does that we couldn't do before, it allows us to do those things with far, far less code and with automatic optimization.

This brings Domino something that SQL servers have enjoyed for a long time. With a SQL statement, you can analyze the trouble spots of a slow-running query and add indexes to improve the speed, with the tooling helping to explain what's going on. DGQF+DQL brings this along for the ride: when you execute a DQL query, you have the option to dump out this "explain" output to see what specifically the facility did, which views it used, and how long each step took. So, if you have a long-running query, you can look to see if you can add an "index" view to automatically speed it up without having to change your code. And, since the language is an abstraction over the task of querying and not the sort of "burned in" process of a normal getNextDocument loop, it can be optimized and short-circuited by the underlying system without the developer having to know the decades of built-up knowledge of how to efficiently search a DB.

All in all, this is a very welcome addition to the server, and it certainly should improve a lot of common tasks.

Reforming the Blog in Darwino, Part 4

  • Jul 20, 2018

Last time, I went over my switch in tack for how I'm making the new version of my blog, and my overall focus on picking an interesting stack of JEE technologies. In this post, I'm going to start diving into the implementation of the UI, though I think that it will make sense to dedicate two posts to it.

The biggest decision I made with the UI side of this app is that I didn't want to make a client-side JS app. There's a reason they're so ascendant, and I find development with React or Stencil pretty enjoyable, but I wanted to go a different route here for a few reasons:

  • For a blog, a CSJS app is wildly overkill, and, in fact, would require extra work to fulfill one of the basic requirements of a blog, which is being web-crawler friendly.
  • I want to see how svelte I can make the client payload.
  • Skipping a JS framework (and a CSS one) is a great way to brush up on what plain HTML and CSS are capable of nowadays.
  • Unlike a typical Darwino app, my only target is a full-on Java web server, so I'm not held back on the Java side by the capabilities, say, of Dalvik on Android 4.
  • Part of me misses the simplicity of my early PHP days, albeit not the language.

The Java Side

I decided to go with the MVC 1.0 draft spec because it lets me write extremely focused code. Here is the controller for the home page:

package controller;

import javax.inject.Inject;
import javax.mvc.Models;
import javax.mvc.annotation.Controller;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import model.PostRepository;

@Path("/")
@Controller
public class HomeController {
	@Inject
	Models models;
	
	@Inject
	PostRepository posts;
	
	@GET
	public String get() {
		models.put("posts", posts.homeList());
		
		return "home.jsp";
	}
}

Naturally, there's a lot of magic going on behind the scenes - JAX-RS, MVC, CDI, JNoSQL, and Darwino are all doing heavy lifting here - but that's the point. All the other components are off doing their jobs in their areas, while the code that provides the UI doesn't have to care about the specifics.

Things can get more complicated on the pages that actually have some functionality to them, but the code remains pleasantly focused. Take the handler for deleting posts:

@DELETE
@Path("{postId}")
@RolesAllowed("admin")
public String delete(@PathParam("postId") String postId) {
	Post post = posts.findByPostId(postId).orElseThrow(() -> new IllegalArgumentException("Unable to find post matching ID " + postId));
	posts.deleteById(post.getId());
	return "redirect:posts";
}

This adds another level of magic in the form of javax.annotation.security.RolesAllowed, but it's more of the good kind: even with no knowledge of the underlying frameworks, it's pretty clear what every bit of code is doing here. It's a refreshing bit of that Rails simplicity, just more compile-time type-safe and much uglier.

Even beyond the minimal code is the cleanliness that this brings to the structure of the application: other than the img, css, and js paths, all of the routing within the application is handled by JAX-RS and MVC. It's not beholden to the folder structure in the project or to a Domino-style implicit app router.

JSP

JSP has been the prototypical Java HTML language for about 20 years, and it's had a rough upbringing. The early versions committed the PHP/XPages sin of encouraging you to put business logic right on the page, and it even still has the typical Java problem that it's tricky to find advice about using it that uses technologies added since 2005.

Still, when used properly, it can be a nice, clean templating language. Again from the main home page:

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<t:layout>
	<c:forEach items="${posts}" var="post">
		<t:post value="${post}"/>
	</c:forEach>
</t:layout>

For an XPages developer, this is extremely comfortable. It's also very refreshingly elemental: there's no server-side persistence of the page, so everything is "load-time bound" and, with just HTML tags and core JSTL tags, nothing ends up on the page that you don't explicitly put there.

Ozark, the MVC implementation, also supports using JSF "Facelets" for the view portion, but JSP suits the task just fine.

HTML + CSS

It'd been far too long since I last really sat down and looked at what baseline HTML and CSS are like - in particular, I'd watched the rise of CSS Flexbox and Grid from afar, and never gave them a shot. Using components that generate their own HTML and pre-existing CSS frameworks to target with class names is all well and good, but it does leave you a bit disconnected from the fundamentals.

So, for this iteration, I tossed aside the very-nice Bootstrap framework I've been using, dusted off one of my old hand-built ones, and got to translating it into CSS Grid. This cut down on the page size enormously: I had already eschewed Dojo by not using XPages, but this now also meant that I could ditch the core bootstrap.css, jQuery, and any jQuery plugins.

Beyond CSS Grid, have you seen how nice HTML forms are nowadays? Just looking at this post reveals how much is built in, in the way of validation and different input types, even before you write a line of JavaScript.

Turbolinks

Having such a trimmed-down UI means that pages already load extremely quickly, but I figured this was also a perfect chance to try out a bit of clever tech from the team at Basecamp: Turbolinks. Turbolinks is a JS file that you bring into your app which then transparently takes over your in-app links to minimize the amount of rendering you have to do. Since the surrounding boilerplate of the app usually doesn't change between requests, it can figure out the "diff" between old and new and just replace the body. It's essentially like partial refreshes without the server knowing anything about it.

It's still technically inefficient to have the server render and transfer surrounding page elements that are just going to be discarded anyway. But, on the other hand, skipping that means that I don't have to write JavaScript handlers myself, use a full CSJS app framework, or keep state on the server side. The server just keeps doing what it does with a fully context-less request and the browser sorts it out. Basecamp's programmers are masters at the targeted deployment of kludges for maximum benefit.


In the next (final?) post in the series, I'll finish up with my "low-JS" experience and other lessons learned from this project.

Reforming the Blog in Darwino, Part 3

  • Jul 18, 2018

A good while back, I created a project structure for reforming my blog here in Darwino, but, as happens with low-priority side projects, it withered on the vine, untouched since then. Beyond just the "cobbler's children" aspect to it, I also lost steam due to a couple technology paths I initially headed down.

The first was basing the UI on Angular, which I've never really enjoyed working with. I'm sure I could have ended up with a decent result with it, but Angular always rubbed me the wrong way. And not just Angular: for a dead-simple UI like this, a full JS UI is just weird overkill.

The second was off in the other direction: I initially tried cramming a Rails app into the tree, which could be made to work, but it introduced so many weird edge cases outside of the problem at hand. That alone isn't the end of the world, but not much of what I'd have to solve to make that work would be transferable elsewhere any time soon, so it'd end up a real time sink.

So, taking what I've learned since and the projects that I've been working on, I've decided to take another swing at it. Before I get into the implementation side, it will be useful to go over the technologies I did choose for the new form.

Java/Jakarta EE

I've recently become kind of enamored with the modern form of the Jakarta EE stack, and so I decided to use this as an opportunity to really dive into what a blue-ocean, small, by-the-book Java app looks like nowadays.

JEE got a well-deserved bad rap over the years for its configuration complexity and general impenetrable-ness, but I've been very pleased to find that those tides have largely receded. It's all still there if you want it, but a fresh new app primarily consists of decorating a handful of Java classes with declarative annotations.

JEE consists of a series of individual specs, and building an app involves choosing which ones you want to use, plus (depending on which you choose) picking your app server target.

Tomcat

I originally gave a shot to adding enough OSGi metadata and bundles to target Domino, but decided quickly that it was just not worth it. The HTTP/servlet stack in Domino is just so old that, even if I got everything bound together, I'd still be fighting the platform every step of the way.

The better route was to put it aside and just run a modern Java app server. I went down the list of GlassFish, Payara, WebSphere Liberty (the nearest miss), TomEE, and WildFly, but each one ended up having some problem with either the dependencies I wanted or its Eclipse integration. I ended up settling on good ol' reliable Tomcat. Tomcat itself isn't actually a JEE server, but it's kind of like a Raspberry Pi: it gives you the baseline for a Java servlet engine, and then you can cobble together your own EE stack on top of it by explicitly bringing in implementations. Though the final .war file is far less svelte this way, I found that this build-your-own method currently results in the lowest chance of being held back by the platform.

As an aside, Sven Hasselbach has been writing a very interesting series on running Jetty on top of the Domino JVM to achieve a similar end, albeit with Spring.

Darwino

For all the same reasons as when I set out on this journey originally, I'm using Darwino for the baseline. This lets me replicate in my existing blog data smoothly while getting the advantages of a superior backing database. I'm not making use of mobile clients or most Darwino services with this, but the baseline is nonetheless a step up, and fits in with a JEE app like a glove.

JNoSQL

I brought in the JNoSQL Darwino driver I wrote a little while ago to handle the model layer. JNoSQL is essentially JPA but reformed for NoSQL access - no cruft, no relational/NoSQL impedance mismatch, and designed to fit with current JEE technologies.
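
As a rough sketch of what that looks like - hedged, since the annotation and interface packages moved around in the early JNoSQL releases, so consider the imports illustrative - the model layer amounts to an annotated entity plus a repository interface:

import java.util.Optional;

import org.jnosql.artemis.Column;
import org.jnosql.artemis.Entity;
import org.jnosql.artemis.Id;
import org.jnosql.artemis.Repository;

// Post.java - a document-mapped entity
@Entity
public class Post {
	@Id
	private String id;
	
	@Column
	private String postId;
	
	@Column
	private String title;
	
	// getters/setters...
}

// PostRepository.java - implemented by JNoSQL at runtime, with the
// query derived from the method name
public interface PostRepository extends Repository<Post, String> {
	Optional<Post> findByPostId(String postId);
}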

CDI

CDI is one such technology, and it's a very interesting one to work with. The whole "dependency injection" realm is a little fraught and, if my Eclipse UI error reporter is any indication, prone to bizarre errors, but the core concept is good and very useful. I've gotten into the swing of using it both as the "managed bean" provider for the front end and as the general service-provider glue for the app. It still takes some getting used to, and the learning curve falls prey to a similar problem as when I was learning Maven: something about learning how it works makes you forget what it was like to not know it, and so a lot of the answers online assume way more knowledge than a neophyte has.
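
As a small example of that dual role (the service class here is hypothetical, just to show the shape of it):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.inject.Named;

// PostFormatter.java - visible to EL as #{postFormatter} and
// injectable anywhere else in the app
@ApplicationScoped
@Named("postFormatter")
public class PostFormatter {
	public String summarize(String body) {
		return body.length() > 140 ? body.substring(0, 140) + "..." : body;
	}
}

// PostDisplay.java - elsewhere, CDI conjures and scopes the
// instance, with no factory or lookup code needed
public class PostDisplay {
	@Inject
	PostFormatter formatter;
}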

Bean Validation

I've long been a fan of the Java bean validation API, and it's a clean fit here too: JNoSQL picks up on the presence of Hibernate Validator without configuration beyond the dependency and it just works. No muss, no fuss.
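
The constraints go right on the model entity - here's the Post entity from above, trimmed down to show a couple of the standard javax.validation annotations:

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Post {
	@NotNull
	@Size(min = 1, max = 255)
	private String title;
	
	@NotNull
	private String body;
	
	// With Hibernate Validator on the classpath, JNoSQL checks these
	// constraints on save with no further configuration
}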

JAX-RS + MVC Spec

JAX-RS is at this point familiar territory for a lot of Domino developers, but I decided to use it as the underpinnings of the whole UI, in tandem with a draft framework called MVC 1.0. The latter's generic name doesn't really give much detail, but it's essentially a spec that enhances JAX-RS entities with knowledge of HTML templating frameworks, allowing you to write a very clear app structure. It's not a server-state-based framework like JSF, but rather a bit "closer to the metal", where you deal directly with the HTTP method cycle.

As I'll go more into in the "UI" post, it's been surprisingly refreshing to get back to basics in this way - JSF/XPages is often a bit conceptually easier to work with (at first) and client-side JS frameworks have some REST+JSON purity to them, but just "this server-rendered HTML page with no server state is everything you need" feels really good sometimes.

Admittedly, the MVC spec itself is in a weird place. It was originally a candidate for inclusion in Java EE 8, but was dropped in the final runup. It's possible that this will prove to be a kiss of death, but the spec is so small but functional that I don't feel bad about taking the risk of building an app on it.


That about covers the technology stack. When I get around to writing the next post, I'll go into some of the specifics about how I decided to set up the UI, which has been a fun experiment of its own. In the mean time, the active repository is up at:

https://github.com/jesse-gallagher/frostillic.us-Blog/tree/develop/frostillicus-blog

A (Java-Centric) Domino Wish List

  • Jul 12, 2018

Seeing the information come out of this week's HCL "Golden Ticket" event has got me thinking about some of my wish-list items for Domino development, mostly in the form of enhancements for existing capabilities and entirely around Java (since that's what I do).

Quality of Life

Javadoc

For some reason, the lotus.domino classes ship without Javadoc or even variable-name information, leading to editor trainwrecks where every method parameter shows up as an anonymous arg0, arg1, and so on.

Designer has its built-in help, which is also on the web, but that's quite a few steps down. This is table stakes for a Java API and always has been.

Updated p2 Repository

Back in 2014, the XPages team uploaded a clean p2 repository of the XPages artifacts to OpenNTF, corresponding with the 9.0.1 release. This repository saves a ton of hassle when building Tycho-based projects or just setting up an Eclipse workspace. However, it's quite long in the tooth, as there have been several Notes.jar additions not included in there and, in FP10, a significant upgrade to the undergirding OSGi framework.

I ended up writing a script to generate an updated version, but I don't have the legal ability to publish the results anywhere for easy consumption, meaning it has to be done manually and configured for each build environment. It would be a great convenience if there were an official package (ideally including the Designer plugins as well) and, even better, hosted on OpenNTF so that we could reference it by URL as we do for Eclipse releases (and require users to accept a license first).

Mavenized Repository

The p2 repository is good for Tycho-based projects, but, especially when targeting Domino is only one part of a project, it can be much more convenient to use "normal" Maven projects with maven-bundle-plugin. However, those projects can't use p2 repositories as such. For Darwino's needs, I ended up writing a tool in the (available-for-free) Darwino Studio plugins to Maven-ize a Domino p2 repository, but that hits the same snag as above of requiring manual setup in each instance.

This is another case where my preference would be on the OpenNTF Maven repository (plus Javadoc Jars, naturally).

Extension Library Source

The latest Extension Library release on OpenNTF is from FP9, while the latest on GitHub is from the FP7 era. FP10 shipped with a newer version of indeterminate nature. It'd be good to have this on both of those sites and, like with Javadoc, have source bundles shipped with the product in a way that is picked up automatically by Designer and Eclipse.

Source Bundles for Third-Party Components

The source for the undergirding Equinox stack is available, but it would be best to have, as an adjunct to the updated p2 repositories, the source bundles for the actual versions used so that we don't have to cobble together a platform from Eclipse's repositories.

Open-Source the Rest of the Stack

Having XPages, the Expeditor husk, and the other miscellaneous doodads that make up the proprietary layer as open source with an Apache-compatible license would cover a lot of the above and also be of tremendous use for XPages and non-XPages apps alike that run on or with Domino. I have a hard time imagining that it would lead to a lot of community-driven improvement, but it may do some (I'd have a few words to share with the file-download control, for example), and even just as a static release would be a significant boon.

Domino Connectivity in Eclipse

An idea I've been toying with lately is to make an Eclipse plugin that allows you to add Domino servers to the "Servers" view and control them to some extent. The basics would be to start/stop/restart HTTP, but the stretch goals would be to open a console view, get a list of running modules, integrate with the existing "load bundles from PDE" support, and, ideally, an outright "Run on Server" command for OSGi bundles and NSFs. However, I have so much on my plate that I'm not sure that I'll get to this any time soon unless I get a real itch some weekend.

Longevity

Refreshed JVMs

Feature Pack 8 brought Java 8, a vital step forward. However, since then, Oracle moved to a faster release cycle for Java and the JRE is now at version 10. Domino uses IBM's JVM variant, J9, which they recently moved to the Eclipse foundation as OpenJ9, where it has... sort of been keeping up, I think?

In any event, this increased pace of change has meant that the Java 8 honeymoon is over, and Domino development again requires special consideration when using current tools. I have no idea how complex the integration between Domino's tasks and the underlying JVM is, but my ideal would be to have constant or near-constant parity.

Servlet API 4.x

After the JRE version, the most important foundational element of a Java web app is the servlet API release. The current version is 4.0.1, while Domino supports 2.5 (or 2.4, maybe?). The good news is that the Java/Jakarta EE world seems to be used to lagging versions here, and 2.5 is a minimum version for a lot of current tech in much the same way that Java 6 was until somewhat recently, but there has been quite a bit added in recent years.
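
For a taste of what that gap means in practice, here's a hedged sketch (with a hypothetical HomeServlet) of Servlet 4.0's HTTP/2 server push, which has no equivalent on a 2.4/2.5-era stack:

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.PushBuilder;

public class HomeServlet extends HttpServlet {
	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
		// Servlet 4.0: proactively push assets alongside the main response
		PushBuilder push = req.newPushBuilder();
		if (push != null) { // null when the connection doesn't support push
			push.path("css/style.css").push();
		}
		
		resp.setContentType("text/html");
		resp.getWriter().println("<html>...</html>");
	}
}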

Presumably, a reason for the lag is the implied requirements of newer versions, such as WebSockets and HTTP/2 support, which would require heavy modifications to the core Domino HTTP code. Honestly, the more practical route is almost definitely to just use a different JEE server paired with CrossWorlds, some Java wrapper for the gRPC stuff HCL has been talking about, or (best of all) a Darwino app replicating with Domino, but still. WebSphere Liberty is actually really nice, by the way.

Refreshed Equinox

Like with the underlying JVM, the Feature Pack 10 update to a Neon-based OSGi/Equinox framework was a critical shot in the arm for the platform, but it is now also two major versions behind. This is a little less critical, since Equinox brings a bit less to the table for our needs and Neon is "new enough" for now, particularly on the server side, but it'd still be proper to keep pace.

Odds and Ends

Non-OSGi JEE Support

The Equinox framework that Domino uses is quite capable, but there's no pretending that OSGi-targeted development doesn't have its share of headaches. Most Java apps just target plain-old .war files and don't impose any particular requirements on the build process. Java development is a much more pleasant experience when you can just toss in any Maven dependency and not have to think about building a target platform for Eclipse or jumping through bundle-resolution hoops. I really like OSGi in theory, but I can't pretend that non-OSGi development isn't much smoother.

Domino technically supports "regular" servlets currently, but, uh, here's a snippet from the current documentation on that:

Hrm.

Full Extension Manager Support for Java

JAVADDIN/DOTS added a lot of EM hooks, but it doesn't cover the full suite of capabilities that a C addin can provide, such as authentication handling. Having this be fully accessible from Java would be useful even when treating Domino just as a data store and not as an app server.

I'm sure I could come up with more, but that's probably good for now. All easy, right?

NSF ODP Tooling 1.2

  • Jun 12, 2018

I've just published a new release of the NSF ODP Tooling, and this one is important by virtue of the fact that it covers enough bases for me to put it into production with my largest active XPages project.

Since the 1.0 release, I've added a couple important new Maven plugin options in addition to general bug fixes:

  • "compilerLevel": by default, it compiles to the Domino server's Java version (currently 1.8 in the minimum required configuration), but now it is possible to specify 1.6 to target older Domino releases for production
  • "appendTimestampToTitle": append a timestamp to the database title during compilation, which is useful to see when going to deploy the NTFs to production
  • "templateName": set a name to be used in the $TemplateBuild shared field, which is a nice bit of fit and finish when making a template. This also sets the version (based on the Maven project version) and build date fields
  • "setProductionXspOptions": to enable compressed JS and resource aggregation in the compiled NSF, useful to use the inefficient options for development/debugging but get better performance in deployment

I've also gradually improved the Eclipse side, though that can use a lot more work. Just having the in-NSF Java classpath working is a huge boon for development and refactoring, and it'd be great to eventually have tooling available to create and edit design elements with some proper knowledge of how they work, to keep the metadata in sync.

As it is, this project has been tremendously useful for me so far, easing a big burden - I can't tell you how much time I've lost switching branches between a release candidate and develop, trying to coax Designer into properly picking up the changed files and recompiling, and then prepping the NSFs for deployment (even with tools to aid me). With the ODPs wrapped in a Maven tree, I have Jenkins take care of all of that for me, and more reliably to boot.

Another Project: XPages Jakarta EE Support

  • Jun 3, 2018

In my dealings with JNoSQL recently, I’ve been delving more into the world of modern Jakarta EE/Java EE/J2EE development, particularly the magic land of CDI.

The JEE stack tends to be organized as a collection of specs and implementations, many of which are really independent of each other and the underlying platform, making them pretty portable onto any reasonably-recent JVM. Now that Domino is actually on a reasonably-recent JVM, that makes it a workable target! So I decided to create a side project to bring some of JEE to XPages.

XPages has always been “sort of Java EE” - you don’t really have the full stack, and it’s far behind on the components that it does have, but a lot of the concepts are there. Of particular interest are managed beans and expression language.

CDI and Managed Beans

The XPages stack contains what amounts to a primordial version of CDI. Since the release of XPages, JSF improved on the original faces-config.xml declaration method to add annotation-based declarations, and CDI is something of a codification and expansion of that into the full Java world.

My project uses the Weld reference implementation of CDI to create a CDI context for each XPages app that opts in, allowing it to use annotations on classes to declare beans and properties:

@ApplicationScoped
@Named // or @Named("applicationGuy")
public class ApplicationGuy {
    public String getFoo() {
        return "hello";
    }
}

These can then be used like normal managed beans in an XPage:

<xp:text value="#{applicationGuy.foo}"/>

The project’s README contains some further examples.

I went with the Java SE implementation of Weld instead of the pre-built servlet or OSGi packages since those are a little too smart for this use: they pick up on the fact that they’re in a JSF environment, but expect newer versions of the servlet spec and JSF.

Expression Language

Since its original release, EL went through a similar standardization process as CDI and is now at version 3.0 and is distinct from JSP and JSF. As anyone who has tried to call a method on a bean in EL has found out, the XPages EL implementation lags pretty far behind, at the JSF 1.0/1.1 level. Since that time, it sprouted parameters and “projection” and is essentially a tiny scripting language now.
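
To give a sense of how far it's come, EL 3.0 even includes a standalone processor you can run outside a page entirely, lambdas, collection literals, and stream operations included:

import javax.el.ELProcessor;

public class ElDemo {
	public static void main(String[] args) {
		ELProcessor el = new ELProcessor();
		
		// Filter a list literal with a lambda and sum the survivors
		Object result = el.eval("[1, 2, 3, 4].stream().filter(x -> x > 2).sum()");
		System.out.println(result); // 7
	}
}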

My project uses GlassFish’s EL implementation to outright replace the stock EL interpreter for apps making use of it. I added some affordances to IBM’s customized data support, so it’s intended as a drop-in replacement:

<xp:text value="${dataObjectExample.calculateFoo('some arg')}"/>

<xp:text value="#{el:requestGuy.hello()}"/> 

Note the “el:” prefix in the runtime-bound expression: that’s to get around Designer’s validation of runtime EL expressions.

So… Why?

That’s a good question! The first two reasons are “because it’s fun” and “to learn more about JEE”, but there’s also practical value for this sort of thing.

XPages is moribund, and that leaves Domino developers with a few options:

  • Go back to LotusScript. The iPad Notes client makes this a terrifyingly-practical option, but it’s soul death.
  • Go to JavaScript (or another platform). This is another route HCL is pushing, and it’s entirely valid: Node is a great platform with excellent support and momentum.
  • Go to modern Java.

For anyone who has invested a lot of time and brainpower in XPages over the years, that last one is particularly appealing, and projects like this can help you get there. If you have a large XPages code base, as I do with one of my clients, it makes a lot more sense to work on that in such a way that it gradually becomes less XPage-dependent while avoiding the trap of a full rewrite in another language.

Many of us have already done something of this sort: JAX-RS is another JEE standard, and the Wink implementation in the Extension Library, though also aging, accomplishes this same sort of task. Especially if your services don’t reference Wink explicitly and write just to the spec, they are very portable.

That portability - of code and skillset - is critical. Say you have a class like this:

import java.util.stream.Collectors;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/issues")
public class IssuesResource {
    @Inject IssueRepository issueRepository;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response get(@QueryParam("category") String category) {
        return Response.ok(
                issueRepository.find(category).stream()
                    .map(this::doSomething)
                    .skip(3)
                    .collect(Collectors.toList())
            ).build();
    }

    // ...
}

Which Java platform is that targeting? What’s the data storage mechanism? Who cares? This class certainly doesn’t. That could just as easily be Domino reading from an NSF or (as is actually the case in the example’s source) Tomcat with Darwino.

What’s Next?

Truthfully, maybe not much. Though JEE contains a whole raft of technologies, these two were the ones that scratch my immediate itch. We’ll see, though - the skill portability of erstwhile XPages developers is critically important, and I think that this is another one of the paths that can get us where we need to go.

NSF ODP Tooling Example Project

  • Apr 29, 2018

To go alongside the first proper release of my NSF ODP Tooling, I've added an example project to the Git repository:

https://github.com/OpenNTF/org.openntf.nsfodp/tree/master/example

This project demonstrates how to use the tooling to create an XPages library project and build an NSF that uses it within the same Maven tree. This example project also serves as a reasonable template for the standard kind of project setup I make for Domino nowadays, minus a compile-time test plugin (which I'll probably add in eventually).

Environment Setup

Before building the ODP, you'll need to set up a compilation server and configure Maven to know about it. To start out with, make sure you have a Notes-ified Maven environment as described here. Since the IBM-provided update site is quite old at this point, it may be worth updating it from your local installation.

Next, install the Domino plugins on a Domino server running at least 9.0.1 FP10. It's best to do this on a pristine server without non-standard plugins installed, since part of the compilation process is to load and unload the needed plugins for your project. For my needs, I set up a Linux VM and it's doing the job nicely. Once that's set up, configure your Maven settings.xml to reference your compiler server, merging these values in with the normal Notes properties:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <profiles>
        <profile>
            <id>main</id>
            <properties>
                <notes-platform>file:///Users/jesse/Documents/Java/IBM/Notes9.0.1fp10</notes-platform>
                <nsfodp.compiler.server>someserver</nsfodp.compiler.server>
                <nsfodp.compiler.serverUrl>http://some.server/</nsfodp.compiler.serverUrl>
            </properties>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>main</activeProfile>
    </activeProfiles>
    <servers>
        <server>
            <id>someserver</id>
            <username>builduser</username>
            <password>buildpassword</password>
        </server>
    </servers>
</settings>

Note the "server" block at the bottom to provide login credentials. The plugins require a non-anymous user - at the moment, they allow ANY non-anonymous user, though, so you can create a new user just for compilation purposes. It's also good practice to encrypt your server connection passwords.

There are also options for deploying the NSF, but, since that's less important for our needs, I'll leave that aside for now. The README has a bit more information about that.

The Example Project

The structure of the projects is similar to the one I detailed in my Java series a couple of years back, but it's evolved a bit since then. The main aspects of it of note are:

The Folder Structure

Lately, I've been following the advice from the Vogella blog about how to structure an OSGi project. In a case like this, where there is only one plugin, one feature, and one update site, it's overkill, but I've found it to be better to create this structure at the start so you don't end up with a big mess of projects serving different needs within the same flat folder.

The nsfodp-maven-plugin

The most pertinent addition is the ODP wrapped in a Maven project. In the nsfs/nsf-example folder, I created a pom.xml to configure a project of type domino-nsf, which expects by default the ODP to be in an odp folder immediately within it. The ODP in there is entirely normal: it's just a near-fresh NSF exported from Designer, with the main addition being the inclusion of the XPages Library.

The project's pom does a few things: it establishes the project as a domino-nsf project and adds some additional configuration to the nsfodp-maven-plugin, telling it to include the update site generated earlier in the build as part of the compilation process; this is what allows the NSF to build with the XPages library in the current project.

License Management

Since I've been making contributions for OpenNTF and particularly since taking over as IP manager, I've developed a much better appreciation for dotting my i's and crossing my t's when it comes to licensing. One of the tools I quite like for this is the license-maven-plugin from com.mycila (it's not the only one of that name, and any of them should do the job). This plugin allows you to specify a license header template to be added to the top of each of your source files, which is an otherwise-tedious task that's easy to neglect. Once I added that to the root pom and set up appropriate exclusions (definitely make sure to add exclusions for any third-party code!), I'm able to run mvn license:format from the root project and have it run through all the source files in the directory tree and add an appropriately-formatted license header. I've definitely made this a standard part of my project setup now.

Plugin Versioning and maven-enforcer-plugin

This one admittedly doesn't have that much impact on day-to-day development, but it's another good "keeping your ducks in a row" addition: I've taken to explicitly specifying the versions for my Maven plugins, even when the version is implied by the core Maven version. This "locking down" can reduce one cause of mysterious code breaking if an implicit inclusion is upgraded in a way that is incompatible with your build process.

My friend in this task is the versions-maven-plugin's versions:display-plugin-updates goal, which will look through your project tree and find plugins that have newer versions in the available repositories and also tell you what your implied plugin versions are from the super-pom. I use this information to explicitly enumerate the plugins and find updates - the "copy and paste this block of XML" nature of Maven means that it's very easy to end up running a plugin that's several major versions behind.

Alongside this, the goal will tell you what the minimum required Maven version is for everything you're doing, which I've taken to specifying in the old-style prerequisites block as well as via the newer-style maven-enforcer-plugin route. Laying out these requirements explicitly is another good way to avoid phantom problems. Note that, for Eclipse, it's good to add a m2e configuration block to ignore the enforcer line, since m2e doesn't know what to do with it.

The OpenNTF Artifactory Server

This isn't so much a new technique as it is a prerequisite for using the plugin: currently, the plugin is hosted on OpenNTF's Artifactory server and not Maven Central, so you'll have to add a pluginRepository block for it. Once you have that, you can add the nsfodp-maven-plugin to the root pom.

The Eclipse Tooling

Once you have a project configured in Maven, the next step is to install the Eclipse tooling. This can be installed into any Eclipse installation running Neon or newer in a Java 8+ JVM. For my use nowadays, I primarily use Oxygen.3a on the Mac, but any platform should work.

Once you have that installed, you can import the projects into your Eclipse workspace and the tooling will adapt some elements for use as a pseudo plugin project. It will auto-generate MANIFEST.MF and build.properties files at the project's top level (which is why it's important to have the ODP in a sub-directory, so the FP10+ MANIFEST.MF isn't overwritten) and use those to configure the XPages Java class path, requiring the same plugins as the NSF does, including XPages library plugins, as well as adding any used Jars to the classpath. The result is that you can use it to edit your Java code with full classpath knowledge:

Eclipse Project

Beyond that, it provides some autocomplete capabilities for editing .xsp files. Currently, it has built-in knowledge of the controls that come standard on a Domino 9.0.1 FP10 server, plus any custom controls in your application. This autocomplete takes the form of contributions to Eclipse's standard XML editor, so it's pretty snappy:

Autocomplete

This works for tag names as well as component properties.

Limitations

This tooling has some significant limitations:

  • The biggest is that it doesn't have any special knowledge of most design elements - and, if you use binary DXL for safety purposes, that means that most legacy elements are difficult to modify.
  • It doesn't currently do any programmatic pairing between editable design elements and their associated .metadata and .xsp-config files, so it's best to keep Designer around for creating those.
  • The XSP autocomplete consists just of contributions to the autocomplete list, and so it doesn't do any checking for legality of content or tag placement, nor does it currently have any descriptive metadata.

Future Prospects

I've gotten this project to a point where I can reduce my level of daily annoyance with my tools, which is an important step. There are a few more things that I'd like to add so I can further reduce my need to use Designer at all: giving the editors some knowledge of split files could allow for manipulating custom control properties in a better way, some property panes may be worth making, I'd like to have a way to modify binary-format DXL notes (which may either be by using ODA's CD structure implementation or by round-tripping the DXL to Domino to convert it to friendly format for editing), and I'd like to eliminate the strict requirement of having a Domino server around for compilation.

The last one is a fun project on its own: my plan is to have a headless Java app loading up an Equinox environment and using the same plugins and REST services as the Domino server. That's mostly functional now, but it has some odd compilation-time bugs that will take some investigation. With that in place, it'd remove the need for any separate server, though it would then require that your development environment have a Notes or Domino runtime available.

As always, I'd welcome any contributions, especially if someone has a particular itch they'd like to scratch. I have some open issues in the GitHub project that I likely want to tackle at some point, and I'm sure this could cover a lot more ground besides.

NSF ODP Tooling 1.0

  • Apr 28, 2018

A couple weeks back, I started a new project, and today I decided to declare it a 1.0. The premise of this project is simple: I really, really hate Designer. Since the original post, it's expanded into a set of tools that cover three main tasks:

ODP Compilation

The main impetus of the whole thing was to get a way to compile On-Disk Projects into working NSFs without using the extremely-fiddly Headless Designer route (and thus also decoupling the build process from Windows). In 1.0, ODP compilation works by installing a set of plugins on a Domino server (Windows or Linux), configuring some Maven properties, and wrapping the ODP in a Maven project. That process can upload dependent OSGi plugins and compile complex XPages apps as well as import the expected normal legacy Notes resources.

NSF Deployment

In addition to compilation, the Maven wrapper can be configured to deploy the NSF to a Domino server also running the plugins. This portion isn't as battle-hardened as compilation, but it seems to work, as long as your server ID is authorized to run the resultant XPages application, since that's how it'll be signed. Perhaps in the future I'll add the ability to sign with an ID out of an ID Vault.

Eclipse Tooling

In Eclipse Neon or above, the tooling adds a bit of knowledge about how to handle ODP Maven projects as well as some autocomplete capabilities for editing XSP source. Currently, autocomplete knows about the components that come with Domino 9.0.1 FP 10 as well as any Custom Controls inside the NSF, but in time I'd also like to include XPages-Library-contributed components. Additionally, it configures the project as a Plug-in Project with dependencies based on the selected XPages libraries and with source folders and embedded jars set up in the classpath.

The End Result

This project started out as just wanting to get Jenkins compilation working more reliably, but it's grown a bit to also allow for a smoother developer experience when working with NSF data in Eclipse. The "use case", for lack of a better term, that I'm aiming for is when you have the bulk of your code in OSGi plugins but have an NSF to maintain. I don't have any interest in replacing every editor from Designer, but I'd like to have enough that it's practical to do some basic XPages and legacy dev work without having to go over to Designer for everything. It's not fully there yet, but this version is a good start.

Getting The Code

The project is hosted on OpenNTF, and the source code is hosted on GitHub.

Compiling and Testing XPages Plugins With Java 9+

  • Apr 13, 2018

Thanks to 9.0.1 FP8, we've been able to use Java 8 on Domino for a while, and FP10 makes that support a bit more official at the OSGi level. However, Java 8 is no longer the latest Java runtime, and so anyone writing XPages plugins and compiling/testing them via Maven will likely run into a situation where the compiling JRE is 9 or above. There are a couple changes in these runtimes that add some wrinkles to the process, so I took up the task of working around the problems I hit, and I created a Git repository to contain the code and tips I used to do so:

https://github.com/OpenNTF/org.openntf.domino.java9compat

The README in the repo contains a couple bits of Maven configuration that can be used to successfully compile XPages plugins in Java 9 or 10, and the included project contains a patch fragment for the com.ibm.notes.java.api plugin that serves up the Notes.jar to work around the fact that CORBA has been removed from the standard JRE.

For my needs, those changes made me able to compile a reasonably-complex XPages application, but there may be other edge cases I haven't hit yet. As I do, I'll add information and code there.