XPages to Java EE, Part 2: Terminology

  • Jan 18, 2019

Much like with my earlier series, I think it'll be useful to take a minute to go over some of the terms that we'll need to know for dealing with Java EE, beyond just the many names of the platform. Additionally, I think it'll be useful to go over some of the things we specifically need to not know when it comes to non-OSGi development.

What To Leave Behind

Looking at my earlier vocabulary list, everything other than "JAR" and (unsurprisingly) "Java EE" can be set aside for now. Bundles, plugins, update sites, "Require-Bundle", "Import-Package", all that stuff - forget it. That's all specific to OSGi and, while you can write Java EE apps in an OSGi environment (like XPages), it's very uncommon.

Unfortunately, even outside of XPages specifically, Eclipse conflates OSGi and non-OSGi development a lot, doing things like offering to modify the project classpath instead of OSGi metadata in plug-in projects and vice-versa in non-plug-in ones. It took me a while when getting up to speed on Java to figure out what was "normal Java", what was OSGi, and what was just an Eclipse-ism.

Fortunately, that separation will be made easier by our Maven focus. If it doesn't exist in the Maven project, it doesn't exist at all, regardless of what Eclipse says.

What To Keep

One of the promises of XPages at the start was that it would be a bridge to "normal" Java technologies, and, though imperfectly, it did achieve this goal. A lot of the concepts and technologies we encountered in XPages are either the same in stock Java EE or are historically related.

For one, the normal Java runtime is the same - all the classes starting with java.*, like java.util.List and whatnot. Those are part of "Java SE", and they'll come with you wherever you go in Java.

Additionally, XPages uses Servlet as its basis like most other Java web tools. In XPages, you can access things like the HttpServletRequest and HttpServletResponse by way of #{facesContext.externalContext.request} et al, and those objects are the same in a normal web app.
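For illustration, here's roughly what fetching that object looks like in Java - a quick sketch that works the same way in an XPages class and a stock JSF app, since ExternalContext just hands back the container's request object:

import javax.faces.context.FacesContext;
import javax.servlet.http.HttpServletRequest;

HttpServletRequest req = (HttpServletRequest)FacesContext.getCurrentInstance()
	.getExternalContext()
	.getRequest();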

The "WEB-INF" folder that shows up inside an XPage'd NSF is also an EE-ism, and is the holding pen for "app stuff": configuration, dependency libraries, classes, and other bits all go in here. In an NSF, this is tucked away under "WebContent" (which I think is a semi-standard structural location for resources in uncompiled projects), but the idea is the same. "WEB-INF/lib" in there holds third-party jars, while the hidden-in-Package-Explorer "WEB-INF/classes" holds the compiled classes for the application.

Thanks to the Extension Library, we've also had a surprisingly-smooth introduction to one of the most-important current Java EE technologies: JAX-RS. The ExtLib packaged up Apache Wink, a now-defunct implementation of the standard, and made it pretty easy to build on with OSGi plugins. Even though the version of JAX-RS Wink implemented is a little old, the core concepts are the same, and so, if you ever walked down that path, that knowledge will serve you directly.
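As a refresher, a minimal JAX-RS resource looks essentially the same against Wink's older spec version as against a current implementation - the class and path here are hypothetical:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("hello")
public class HelloResource {
	@GET
	@Produces(MediaType.TEXT_PLAIN)
	public String hello() {
		return "Hello from JAX-RS";
	}
}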

What To Learn

There's potentially a whole ton to learn, but we'll start with a couple core concepts.

  • WEB-INF/web.xml - The web.xml file is the traditional core configuration point for a Java web app. You can specify configuration parameters, Servlet mappings, filters, and other bits here - though, over the years, annotation-based improvements have made it so that this file is now strictly optional during development. XPages doesn't have one of these in the NSF (it has kind of a pseudo implied one in the aether), but xsp.properties and faces-config.xml are conceptually related.
  • Web Application - I've been bandying around this term, and it's essentially the name of the finished product you put on a server. In XPages, an NSF is the main Web Application analogue: it's a contained blob of code that has its own internal configuration and identity.
  • Servlet Container, Web Container, or Application Server - These are varying terms for the software that loads and runs the web applications. In our case, that's Domino and its HTTP stack; in others, that will be Tomcat, WebSphere, GlassFish, or the like. Domino is technically a servlet container in two areas: the ancient "Java Servlets" support that haunts our server configuration documents and help docs to this day, and the hacked-apart subset of WebSphere that runs the XPages side of things.
  • Specs and Implementations - For cultural and historical reasons, the Java EE platform itself is a set of specifications, and each of those has at least one implementation, and one of THOSE is dubbed the "reference implementation" (usually developed with the spec and often coming from Oracle). So JAX-RS is a spec, while Jersey, Wink, CXF, and RESTEasy are implementations. For the most part, you don't need to care about the implementation, but you might if you want an extra feature that the spec doesn't provide or (as we'll talk about eventually) are deploying to a "bare bones" servlet container like Tomcat. When deploying to a Java EE server, you normally just write to the spec and the server will include some implementation to back it up.
  • Persistence - "Persistence" in this context basically means "databases". The most common database connection scheme for Java EE is JPA (Java Persistence API, you see) using JDBC to connect to a relational database. For NoSQL databases, the incubating project JNoSQL aims to behave similarly, though it doesn't have critical mass yet. With Domino, we never really had a layer like this - we either dealt with the lotus.domino API or xp:dominoDocument data sources, and those are much "closer to the metal", offering no object mapping or event hooks.
  • MicroProfile - Eclipse MicroProfile is a project started a couple years ago to take several of the most useful Java EE specifications, add a few new tricks, and create a small and speedy target without the huge code and political overhead of Java EE. Since it was started, Java EE went to Eclipse as well, and now the Venn diagram of the two is getting closer together: MicroProfile picked up another EE spec or two and EE got its act together and shed a lot of the obligatory baggage. It can be thought of now as an "opinionated" subset of EE that's purpose-focused on microservices.
  • CDI - I've talked a bit before about CDI, and it deserves another mention here both because of its importance to EE development and because of how weird and "magic" its behavior is. At its core, CDI is "managed beans with super powers". While managed beans began their life in JSF (I believe), they're so useful as a concept that they were brought down the stack to become one of the underpinning technologies. Where things get weird is that, beyond just saying "I have a session-scoped bean named 'foo' with type SomeClass", CDI covers auto-injecting instances of classes into other objects and, in some cases, auto-creating implementations of interfaces via proxy objects. It can get really strange really fast, but the basics will hopefully be clear when we get to examples - there's a small sketch just after this list.
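To make that last point a bit more concrete, here's a minimal sketch of the "session-scoped bean named 'foo'" case mentioned above. The class names are hypothetical, and the annotations come from javax.enterprise.context and javax.inject:

import java.io.Serializable;
import javax.enterprise.context.SessionScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named("foo") // available in EL as #{foo}, much like an XPages managed bean
@SessionScoped
public class SomeClass implements Serializable {
	private static final long serialVersionUID = 1L;
}

// Elsewhere, CDI will inject the current session's instance automatically:
class SomeOtherClass {
	@Inject
	private SomeClass foo;
}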

Next Steps

I figure that two posts of theory are enough for now, so, in the next post, I'll go through some steps to cover the creation of a new Java EE 8 application.

XPages to Java EE, Part 1: Overview

  • Jan 17, 2019

I've definitely come around to the idea that the future for Java with Domino involves Java/Jakarta EE. HCL apparently feels the same way, though what that "J2EE" bit on their slide means remains unspecified. Regardless, I think that it's important for the XPages community to at least dip our toes into JEE proper, and I plan to share some of my experiences with doing so.

I think the best starting point here will be a bit of history and context. As XPages developers, we were dropped into a weird alternate version of this world, and kind of backed into a lot of its concepts, so it'll be useful to get a feel for where this stuff came from.

Before I get into it, I should point out the significant caveat that I am not a full expert in all of this. I wasn't paying attention to J2EE when it came into being, and there are still large swaths of it that I haven't had to bother with. In particular, I have only a loose grasp of the various turmoils of pricing and vendors over the years, but fortunately those parts aren't too important for getting started now.

Naming History

In 1999, Sun released the first version of JEE, dubbed "Java 2 Platform, Enterprise Edition 1.2". Historically, the versioning of Java has been pretty... well, stupid. Because Sun wanted to make the 1.2 release of Java sound like a big deal, they called it "Java 2" in branding but didn't actually bump the internal version number to match. Java EE matched this, starting out as "J2EE". This type of branding - "J2EE 1.4" - lasted until the fourth release, "Java EE 5" (yeah, I know). The platform is still habitually called "J2EE", but it means the same thing as "JEE".

In 2017, after a couple years of neglect, Oracle decided that they didn't want to be bothered shepherding the platform anymore, and they did the honorable thing and open-sourced it to Eclipse. Since Oracle still maintains the "Java" name, that led to a bit of a scramble to come up with a new name for the platform. The initial name was "EE4J", and that remains the official name of the Eclipse project overseeing the whole thing, as well as the name of the specific reference implementation. After polling the community, though, the name "Jakarta EE" was chosen for the new version of the Java EE standard.

In short, though there are technical differences at each point, the gist of it is that "J2EE", "Java EE", "JEE", "EE4J", and "Jakarta EE" all kind of refer to the same thing.

The Core Meaning

The Java EE platform covers a lot of things and isn't strictly tied to web applications alone, but it effectively means "Java web stuff". For writing the types of web applications we're likely to run across, there's a whole swath of Java EE technology that we'll ignore - stuff to do with the giant, bloated-yet-fragile apps that we learned to associate with WebSphere in the bad old days.

Pricing History

As an "enterprise" offering, Java EE used to involve writing giant checks. You'd pick your vendor, send them a dump truck of money, and they'd give you an application development environment and a team of consultants to install it.

Over the years, things got a lot better. The licensing on the specifications was/became such that open-source versions of core components gradually became available, and then eventually even the big-ticket application servers went open source in various forms and to various extents.

While there used to be a huge hurdle to getting started, we're living in a comparative golden age where you can get top-tier stuff for production use easily and for free.

XPages's Relationship

XPages is effectively a fork of a specific set of Java EE technologies. The most important of these is JavaServer Faces, but it has a couple others in there: Servlet, JavaMail, JAX-RS (in the ExtLib), a janky version of JSP, and probably a grab bag of smaller technologies.

So XPages is Java EE and Domino is a Java EE server in that sense, but its historical division and the presence of OSGi make it so that you can't necessarily just jump in with current JEE development and deploy it to Domino. Some bits are easier than others (like JAX-RS), but everything has an asterisk.

Moreover, the specifics of XPages force us to "un-learn" some things that we learned while getting deeper into Java on Domino. OSGi is the big one - though it still exists, particularly in Eclipse, it has limited adoption for web apps. Additionally, the "develop live in the NSF" methodology, direct pairing of app + storage, and total lack of persistence framework for Domino mean that a lot of our ingrained habits run counter to what we'll learn in the future.

The Plan

Currently, I have only a loose plan in mind for this series. I expect I'll have another post or two of "conceptual" stuff before going into showing some actual code. For the most part, I expect the code will start where the Java Thing Series left off - not with picking up that code specifically, but with the starting point of Maven and Eclipse.

SNTT(uesday): Stepping Up My Tycho Game

  • Jan 15, 2019

I'm always on the lookout for ways to improve my projects' build process to get more-convenient results, cut down on IDE/compiler complaints, or to generally reduce the amount of manual work.

In the last couple weeks, I've figured out two changes to make that clean up my setup nicely: better source bundles and easier update sites.

Source Bundles

In OSGi parlance, a "source bundle" is a companion bundle/plugin for a normal bundle that contains the source code associated with it - for example, org.openntf.domino is paired with org.openntf.domino.source. With a bundle like this present, an IDE (Designer included) can pick up on the presence of the source code and use it for Javadoc and showing the original source of a class. It's extraordinarily convenient, rather than having to reference the source online or in another project (or not at all).

For a while, I've configured my Tycho projects to automatically generate these source bundles during build, and then I have ".source" features that reference them, which are then included in the final update site. This works very well, but it leaves the nagging problem that Eclipse complains about not being able to find the auto-vivified source bundles, and it also requires either putting the source bundles in the main features (which is a bit inefficient in e.g. a server deployment) or maintaining a separate ".source" feature.

It turns out that the answer has been in Tycho all along: instead of just generating source bundles, you can tell it to generate entire source features on the fly. You can do this by using the aptly-named tycho-source-feature-plugin:

<plugin>
	<groupId>org.eclipse.tycho.extras</groupId>
	<artifactId>tycho-source-feature-plugin</artifactId>
	<version>${tycho-version}</version>
	<executions>
		<execution>
			<id>source-feature</id>
			<phase>package</phase>
			<goals>
				<goal>source-feature</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<includeBinaryFeature>false</includeBinaryFeature>
	</configuration>
</plugin>

With this, the build will auto-create the features as it goes, including pulling in the source of any referenced third-party bundles, and then you can include them in the final update site. For example, if the feature you're building is com.example.foo.feature, you can include com.example.foo.feature.source in your output.

Eclipse Repositories

Historically, Domino-targeted update sites are built using the project type eclipse-update-site, which takes a site.xml and turns it into the final update site. This works well enough, but it has a couple problems. For one, it's deprecated and ostensibly slated for removal down the line, and it's best not to rely on anything like that. But otherwise, even when it works, it's fiddly: if you want to, for example, bring in a third-party feature, you have to explicitly specify the version of the feature you're bringing in, rather than letting the build environment pick up on what it is. This can turn into a drag over time, and it's always felt like unnecessary maintenance.

The immediate replacement for eclipse-update-site is eclipse-repository, which is very similar (you can "convert" by just changing the project type and renaming site.xml to category.xml) and solves the second problem. In a category.xml file, you can specify just the feature ID, leaving the version out or specified as 0.0.0, and it'll figure it out during the build.
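For example, a category.xml along these lines (with a hypothetical feature ID) will have its version resolved at build time:

<?xml version="1.0" encoding="UTF-8"?>
<site>
	<feature id="com.example.foo.feature" version="0.0.0">
		<category name="Example"/>
	</feature>
	<category-def name="Example" label="Example Features"/>
</site>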

However, this has a minor down side: though Designer can deal with these repositories without issue, the NSF Update Site template doesn't know about the generated artifacts.jar and content.jar files. You can use "Import Features", but that loses the feature categories, which are very useful when maintaining a large update site.

Fortunately, the site.xml format is extremely basic, so I created a Maven plugin a while ago to auto-generate one of these files. I improved it yesterday to pick up on the categories specified in the original category.xml file. This let me tweak the eclipse-repository project to shim in this generation before the final packaging:

<build>
	<plugins>
		<plugin>
			<groupId>org.darwino</groupId>
			<artifactId>p2sitexml-maven-plugin</artifactId>
			<version>1.1.0</version>
			<executions>
				<execution>
					<id>generate-sitexml</id>
					<goals>
						<goal>generate-site-xml</goal>
					</goals>
					<phase>package</phase>
				</execution>
			</executions>
		</plugin>
		<plugin>
			<groupId>org.eclipse.tycho</groupId>
			<artifactId>tycho-p2-repository-plugin</artifactId>
			<executions>
				<execution>
					<id>archive-repository</id>
					<goals>
						<goal>archive-repository</goal>
					</goals>
					<phase>package</phase>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

Now it's sort of a "best of both worlds" deal: I can use the non-deprecated form of the repository and its improved features, while still using the stock NSF Update Site.

This Maven plugin is in OpenNTF's Maven repository, so you can add it in by adding the repo to your root project's pom:

<pluginRepositories>
	<pluginRepository>
		<id>artifactory.openntf.org</id>
		<name>artifactory.openntf.org</name>
		<url>https://artifactory.openntf.org/openntf</url>
	</pluginRepository>
</pluginRepositories>

Letting Madness Take Hold: XPages Outside Domino

  • Jan 7, 2019

(Opening caveat: unlike some of my other recent dalliances, I don't plan to actually do anything with this one, and it's more of a meandering exploration of the XPages platform)

Since I've been on a real Open Liberty kick lately, over the weekend I decided to go another step further and test something I'd been wondering about for a while: whether it'd be possible to run the current form of XPages outside of the Domino HTTP stack.

I say "the current form" because XPages's history is long and winding, and led a fruitful life for a long time before being glommed onto Domino at all. If you poke around the core, you can see it bears all the scars of its life: references to WebSphere Portal abound, half of the plugins that make up the runtime are just thin OSGi wrappers around plain old Jars, and all of the "Domino" bits are clearly labeled as "adapters".

Still, it's been over a decade since the stack was intended to run anywhere outside Domino, and that's a lot of time for ingrained assumptions about nHTTP specifically to creep in. Nonetheless, I was curious whether it was possible to load it up outside of Domino and without OSGi.

Short Answer

Yep!

Long Answer

There are a couple things that contribute to making this setup practical, and they each bear some expansion.

Platforms and Execution Contexts

At a couple levels, the runtime breaks things up into generic concepts of "Platforms" and "ExecutionContexts" to handle some specifics about context directories, class loaders, and other bits. For example, if you get a type hierarchy on com.ibm.commons.Platform in Designer, you'll get a pretty immediate idea of what's going on.

OSGi/Services Bridge

Anyone who has written an XPages Library plugin is familiar with the concept of an OSGi extension: you declare your extension (for our purposes, and usefully, com.ibm.commons.Extension) in plugin.xml and then the environment picks up on it by looking for such extensions. The core Java runtime has a similar mechanism - ServiceLoader - that looks for files with the name of the extension type in the META-INF/services directory in your classpath. The result of both is the same: individual Jars/plugins can declare services, and some other part of the app can pick up on them without knowing the specifics.
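As a sketch of the ServiceLoader side (the service and implementation names here are hypothetical): a Jar ships a file named META-INF/services/com.example.SpellChecker whose content is the single line com.example.EnglishSpellChecker, and consuming code finds it like so:

import java.util.ServiceLoader;

// Instantiates every implementation declared in META-INF/services files on the classpath
for (SpellChecker checker : ServiceLoader.load(SpellChecker.class)) {
	// ... use each discovered service ...
}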

XPages uses IBM Commons's generic "Extension" type to paper over the differences between these, and the runtime will look for both or either depending on where it's working. And here's another part that conveniently still retains the vestiges of its youth: if you look inside the com.ibm.xsp.core plugin (since it's just a ZIP file), you can see these extensions declared both in the top-level plugin.xml and as individual files inside the embedded Jar.

So, if you load in these inner Jars as normal Maven dependencies in a .war file, the services will still tie together in much the same way, at least for the core runtime. Things get less convenient the newer the code is, though: the Extension Library, for example, primarily uses plugin.xml for its services, and so either an adapter runtime would have to look for this or you'd have to re-declare them in the "normal" way.

Light OSGi Use

Speaking of OSGi, that's one of the big potential stumbling blocks. XPages nowadays expects to run inside an Equinox container, and so a lot of code (say, the Dojo plugins) make assumptions about the loading of Activator classes and other things. These need some patching. Fortunately, the actual use of OSGi in most of these cases is extremely light: mostly, it's about instantiating these activators and then getting bundle class loaders. For basic needs, these can just be shimmed in: find the (blessedly public) static instance property in the applicable classes and put in small BundleContext+Bundle adapters that just return the context class loader. I'm sure there are bits that run deeper than that, and long-term it'd probably be more practical to just fire up Equinox, but this works for now.

FacesServlet

The core work of rendering an XPage runs through the class FacesServlet and more specifically DesignerFacesServlet (as a side note, I've gathered that seeing "Designer" in these classes refers most likely to "Lotus Component Designer", since those parts of the stack enter in before the Domino dependencies). In a modern JEE context, this'll take a little bit of wrapping, since it implements Servlet but doesn't extend HttpServlet, but not too much. For the most part, once you have your platform set up above, you can make a standard @WebServlet-annotated class and delegate the HttpServletRequest and HttpServletResponse objects to one of these, and it'll pick up on any compiled xsp.PageName classes in your .war:

@Override
public void init(ServletConfig config) throws ServletException {
	// Delegate the actual Faces work to the stock XPages servlet
	this.delegate = new DesignerFacesServlet();
	delegate.init(config);
}

@Override
protected void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
	// Wrap the Liberty request so the XPages stack sees what it expects
	delegate.service(new LibertyServletRequestWrapper(req), resp);
}

DesignerGlobalResourceServlet

Alongside the core Faces servlet, the DesignerGlobalResourceServlet does the work of, well, serving up global resources. This one's simpler than the Faces servlet, since it is indeed a fully-fledged HttpServlet. You could just declare this in your web.xml, but I like extending these classes in case I want to fiddle with them later:

@WebServlet(urlPatterns="/xsp/.ibmxspres/*")
public class LibertyGlobalFacesResourceServlet extends DesignerGlobalResourceServlet {
	private static final long serialVersionUID = 1L;
}

The NSF Part

Up until now, what I was able to create was a way to run XPages inside a normal web app without any real connection to Domino (other than pulling in the binary plugins). Actually running an existing XPage out of an NSF requires a little more bootstrapping, and unfortunately confines the page a bit.

Specifically, the most expedient route I found to accomplishing this was to fire up an LCDEnvironment object and ferry requests for NSF-hosted apps to this. With the presence of an active Notes runtime (via NotesThread.sinitThread() and bringing in Notes.jar and the NAPI plugin), LCDEnvironment#initialize will do a lot of legwork in assembling its own little world inside your application. It will look for com.ibm.designer.runtime.domino.adapter.HttpService declarations and bring them in, including the vitally-important NSFService.

The nice part of this is that it does a ton of work, handling not just XPages requests, but also in-NSF resource requests. The down side is that the NSFService does its work by heavily wrapping the environment, down to providing a servlet context that declares itself as 2.4 even in a 4.0 runtime. Still, a bit of code in the service method gets it working nicely:

String contextPath = StringUtil.toString(req.getContextPath());
String path = req.getRequestURI().substring(contextPath.length());
RequestContext requestContext = new RequestContext(contextPath, path);
HttpSessionAdapter sessionAdapter = new ServletHttpSessionAdapter(req.getSession());
HttpServletRequestAdapter requestAdapter = new LibertyServletRequestWrapper(req);
HttpServletResponseAdapter responseAdapter = new ServletHttpServletResponseAdapter(resp);
lcdEnvironment.service(requestContext, sessionAdapter, requestAdapter, responseAdapter);

Those adapter/wrapper classes really just delegate the calls, but they're needed because that's what LCDEnvironment expects. I imagine those interfaces exist to create a consistent environment in lots of situations without tying the stack to the stock HttpServlet* classes. In general, the XPages stack loves adapters.

So, Is This A Good Way To Run XPages?

Nnnnnnnnnope! I mean, not really. Particularly in the first route, where you load up an XPage without any knowledge of the NSF part, you have some intriguing paths to interact with the surrounding Servlet 4.0 environment directly from the page. However, I don't know why you would do this. You could hypothetically create some components or hooks to allow use of, say, Web Sockets, but you'd be better off just using current JSF for that, since the work is already done. The in-NSF XPages runtime adds some extra barriers to that, too... I'm sure it'd be possible to provide a path to it, but, again, you'd be better off using existing tech.

Additionally, this isn't a way to bring XPages "home" to JSF. The JSF API and implementation that XPages uses isn't merely old, but hacked to pieces: if you look at javax.faces.component.UIComponent, you can see it's riddled with "_xsp" methods, indicating a thoroughly-unclean layering. You wouldn't be able to shim in the current JSF without forking it into a distinct project.

Still, it's nice to know it's possible, and it sure was a fun project to tinker with. I do admit that the notion of building an XPages app using MVC 1.0 with XPages as the view technology is a little tantalizing, but it's certainly not worth traveling down that road on the back of a chopped-up hacky rework of the platform. And the only reason it's tantalizing is that I'm still more comfortable with XPages than any of the other front-end stacks, and that doesn't itself make it a good fit. Fun to think about, though.

New Project: Domino Open Liberty Runtime

  • Jan 3, 2019

The end of the year is often a good time to catch up on some side projects, and this past couple weeks saw me back to focusing on what to do about our collective unfortunate situation. I started by expanding the org.openntf.xsp.jakartaee project to include several additional JEE standards, but then my efforts took a bit of a turn.

Specifically, I thought about Sven Hasselbach's series on dropping Domino's HTTP stack while still keeping API access to Domino data, and decided to take a slightly-different approach. For one, instead of the plucky-but-not-feature-rich Jetty, my eye turned to Open Liberty, the open-source variant of WebSphere Liberty, which in turn is the surprisingly-pleasant trimmed-down counterpart to WebSphere. Using Liberty instead of Jetty means getting a top-tier Java EE runtime, supporting the full Java EE 8 and MicroProfile 2.1 specs, developed by a team chomping at the bit to support all the latest goodies.

Additionally, I decided to try launching Liberty from a Domino plugin, and this bore fruit immediately: with this association, the Liberty runtime is able to fire up sessions and access databases as the Domino server without causing the panic halt that Sven ran into.

So, in short, what this project does is add a fully-capable Java EE server with all the fixings - the latest JEE spec, HTTP/2, Servlet 4, WebSockets, and so forth - running with native access to Domino data alongside a normal server, and with the ability to manage configuration and app deployment via NSFs. Essentially, it's like a second HTTP stack.

Why?

I made some good progress in bringing individual JEE technologies to XPages, but I was still constrained by the core capabilities of the XPages runtime, not the least of which was its use of Servlet 2.4, a standard that went obsolete in two-thousand-freaking-five. Every step of the project involves fighting against the whole underlying stack, just to get some niceties that come for free if you start with a modern web container.

Additionally, while Domino has the ability to run Java web applications, this support is similarly limited, providing very few of the standards that make up Java EE and even apparently lacking a JSP compiler set up on the server. It's also, by virtue of necessarily wrapping the app in an OSGi bundle, much fiddlier to develop than a normal WAR file.

And, in a general sense, I'm tired of waiting for this stack to get better. Maybe HCL has grand plans for Java development on Domino in the future - they haven't said. I still doubt it, in part because of the huge amount of work it would entail and in part because I'm not sure that improving XPages would even be strategically wise for them. And say they did improve XPages in a lot of ways people have been clamoring for - WebSockets and whatnot. Would they cover all of the desired features? What about newly-emerging technologies from outside? Their Node.JS strategy makes me think they've thought better of being the vendor of a full-stack web technology.

This approach, though, provides a route to making web apps with current standards regardless of what HCL does with XPages. This way, you can work with the entire Java web community at your back, rather than cloistered off with unknown technology. If you want to make an app with Spring, you can, following all of their examples. If you'd rather use PrimeFaces, or just JAX-RS, or JSP, you can do so just as easily. And if your chosen technologies go out of favor, you'll be in the same boat as countless others, and the new preferred choices will be open to you.

Finally, there's just the fact that Java EE 8 is really, really good. The platform made tremendous strides since the bad old days, and developing an app with it is a revitalizing experience.

How?

To set this up, I deliberately chose a very low-integration path: the task in Domino unzips a normal Open Liberty distribution and then runs it using Domino's JVM, just using the default bin/server script. No embedding, no shared runtime. This way, it doesn't have to fight against any constraints that Domino's environment imposes (such as the fact that both Domino and Liberty want to run an OSGi environment), and it doesn't lead to a situation where a crash in Liberty would bring down Domino's HTTP.

The rest kind of comes along for the ride. Since it's running with the Domino JVM, it already has the trappings needed to use Notes.jar, so it's really just a matter of using the classes and making sure you run inside a NotesThread or otherwise initialize and terminate your thread.
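In practice, that boils down to a pattern like this - a sketch with the stock lotus.domino classes, eliding NotesException handling:

import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

NotesThread.sinitThread(); // attach this thread to the Notes runtime
try {
	Session session = NotesFactory.createSession();
	try {
		// ... work with Domino data as the server ...
	} finally {
		session.recycle();
	}
} finally {
	NotesThread.stermThread(); // always pair with sinitThread()
}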

Future

Assuming I keep with this project (and I think I have some for-work uses for it, which dramatically increases its odds), I have some ideas for future improvements.

I've added a basic HTTP reverse proxy servlet, and I plan to make it more integrated. The idea there is to allow Liberty to be the primary HTTP entrypoint for Domino, with anything not handled by a web app it's hosting to pass through transparently to Domino.

In time, I aim to add some more integration, such as CrossWorlds and general utilities. I've started by adding in a basic user registry, allowing JEE-standard apps to authenticate against Domino without extra configuration (though it doesn't currently do groups). That could be expanded a good deal - Liberty could read SSO tokens using the C API (or share LTPA as WebSphere normally does), and it'd be nice to have a reasonable method for sharing non-SSO DomAuthSessId cookies.

The Project

I set the project up on GitHub: https://github.com/OpenNTF/openliberty-domino . I think there's some definite promise with this, especially once there are a couple example apps that could show off the possibilities.

Learning to Appreciate IntelliJ

  • Nov 22, 2018

I'm a big fan of Eclipse, both the IDE and the foundation - I like how organized they are, I like that they're stewarding Jakarta EE, I'm ostensibly a committer, and I even have an Eclipse T-shirt. I'm all in!

However, I have to admit that the IDE has been grating on me for a lot of my work lately. The two big ways are the ever-present troubles when working with OSGi projects and its surprising UI slowness when working with a lot of projects in the workspace. Though it's sitting on top of an eight-core Xeon, it just drags - expand/collapse animation is slow, the editor freezes up frequently when trying to autocomplete, and it's just a dog. I had high hopes that Photon's push for speed would alleviate this, but it's still a drag. I think it's probably faster on Windows, but that's just not worth contemplating.

Moreover, my work with Darwino involves Android apps, and Android development in Eclipse just completely lost steam after Google switched to IntelliJ. The Andmore project aimed to keep it alive, but you can see how far that went just by looking at the release list. It can still kind of work, but it's very much swimming against the current.

So I've been on the lookout for alternate IDEs for some or all of my work. Visual Studio Code has been a strong contender. I've already switched to it for most JavaScript work and it has decent support for Java projects by way of the Java Language Server. However, despite that project being run by Eclipse, it doesn't support Tycho projects, and so I can't use it for full OSGi development. Moreover, though it's very capable, Java support clearly isn't its main focus, and so it won't cover all the same bases as a "proper" Java IDE.

But looming in the corner the whole time has been IntelliJ IDEA. I'd used it here and there for Android apps, and it was okay, but it always rubbed me the wrong way. Part of that is just muscle memory from Eclipse - any different IDE is going to be inherently annoying in some way just because it's different. Some of it was also runoff resentment from working with Android, so I could discount that too. It doesn't help that the UI is written in Swing, and so, even with its Mac-ish coat of paint, it still feels like running some kind of weird Linux app. Eclipse isn't exactly fully Mac-like either, but SWT carries it much further along.

But still, I figured I should give it a real shot. Lots of people swear by it, especially as converts from Eclipse, so it has to do something right. As it turns out, it does quite a bit right.

Speed

So far, I've found it to be significantly snappier than Eclipse for the kind of work I'm doing, and without all the UI blocking that's been plaguing the latter for me for a while. Autocomplete happens quickly and consistently and doesn't freeze the UI, project import processing happens quietly on background threads, and everything just feels smooth.

Java Intelligence

IntelliJ's Java support carries quite a bit of cleverness, and JetBrains made great selections for what to enable by default. It's particularly good at what Eclipse refers to as "code mining", where it will inspect your code and add little visual annotations to make what you're doing more clear. For example, I love what it does with "stream-style" chained code:

IntelliJ "stream" code intermediate classes

It has all sorts of little niceties like this, and does pleasant things like compressing pre-lambda functional-interface calls to look like lambdas even in Java 6 projects. And another favorite: pseudo named parameters, automatically showing when it's most useful for clarity:

Pseudo named parameters

It's also pretty good at recognizing legal-but-inadvisable idioms and offering in-place conversions. For example, if you write this:

String bundles = osgiBundleList.stream().collect(Collectors.joining(","));

...it will have a quick action to convert it to this instead:

String bundles = String.join(",", osgiBundleList);

This sort of thing isn't unique to IntelliJ (Eclipse has some of this and there's no shortage of code-linting tools), but it does a particularly good job integrating it in a useful and non-obtrusive way.

OSGi

Though Eclipse is something of a flagship user of OSGi and so much of the IDE is geared around plug-in development, it sure does make it a real PITA to use sometimes. In particular, the single active Target Platform per workspace, configured manually in the preferences, is a huge hurdle when dealing with multiple diverging OSGi projects.

IntelliJ handles this much better for my needs. Primarily, it does this by letting its OSGi plugin (dubbed Osmorc) construct the platform per-project from whatever the project's dependency mechanism is. In the case of Tycho, this means that it will pick up on any p2 repositories specified within the project (such as the ${notes-platform} one commonly used for XPages libraries) and find the dependencies in much the same way that Tycho itself does. This means you don't have to worry about hand-holding the IDE to make it do a job that the build system already knows how to do.

I think that you wouldn't necessarily want to use this to run and debug full-fledged Eclipse RCP applications, though I imagine you could via Maven goals if nothing else. For my needs, though, it seems like it hits exactly the right level of support.

External Annotations

I became a convert to null analysis a while ago, and I like to stick with it as much as possible. One of the things that makes it a little awkward, though, is that the core JDK classes - String, etc. - don't have any annotations. Most checking tools, Eclipse included, support the concept of "external annotations": a separate definition file that can tell the compiler that, say, String.valueOf(...) will always return a non-null value without having to modify the compiled classes.

Where IntelliJ's pleasantness comes in is that it automatically applies a set of external annotations to the JDK, whereas Eclipse is more of a bring-your-own sort of thing. I never bothered figuring out the right way to do it in Eclipse, and so it's great to just have that set up for me without having to think about it.

JavaScript/TypeScript/etc.

I haven't worked too much with JavaScript apps in IntelliJ yet, but I know that WebStorm is highly regarded, and I sprung for the Ultimate edition, so it's sitting there for me anyway. Just a quick glance shows that it's much more comfortable than Eclipse: I have npm-based JS projects wrapped in Maven projects, and it picks up on both aspects just fine. I can open up a JSX file and it properly builds the class hierarchy, finds dependencies, and all that. Eclipse, on the other hand, hasn't seemed to update its JavaScript editor since about 2001, and the third-party plugins are a mess of limited functionality, incompatibility, and extra cost. It would sure be nice if I can use IntelliJ for my JS dev instead of having to hop between it and VS Code (as nice as VS Code is).

The Future: Plugin Development

I'll go into this more in my next post, but my trial-run project to get comfortable with IntelliJ was to port the run-from-workspace aspect of the XPages SDK to IntelliJ, and I did so successfully. It turns out that the plugin mechanism is characteristically clean and straightforward, at least for what I need to do, and I expect I'll be doing similar work for Darwino.

So, as much as possible, I think I'm going to try living in IntelliJ for a while and see how it treats me. So far, so good, in any event.

Java Hiccups

  • Nov 7, 2018

To take a break from the doom-and-gloom of my last post, I figured it'd be good to dust off a post idea I've had in my drafts for a while: common hiccups that Java developers - particularly those coming from a Domino background - run into. This is sort of a grab bag of non-obvious concepts that are easy to assume incorrectly about, whether because of the way other languages work or the behavior of the lotus.domino API specifically.

So, roughly in order of complexity:

import Is Just For Cleanliness

In many languages, in particular C/C++/Objective-C, the natural equivalent of Java's import statement has a massive effect, physically grafting files into your source. In Java, though, import is really just for developer convenience. At runtime, there's no difference between having written this:

import java.util.ArrayList;
import java.util.List;

/* snip */
List<String> foo = new ArrayList<>();

...or this:

java.util.List<java.lang.String> foo = new java.util.ArrayList<>();

If you import a class but never use it in a file, it won't have any effect on the runtime behavior of the class. It's just used by the compiler to clarify what you mean when you use a bare class name without a package reference.

Incidentally, as seen here, classes within the java.lang package (but not subpackages) are auto-imported, so it's as if each Java file has an invisible import java.lang.* at the top.

Compilation Doesn't Bake In Libraries

This is related, and is also an area where Java differs from some other environments. With C et al, you have the option to statically link referenced external libraries - which is to say, grab their contents at build time and put them into your compiled result such that they may as well be part of your program. Java doesn't do this: every time you reference a class or method, it's really just storing the equivalent of the string name of that class, which is then resolved at runtime.

This is why it's very easy to run into a ClassNotFoundException: you can compile code with some classes present on the class path, but then run it in a system where they're not present. The Java runtime doesn't pre-check whether all the required classes are available when it starts running, so you only find out when it hits that line of code.

Different Java-based environments deal with this in different ways. Standard (non-Domino) web apps deal with this by including dependency jars inside the WEB-INF/lib folder during the packaging phase of building. OSGi, XPages's framework of choice, has a whole dependency mechanism where you can specify bundles or packages with version ranges, in the hope of bringing some order to the chaos, with mixed success.

Primitives Are A Thing

Though the term "primitive" means a built-in data type generally, here I'm specifically talking about things like int and double. For historical and performance reasons, Java has a conceptual and practical distinction between the objects that you deal with most of the time and the primitive types used mostly for number storage. Namely: byte, char, short, int, long, float, double, and boolean. Unlike object references, there is no concept of a "null" value with these that would cause a NullPointerException. Referring to a variable with one of these types will always contain some value if the code compiles, even if it's just the 0/false default for an object property when not otherwise initialized.

Each of these types has a corresponding "boxed" object version, generally with the un-abbreviated name capitalized, such as Byte and Integer.

The distinction used to be harsher than it is today, thanks to autoboxing. Autoboxing is a compile-time behavior that will automatically convert between the primitive types and their object holders as necessary, allowing this type of code, which would otherwise be illegal:

Object i = 3;
int j = new Integer(4);

Autoboxing is mildly inefficient, so it's good to know that it exists, but you don't normally need to lose sleep over it.

All Object Variables Are Pointers

In a language like C++, there is a distinction between a variable that "is" an object vs. one that is a pointer to an object somewhere in memory. In Java, however, the former doesn't exist: an object variable is only ever a pointer. This has a couple implications. For one, this code deals with only one object:

SomeClass foo = new SomeClass();
SomeClass bar = foo;
bar.setName("hi");
foo.setName("hello");
bar.getName(); // Will be "hello"

This is also why Java is picky about not referencing object variables until they've been initialized to at least something, so this generates a compile-time error:

Object foo;
foo.getClass();

Unfortunately, unlike some languages, Java has no language-level support for enforcing the distinction between a null object reference and a non-null one, which is why NullPointerExceptions are so prevalent.

Another implication of this leads into its own hiccup common to LotusScript programmers:

Strictly Speaking, All Method Arguments Are "By Value", But...

All method parameters in Java are "by value" in the LotusScript sense, but the fact that all object variables are pointers means that the "value" you're passing to the method for an object parameter is always a reference. Java has no mechanism to pass a reference to a primitive type, nor does it have a mechanism to implicitly duplicate an object when passing it to a method.

Not only is this a bit conceptually confusing at first, but it's also a potential trap for bad programming practices. It's very easy to write a method that performs modifications on objects passed in as parameters, and this is often the right thing to do. However, since the language doesn't have any syntax mechanism for broadcasting this behavior, it's up to you as the programmer to either write the method name in such a way that it's obvious what's going to happen or clearly state it in the documentation if it's something that's going to be used outside the current file.
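A quick demonstration of the distinction, reusing the hypothetical SomeClass from above:

public void rename(SomeClass obj) {
	obj.setName("hi");     // Visible to the caller: both references point to the same object
	obj = new SomeClass(); // Invisible to the caller: only this method's copy of the reference changes
}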

Casting Objects Doesn't Do Anything

By "casting", I'm referring to something like this:

RichTextItem body = (RichTextItem)doc.getFirstItem("SomeItem");

The (RichTextItem) is a cast, and it's yet another area that diverges from some other languages. What casting an object in Java means is that you're going to refer to an object by a different class or interface than the one it's been previously referred to as. It has some runtime implications, but the thing to keep in mind is that it's about choosing the name for something that exists as opposed to changing an object into a different class.

So, for example, the RichTextItem idiom above exists because the Document#getFirstItem method returns an object typed as Item, but, when the item in the document is rich text, it will actually return a RichTextItem object. RichTextItem is a subclass of Item, and so it's legal to refer to such an object as RichTextItem, Item, Base (the common interface for all Notes objects), or Object (the common superclass of all objects). In a situation like this, you have to cast the object because you're going to refer to it as a more-specific type of object than the one the method says it returns.

If you do this and the object is not actually of the type you're trying to cast it to (in this case, if it's a plain text item or MIME, most commonly), you'll end up with a ClassCastException, because the cast is enforced at runtime. But, success or fail, the cast will not actually affect the object itself in any way - it will continue on being whatever it was already, regardless of name.
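To guard against that, the usual idiom is to check with instanceof before committing to the cast:

Item item = doc.getFirstItem("SomeItem");
if (item instanceof RichTextItem) {
	// Safe: the underlying object really is rich text
	RichTextItem body = (RichTextItem)item;
}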

Casting Primitives Does Do Something

For better or for worse, performing a cast on a primitive type does have the possibility of creating a new value. For example:

int foo = Integer.MAX_VALUE;
short bar = (short)foo;

Because int can hold more data than a short, this case creates a new value based on chopping off the highest-value bits of the internal binary representation of foo. (As a side note, because of the fun way computers deal with numbers, foo is 2147483647, while bar is -1.)

Normally, this behavior doesn't matter too much, since, if you have a method that takes, say, an int and you have a long, you can safely cast it down since it'll likely be a tiny value anyway. It's important to know that it can happen, though, and this behavior is very important when dealing with, for example, native C libraries that use unsigned values, which do not exist in Java as such.

Java Has Only A Limited Concept of Immutability

"Immutability" refers to the inability to change the value of an entity once it's created. It's come to the fore as a concept recently because working with immutable objects sidesteps a lot of issues with asynchronous programming. Java, unfortunately, doesn't really have any language-level support for immutable objects in the sense that, for example, Swift does.

Java throws a bit of a curve ball in this area with the final keyword, which means that a variable can't be reassigned after being first initialized. This means that you can't do things like this:

final int foo = 3;
foo = 4; // compiler error

final SomeClass bar = new SomeClass();
bar = new SomeClass(); // compiler error

This, on the other hand, is entirely legal:

final SomeClass foo = new SomeClass();
foo.setName("hi");
foo.setName("hello");

This is because the only thing blocked from changing here is the value of foo-the-reference, but the object it's referencing can be changed at will.

An object can be made effectively immutable, though, by means of making its outward-facing methods not change any of the internal state. This is used commonly for "value" classes, such as the aforementioned Integer. Though the language doesn't do anything to guarantee that the Integer class doesn't allow mutation, the class is written in such a way that it has no inlet for it.

Because of the value of immutable objects, they're used commonly in the core Java classes and in third-party libraries, particularly newer ones. However, since the language can't tell you if an object is immutable, you have to be on the lookout for whether a given method modifies the existing object in-place or returns a new object reflecting the change. This comes up frequently with Strings, which are immutable in Java. This is something I've seen commonly:

String foo = " hello ";
foo.trim();
System.out.println(foo);

That code will print " hello ", with the leading and trailing spaces (though without the quotes). This is because the String#trim method, like all "changing" methods on String, leaves the original value intact but returns a new String object reflecting the expected value.
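The fix is to capture the returned value, most simply by reassigning the variable:

String foo = " hello ";
foo = foo.trim(); // trim() returns a new String; the original is untouched
System.out.println(foo); // "hello"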

This is just something you have to be on the lookout for, especially since this pattern isn't even consistently applied within the core Java classes. The Date class, for example, is infamously bad in a lot of ways, and one of those ways is that it has mutation methods.

Generics In Java Are Weird

A "generic" refers in this case to a class that is declared as being associated with one or more other types that can be defined after the fact. The prototypical example of this is a collection class, like List<String> foo. In this case, the List interface is generic and lets you specify the type of object you expect to find within it, in this case String.

Generics, unfortunately, were added after Java's initial release, and they bear the marks of it. Unlike languages like C++, Java generics are largely syntactic sugar, meant to replace things like:

String someString = (String)aListIKnowHasStrings.get(0);

...with this:

String someString = aListDeclaredWithStrings.get(0);

However, under the covers, a List only ever really knows it contains Objects, and the second form just transparently shims in a (String) cast at runtime. That's why you can do something like this:

((List<Object>)(List<?>)aListDeclaredWithStrings).add(new NotAStringObject());

That line will not only compile, but it will execute without issue at runtime. It's only later, when you try to extract the value to a String variable, that you'll hit a ClassCastException.

Some generic information is retained at runtime, depending on how it's used, but for the most part it's best to think of it as just a syntax nicety. This behavior is endless trouble, but something we have to live with.

Garbage Collection Is Automatic, But Resource Management Isn't Necessarily

This is one of the main things that bites Domino developers as they learn about Java. One of the early things they learn when switching from LotusScript to Java is that now you have to worry about the .recycle() method on your objects, or else you'll have trouble. This leads to two misapprehensions: that Java in general requires "recycling" for every object, and that recycling with Domino objects is about memory in the same way that a Java OutOfMemoryError is.

Unfortunately, the reason that recycle() exists at all requires delving into some nitty-gritty aspects of the Java environment, but I first want to reinforce that Java uses automatic garbage collection at all times to watch for and delete objects that are no longer used. That "no longer used" bit glosses over a bit, but take this as an example:

public String foo() {
  String a = "hello";
  String b = " there";
  return a + b;
}
public void bar() {
  String message = foo();
  System.out.println(message);
}

There are three objects in action here, but, by the time the code reaches the System.out.println line, a and b are no longer used and will be slated for automatic garbage collection. You as a programmer do not need to worry about them.

The lotus.domino objects, though, are trouble. I think it's best to not think of recycle() in terms of "memory" but instead think of the objects as "open resources", in the same way that you might open a network connection or a stream to a file on the filesystem. Unlike object memory, network resources are not necessarily automatically closed by Java - there are some affordances with syntax and the concept of "finalizers", but, in general, the responsibility for closing a resource lies with the programmer.

There are a few grab-bag notes to do with these objects:

  • Different lotus.domino objects refer to different kind of backing resources, which is why problems will sometimes manifest as complaints about memory (when they refer primarily to C-side structures in Domino's native memory, separate from Java) and sometimes as complaints about handles (database and document references, generally)
  • Recycling a "parent" object recycles all of its children, but the relationship is not always clear. Importantly, DateTime objects are children of the ancestor Session, even when you retrieve them from a Document, and so they can linger for a long time
  • Agents and XPages both mitigate and conceal the need for recycling by automatically closing the auto-generated Session(s) at the end of the agent execution or page request. In practice, you only really need to worry about recycling if you're, for example, looping over a large view - there's a sketch of that just after this list
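For that view-looping case, the standard idiom is to recycle each document as you go:

Document doc = view.getFirstDocument();
while (doc != null) {
	// ... process the document ...
	Document nextDoc = view.getNextDocument(doc);
	doc.recycle(); // free the backend handle before moving on
	doc = nextDoc;
}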

I may make another one of these posts in the future, and hopefully this goes a little way to clearing up some common misconceptions.

How Do You Solve a Problem Like XPages?

  • Nov 2, 2018

(Fair warning: this is a meandering one and I'm basically a wet blanket the whole way through)

Last week, HCL held the third of their Twitter-based developer Q&As, with this one focusing on XPages and Designer. The majority of the questions (including, admittedly, all of mine) were along the lines of either "can we get some improvements in the Java/XPages stack?" or "is XPages still supported?". The answer to the latter from HCL, as it would have to be, is that XPages is still alive and "fully supported".

I don't doubt at all that XPages is supported in the sense that it has been for the last couple years: if you as a customer encounter a bug in the platform, support will take your call and will most likely either have a workaround or will get a fix in. This will no doubt happen naturally as time marches on, primarily when a new version of a browser breaks something in the version of Dojo that XPages uses (currently, I believe, a couple notches down the list as 1.9.7). So, that's good, and is better than the worst-case scenario.

It's not great, though. Version 10 had essentially no changes for XPages - that makes sense with its "stanch the bleeding" market goal, but it continues the history of very little progress. Outside of the forced-for-security-reasons bump to Java 8 in 9.0.1FP8 (which is admittedly nice), the last major addition to XPages was the Bluemix tooling via the Extension Library, which, as far as I can tell, only exists because of the way funding politics works inside IBM. Before that, I'd mark it as the promotion of Bootstrap renderers from Bootstrap4XPages to the main ExtLib just shy of four years ago. Before that, it was... I guess adding the ExtLib to the main product in 9.0, which sort of counts. Before that, it was pretty much the introduction of the extension points and ExtLib in the 8.5.2 era. And sessionAsSigner, I suppose.

Not to belabor the point too much further, XPages developers are in an uncomfortable spot. For a decade or so now, XPages was the clear "this is what Domino developers should be doing" choice, especially in the face of many Domino shops wanting to at the very least get rid of the Notes client. However, though it was modern enough when it was introduced, the stack has missed the boat on a lot of evolution, both in simple terms of its Java EE common ancestor improving on its own and in larger terms of changes in the web development world.

Had XPages continued under real active development, it could have gradually improved to fit more comfortably in a world of transpiled JavaScript and CSS pre-processors, strict focus on REST APIs, and reactive and streaming APIs.

But...

But even in that "active development" alternate universe, the path would have been awkward. Though XPages has the capacity to be well-structured, Designer and IBM provided no help on this: no model framework, no internal routing, not even an indication that writing Java code was possible in an XPages app until several versions in. The "MVC" aspects of its JSF components were beaten down to match the expectations of an NSF container and of LotusScript developers. There's very good reason for that - very few Domino shops were likely to send developers to computer-science boot camps to learn about proper Java EE structure, and the only way XPages was going to work at all was if it started out as "forms but with partial refresh".

And, after that first unstructured version, it largely fell prey to the usual problems of enterprise software: IBM isn't in the business of doing things their customers aren't asking for, and the community members asking for, say, a faces-config.xml editor weren't backing those requests up with big licensing checks. I get that, too, I really do. If you spend too much time hypothesizing about and implementing what customers might want, you run the risk of throwing your money down a bottomless hole while your actual customers suffer and leave. So IBM generally swings hard in the other direction, and it's understandable if unfortunate in cases like this, especially when what your customers are strictly asking for is stasis.

So What Now?

Our current situation now is that HCL plans to have a roadmap in Q1 2019, so I suppose we'll wait and see what they say again. I've been mulling over what I think should be the way forward for XPages and its users, and I don't see any clear good solution.

The option that immediately springs to mind is "add more features to it": HTTP/2 and WebSockets, newer renderers, an IDE that encourages good development practices, better ways to bring in third-party libraries, cleaner JavaScript, and so forth. But what makes this questionable for me is the sheer amount of work, especially since they'd have to start by digging out of an immense amount of technical debt. And this would be very specialized work indeed - the XPages stack is complicated. I'd wager that just the part of the stack that handles ferrying attachments between the browser and a document is more complex than most entire Domino applications by a good margin. While some of the improvements would be handled via the core Domino server team (part of the HTTP upgrades, namely), HCL would likely have to acquire a team of Java developers and have them learn a giant stack of OSGi, JSF, Domino APIs, and a couple decades of legacy decisions before even getting going. Possible? Certainly. It's just kind of a hard sell.

Another possibility would be open-sourcing the stack and either maintaining it as a project themselves, giving it to OpenNTF, or handing it to an organization like Eclipse, which now holds the reins of Java/Jakarta EE generally. I think that open-sourcing it would have immediate benefits to XPages developers regardless of what else they do with it, but I'm skeptical of how much of a life it would have if it was converted to a community-run project. As it stands, I can only think of a handful of people who a) are aware of XPages and b) would be capable of contributing to its core code. That's not a problem if you are employing people to work on it full-time, but I don't think it has a large enough base to exist on a "side project" basis. Attracting new blood would be an uphill battle: even projects like Andmore that have a clear purpose for existing and a contingent of people desperate to keep their workflow can wither on the vine immediately. Outside of Domino developers, XPages would be viewed as "JSF, except old and restricted to a platform you thought died in the 90s".

The other main thing to do with the stack that doesn't involve killing it, I think, would be pushing it to a state that focuses on REST APIs instead of handling the UI itself, which is something that some XPages devs have been doing already, either by switching to plugins serving up JAX-RS services or via in-NSF controls. This is something that IBM kind-of-sort-of said they were aiming for a couple years ago when they put forward SmartNSF as a good option, but the effective demise of the Extension Library cut off its path into the core, at least for now. Overall, I think that this would make sense. The experience of writing REST services inside an NSF (or in an OSGi plugin) is significantly worse than JAX-RS in a normal Java EE or Spring app, but it provides a clear path for existing NSFs and code to continue being used - more or less - without having to set up a second app server. It may not be the preferred solution for Java in Domino, but it would have the advantage of improving the platform without having to worry anymore about how, say, all the core renderers use tables for layout.

For Developers

For XPages developers, my long-proffered advice remains the same: learn things that aren't XPages. Whether that means diving head-first into Java EE or Spring, focusing on client-JS development, learning Node, or any other option, you'll likely be well-served by it.

It's possible that you could continue to do XPages work or go back to the Notes client indefinitely - XPages will get patches if nothing else, and Nomad breathes undeserved life into LotusScript - but there's no guarantee there. There's no guarantee anywhere else, I suppose, but staying too tied to Domino-specific technology keeps you at immediate risk of a from-above directive to switch away.

In short: do not expect a cavalry to ride to your aid.

AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10

  • Oct 19, 2018

Since 9.0.1 FP 10, and including V10 because it's largely identical for this purpose, I've encountered and seen others encountering a couple strange problems with compiling XPages projects. This is a perfect opportunity for me to spin a tale about the undergirding frameworks, but I'll start out with the immediate symptoms and their fixes.

The Symptoms

There are three broad categories of problems I've seen:

  • "AbstractCompiledPage cannot be resolved to a type"
  • Missing third-party XPages libraries, such as ODA, resulting in messages like "The import org.openntf cannot be resolved"
  • Complaints about MANIFEST.MF, like "MANIFEST.MF has no main section" and others

The first two are usually directly related and have the same fix; the second can also stem from a few other sources, while the last is entirely distinct.

Fix #1: The Target Platform

The first two are based on problems in the active Target Platform - namely, one or both of the standard platform components going missing. The upshot is that you want your Target Platform preferences to look something like this:

Working Target Platform

There should be a selected platform (the name doesn't matter, but "Running Platform" is the default name) with entries at least for ${eclipse_home} and for a directory inside your Notes data dir, here C:\Notes\Data\workspace\applications\eclipse. If they're missing, modify an existing platform or create a new one and add an "Installation"-type entry for ${eclipse_home} and a "Directory"-type one for the eclipse directory within your data dir.

Fix #2: Broken Plugins, Particularly ODA

Though V10 didn't change much when it comes to XPages, there are a few small differences. One in particular bit ODA: we had a dependency on the com.ibm.domino.commons plugin, which was in the standard Notes environment previously but is not as of V10 (though it's still present on the server). We fixed that one in the V10 release, and so you should update your ODA version if you hit this trouble. I don't think I've seen other plugins with this issue in the V10 transition, but it's a possibility if Fix #1 doesn't do it.

Fix #3: MANIFEST.MF

This one barely qualifies as a "fix", but it worked for me: if you see Designer complaining about MANIFEST.MF, you can usually beat it into submission by cleaning/rebuilding the project in question. The trouble is that Designer is, for some reason, skipping a step of the XPages compilation process, and cleaning usually kicks it into gear.

I've also seen others have success by deleting the error entry in the Errors view (which is actually a thing that you can do) and never seeing it again. I suspect that the real fix here is the same as above: during the next build, Designer creates the file properly and it goes away on its own.

The Causes

So what are the sources of these problems? The root reason is that Designer is a sprawling mountain of code, built on ancient frameworks and maintained by a diminished development team, but the immediate causes have to do with OSGi.

The first type of trouble - the target platform - most likely has to do with a change in the way Eclipse manages target platforms (look at the same prefs screen in 9.0.1 stock and you'll see it's quite different), and I suspect that there's a bug in the code that migrates between the two formats, possibly due to the dramatic age difference in the underlying Eclipse versions.

The second type of trouble - the MANIFEST.MF - is due to a behind-the-scenes switch in how Designer (and maybe the server) handles dependencies in XPages projects.

Target Platforms

The mechanism that OSGi projects - such as XPages applications - use for determining their dependencies at build time is the notion of a "Target Platform". The "target" refers to the notion that this is the platform that is expected to be available at runtime for what you're building - loosely equivalent to a basic Java classpath. An OSGi project is checked against this Target Platform to determine which classes are available based on their bundle names and versions.

This is distinct from the related concept of a "Running Platform". Designer, being based on Eclipse, is itself built on and runs using OSGi. Internally, it uses the same mechanisms that an XPages application does to determine what plugins it knows about and what services those plugins provide.

This distinction has historically been hidden from XPages developers due to the way the default Target Platform is set up, pointing at the same Running Platform it's using. So Designer itself has the core XPages plugins running, and it also exposes them to XPages applications as the Target. Similarly, the way we install XPages Libraries like ODA is to install them outright into the Designer Running Platform. This allows Designer to know about the library service provided, which it uses to populate the list of available plugins in the Xsp Properties editors.

However, as our trouble demonstrates, they're not inherently the same thing. In standalone OSGi development in Eclipse, it's often useful to have a Target Platform distinct from the Running Platform - such as the XPages environment for plugins - to ensure that you only depend on plugins that will be available at runtime. But when the two diverge in Designer, you end up with situations like this, where Designer-the-application knows about the XPages runtime and plugins and constructs an XPages project and translates XSP to Java using them, but then the compilation process with its empty Target Platform has no idea how to actually compile the generated code.

MANIFEST.MF

I've mentioned that an OSGi project "determines its dependencies" out of the Target Platform, but didn't mention the way it does that. The specific mechanism has changed over time (which is the source of our trouble), but the idea is that, in addition to the Java classes and resources, an OSGi bundle (or plugin) has a file that declares the names of the plugins it needs, including potentially a version range. So, for example, a plugin might say "I need org.apache.httpcomponents.httpclient at least version 4.5, but not 5.0 or higher". The compiler uses the Target Platform to find a matching plugin to compile the code, and the runtime environment (Domino in our case) does the same with its Running Platform when loading.
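
In MANIFEST.MF terms, that hypothetical "httpclient" requirement would be written like so (the bundle name and version range here are just for illustration):

Require-Bundle: org.apache.httpcomponents.httpclient;bundle-version="[4.5.0,5.0.0)"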

(Side note: you can also specify Java packages to include from any plugin instead of specific plugin names, but Designer does not do that, so it's not important for this purpose.)

(Other side note: this distinction comes, I believe, from Eclipse's switch from its own mechanism to OSGi in its 3.0 release, but I use "OSGi" to cover the general concept here.)

The old way to do this was in a file called "plugin.xml". If you look inside any XPages application in Package Explorer, you'll see this file and the contents will look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin class="plugin.Activator"
  id="Galatea2dVCC_2fIKSG_dev5csyncagent_nsf" name="Domino Designer"
  provider="TODO" version="1.0.0">
  <requires>
    <!--AUTOGEN-START-BUILDER: Automatically generated by null. Do not modify.-->
    <import plugin="org.eclipse.ui"/>
    <import plugin="org.eclipse.core.runtime"/>
    <import optional="true" plugin="com.ibm.commons"/>
    <import optional="true" plugin="com.ibm.commons.xml"/>
    <import optional="true" plugin="com.ibm.commons.vfs"/>
    <import optional="true" plugin="com.ibm.jscript"/>
    <import optional="true" plugin="com.ibm.designer.runtime.directory"/>
    <import optional="true" plugin="com.ibm.designer.runtime"/>
    <import optional="true" plugin="com.ibm.xsp.core"/>
    <import optional="true" plugin="com.ibm.xsp.extsn"/>
    <import optional="true" plugin="com.ibm.xsp.designer"/>
    <import optional="true" plugin="com.ibm.xsp.domino"/>
    <import optional="true" plugin="com.ibm.notes.java.api"/>
    <import optional="true" plugin="com.ibm.xsp.rcp"/>
    <import optional="true" plugin="org.openntf.domino.xsp"/>
    <!--AUTOGEN-END-BUILDER: End of automatically generated section-->
  </requires>
</plugin>

You can see it here declaring a name for the pseudo-plugin that "is" the XPages application (oddly, "Domino Designer"), a couple other metadata bits, and, most importantly, the list of required plugins. This is the list that Designer historically (and maybe still; it's not clear) uses to populate the "Plug-in Dependencies" section in the Package Explorer view. It trawls through the Target Platform, finds a matching version of each of the named plugins (the latest version, since these have no specified ranges), adds it to the list, and recursively does the same for any re-exported dependencies of those plugins. "Re-exported" isn't exposed here as a concept, but it is a distinction in normal OSGi plugins.

Designer derives its starting points here from implicit required libraries in XPages applications (all those "org.eclipse" and "com.ibm" ones above) as well as through the special mechanism of XspLibrary extension contributions from plugins installed in the Running Platform. This is why a plugin like ODA has to be installed in Designer itself: in the runtime, it asks its plugins if they have any XspLibrary classes and uses those to determine the third-party plugin to load. Here, ODA declares that its library needs org.openntf.domino.xsp, so Designer adds that and its re-exported dependencies to the Plug-in Dependencies group.
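
For reference, such an XspLibrary contribution lives in the library plugin's plugin.xml and looks something like the sketch below - the class name is illustrative rather than ODA's actual implementation class:

<extension point="com.ibm.commons.Extension">
	<service type="com.ibm.xsp.Library" class="org.example.xsp.ExampleLibrary"/>
</extension>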

With its switch to OSGi in the 3.x series circa 2005, most of the functionality of plugin.xml moved to a file called "META-INF/MANIFEST.MF". This starkly-named file is a standard part of Java, and OSGi extends it to include bundle/plugin metadata and dependency declarations. As of 9.0.1 FP10, Designer also generates one of these (or is supposed to) when assembling the XPages project. For the same project, it looks like this:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Domino Designer
Bundle-SymbolicName: Galatea2dVCC_2fIKSG_dev5csyncagent_nsf;singleton:=true
Bundle-Version: 1.0.0
Bundle-Vendor: TODO
Require-Bundle: org.eclipse.ui,
  org.eclipse.core.runtime,
  com.ibm.commons,
  com.ibm.commons.xml,
  com.ibm.commons.vfs,
  com.ibm.jscript,
  com.ibm.designer.runtime.directory,
  com.ibm.designer.runtime,
  com.ibm.xsp.core,
  com.ibm.xsp.extsn,
  com.ibm.xsp.designer,
  com.ibm.xsp.domino,
  com.ibm.notes.java.api,
  com.ibm.xsp.rcp,
  org.openntf.domino.xsp
Eclipse-LazyStart: false

You can see much of the same information (though oddly not the Activator class) here, switched to the new format. This matches what you'll work with in normal OSGi plugins. For Eclipse/Equinox-targeted plugins, like XPages libraries, plugin.xml still exists, but it's reduced to just declaring extension points and contributions, and no longer includes dependency or name information.

Eclipse had moved to full OSGi by the time of Designer's pre-FP10 basis (2008's 3.4 Ganymede), but XPages's history goes back further, so I guess that the old-style Eclipse plugin.xml route is a relic of that. For a good while, Eclipse worked with the older-style plugins without batting an eye. FP10 brought a move to 2016's Eclipse 4.6 Neon, though, and I'm guessing that Eclipse dropped the backwards compatibility somewhere in the intervening eight years, so the XPages build process had to be adapted to generate both the older plugin.xml files for backwards compatibility as well as the newer MANIFEST.MF files.

I can't tell what the cause is, but, sometimes, Designer fails to populate the contents of this file. It might have something to do with the order of the builders in the internal Eclipse project file or some inner exception that manifests as an incomplete build. Regardless, doing a project clean usually jogs Designer into doing its job.

Conclusion

The mix of layering a virtual Eclipse project over an NSF, the intricacies of OSGi, and IBM's general desire to insulate XPages developers from the black magic behind the scenes leads to any number of opportunities for bugs like these to crop up. Honestly, it's impressive that the whole thing holds together as well as it does. Even though it doesn't seem like it to look at the user-visible changes, the framework changes in FP10 were massive, and it's not at all surprising that things like this would crop up. It's just a little unfortunate that the fixes are in no way obvious unless you've been stewing in this stuff for years.

Domino 10 for Developers

  • Oct 9, 2018

So Domino 10 is upon us, marking the first time in a good while that Domino has had an honest-to-goodness version bump.

More than anything, I think V10 is about that sort of mark. Its primary role in the world is to state "Domino isn't dead" - not exactly coming from a position of strength for the platform, but it's the critical message that HCL has to sell if they're going to be viewed as anything but coroners.

Still, in addition to merely existing, V10 brings some changes that will help developers, particularly those - sadly - maintaining large legacy applications.

DQL

The addition that will have the largest immediate impact on developing codebases is, I think, DQL. I went into a bit of detail on this before and I think that that post is largely accurate, but the general gist of it is that DQL can be thought of as "database.search(...) but good", bringing practical arbitrary queries of non-FT data to Domino.

In its current form, it feels like a long-back-burnered passion project that's implemented in an effective way, bringing some of the benefits of arbitrary queries in SQL and new-era NoSQL databases without having to rewrite NIF or NSF storage.

Update: As Karsten Lehmann kindly pointed out, DQL is slated for addition to the LS/Java classes in 10.0.1, at an as-yet-unspecified time.

HTTP Methods in LotusScript

I just let out a heavy sigh after writing that header, but I get why they added these. A lot of Domino developers never left the desiccated-but-comforting embrace of LotusScript or are employed primarily to maintain Notes client apps, where using Java is possible but involves jumping over hurdles.

Network operations have been possible for a long time via OLE (on Windows) or LS2J, and I'm sure I'm not the only one who's had a "Network" LS script library sitting around for over a decade, but having baked-in methods is preferable. Moreover, neither of those mechanisms would work on iOS without a lot of additional work.

This is being billed as enabling all sorts of integrations, which I suppose is strictly true in that it's a bit easier to call HTTP methods in old code now. In practice, I think it will be mostly helpful for the little one-off situations where you have to call some web service to integrate with a product tracking app or the like.

iPad Notes Client

This definitely seems like another back-burner project that was brought to the fore in the HCL transition. They had iOS references in the Mac 64-bit C SDK years ago, and it only makes sense, since the existence of the Mac port at all meant the job was (sort of) half done. It's not out today, but it's logically tied to V10, and they've been expanding a beta over the summer.

Like the additions to LotusScript, it makes sense. I can't imagine that running existing Notes apps on an iPad will be a good experience, but it should be a cheap one, and it'll probably be good enough for at least some cases. They've intimated that there will be affordances in app design to improve the experience specifically for this client, though I don't envy the engineers who have to go in and implement those.

Node.js Support

Dubbed the "App Dev Pack", Node support will be coming in an Upgrade-Pack-like additional download, in the form of a Domino server addon to add a gRPC server combined with a domino-db Node module that I gather is designed to be familiar for Node+MongoDB stack users.

When this intention was first announced, I think that a lot of Domino developers figured it would be like XPages: a new design element or two added to the NSF, plus another runtime crammed into Domino's aging HTTP stack. The other big potential option was essentially a codification of the ExtLib DAS REST services into a wrapper package to be used in standalone Node apps.

The App Dev Pack is more the latter than the former, but the use of gRPC should make it more performant and flexible than just wrapping the existing HTTP services. I'll be curious to see how this shakes out in practice. XPages has been with us for a decade, but it still only captured a slice of the Domino development market, and it carried the advantage of being bundled right into the stack. Node is a very different beast, entirely unlike traditional Domino development, and I'm not sure how many existing Domino developers will make the transition. Ostensibly, one of the main benefits is to also attract new blood, which - well, maybe.

Having this is much better than not, and the notion of having a new RPC connection that doesn't have the local runtime requirements of NRPC is tantalizing.

Overall

Overall, this definitely feels like a very pragmatic release. Just by virtue of its existence, it covers the base of "Domino isn't dead" in a way that's much better than the older mealy-mouthed messaging of "well, we don't have specific plans to cancel it". Additionally, though, the fact that most of the developer-facing improvements are for "old world" design elements is an acknowledgement that XPages didn't capture the Domino development world (and, probably, that HCL didn't hire the XPages team). The prospect of the community crawling back into the LotusScript cradle isn't great, but there's no avoiding the fact that there are a great many developers who never had a reason to do anything different. Not many cost-cutting IT departments let their developers re-learn their entire skillset when other departments are just asking for a new button on a form.

In an alternate universe, this would have made for a fine "Domino 9.5" release, but the wringer that the 9.0.1 era put us through demanded a full major version bump. I'll be curious to see how Domino 11 and so on shape up. If the "not dead" push works and it turns Domino's fortunes in the market around at all, it would give HCL room to turn it into a real platform again. That's a big "if", since it's a lot easier to get existing Domino developers excited than it is to get IT purchasers to sign the licensing checks, but time will tell.

SNTT: Designer Target Platform

  • Sep 27, 2018

When working on an XPages project, my development environment is generally set up like so:

  1. Eclipse running in the Mac environment editing Maven-structured plugins and ODPs
  2. Domino running in a local Windows VM, set to run plugins out of the Mac Eclipse workspace
  3. Designer running in the same Windows VM to compile the ODP and work with legacy elements
  4. Firefox DE running in the Mac environment

This has proven to be a fairly comfortable setup, particularly ever since Serdar added the ability to use a remote workspace to the XPages SDK. When I make a change in the plugin, I only need to restart HTTP on Domino and all is well.

But there's been one major annoyance: if I change a method or class that I use in in-NSF Java, I have to install the updated plugins and restart Designer, which is a big disruption to my flow. Even if I were using Eclipse on Windows with the old method of running Designer out of Eclipse (if that even still works), it'd still involve a restart.

Today, though, I realized that I've been doing it the dumb way all along. The reason is that I've been conflating the two aspects of an XPages Library plugin when it comes to Designer.

The first aspect is to tell Designer-the-IDE that an XPages Library named, say, org.example.XspLibrary is available for use in XPages applications, and that it's associated with the plugin org.example.plugin. This is provided by the plugin being installed into the running Designer environment, and absolutely needs a restart when it changes. Designer uses this information to compose the OSGi pseudo-project that makes up the NSF in the workspace (by adding it to the plugin.xml historically and the MANIFEST.MF in 9.0.1 FP10+).

The second aspect is that the plugin is then available in the Target Platform. The Target Platform is an OSGi-ism basically meaning the OSGi runtime's view of the world - in Eclipse, it means the plugins that Eclipse knows about and which can be referenced by projects, such as the NSF. This doesn't have to be related at all to the running platform - and, in fact, it's common in OSGi development to have an entirely-distinct target platform to properly represent a server or other runtime environment.

The reason that these aspects are commingled is that Designer's default Target Platform is based on its own runtime environment and any installed plugins. We, as XPages developers, thus generally don't have to think too much about it. "Install the plugin in Designer" is the single step that handles registering the library and making its classes available.

However, the platform is just a setting in the preferences, and it usually looks something like this:

Default Designer Target Platform

Since it's just a setting, and one that is commonly modified in Eclipse, there's nothing stopping us from modifying it. That's where I realized I was doing it the inefficient way all along. So I modified the active platform to point to the target/site folder of my update site Maven project:

Modified Designer Target Platform

With that change, Designer will see both the installed version as well as the latest results of the Maven build. So now, I can do a Maven build and then, in this dialog, click "Reload..." to get Designer to notice the changes. Once I do, voilà - the new methods/classes show up and I don't have to restart and lose my workspace state.

App Dev After CollabSphere 2018

  • Jul 29, 2018

In recent years, MWLUG/CollabSphere has tended to be a good time to get a lay of the land for what IBM - and now HCL - intends for their app dev strategy. Recent Connects weren’t too heavy on announcements of major import for Domino developers, and any news to come out tends to do so in the months leading up to summer.

This year, we’ve had time to digest the implications of the HCL transfer, get a feel for how they intend to handle the product, and generally get a good bead on their app-dev vision. What they’ve said so far this year is clear: LotusScript for old apps on mobile platforms and Node.js for new development (or new developers). As far as XPages, I believe that the most time that it got at the conference was in my session, which was about what to do after XPages.

LotusScript

Though I’ve certainly not hidden how painful the prospect of enhancements to LotusScript is to me, I have to admit that adding a few capabilities for REST data service access makes strategic sense for the platform. Though XPages made a significant mark on Domino app dev, it never pushed aside the classic style, and every move that IBM made for app modernization since then seemed to exist exclusively in the span of the sentence announcing it.

So HCL announced early this year that they planned to port the classic Notes client first to iOS and then later to Android and WebGL+WebAssembly. Adding any kind of Java to this plan - XPages, LS2J, etc. - would present some technical hurdles, and so it makes workload sense to focus on the languages that have runtimes in the C core.

Apps run this way won’t be good, but there’s some logic to the tack of targeting customers for whom “modernization” only really means “we want our same old apps to run offline on new OSes”. Their plan to run on phones also necessitates some more-dramatic changes to the tooling, so it’s possible that they have larger changes in mind - or at least we’ll see a return of the “hide on mobile” checkboxes in Designer.

Node.js

The big HCL push for Node.js seems to me to be a way to get a lot of bang for the buck: by positioning it as the new way to write apps, they’re both (potentially) making Domino more appealing to those not already on the platform and guiding existing developers to a platform for which IBM and HCL are not responsible. Though the domino-db driver is no small technical feat - and it looks like they’ve done a good job making it both fast and native-feeling in Node - it’s a much, much smaller footprint than XPages, which put IBM on the hook for maintaining an entire app-dev stack and UI toolkit with limited outside assistance.

I do think that it’s smart to write a Node.js DB driver - even if it doesn’t bring in an influx of new blood, it provides a legitimate app-dev story and Node is a top-notch platform. The gRPC stack also provides an entryway for future hooks and development without the assumptions of NRPC.

Java

Java development on Domino is in a weird place. Domino 10 doesn’t have anything directly for XPages/OSGi developers, though we’ll get access to DGQF via the Database class. I’ve heard whispers that they’re starting to plan more for Domino 11, but that’s largely conjecture at this point. Certainly, HCL has made it clear that their heart isn’t in it, and honestly I get why. Since XPages has been in essentially maintenance mode since 9.0.1 or earlier, it’s aged itself out of contention for modern app dev. It wouldn’t be impossible to drag it forward to something respectable, but then they’d still have another development environment exclusive to Domino to maintain.

I’m not sure what the best thing to do with the stack is. Though XPages didn’t bring all Domino developers to it, it did bring a significant chunk, and a lot of people have spent upwards of a decade of their life with the toolkit. For my part, I think it makes a lot of sense to move to “normal” Java/Jakarta EE development, which provides the possibility of salvaging Java-side code, though it leaves XSP and SSJS in the lurch. It’s hard to make a good financial case for either significantly upgrading the platform or at least undoing the tight coupling with the Domino server that it accrued over the years, though I’ll admit it’s sort of fun to think about.

DGQF and DQL as I Understand Them

  • Jul 26, 2018

At CollabSphere this year, the big information coming from HCL was detail about the Domino General Query Facility (DGQF) and its associated language, Domino Query Language (DQL). They originally announced this a few weeks ago, but it was good to have had some time to let the dust settle and to see the specifics.

Because it was discussed alongside the domino-db Node.js package and because it's one of the first real new ways we'll interact with data in a Domino DB in a while, it's a bit difficult to identify just what it is and what it is not. Here's how I understood it:

What DGQF Is

DGQF is, at least conceptually, a "meta" layer on top of the existing NIF indexing facility. It doesn't provide a core change to the actual storage of documents, but instead treats existing view indexes as (roughly) analogous to both SQL table indexes and SQL views. It trawls through the design elements of a database, analyzing their selection formulae and columns, to use applicable ones as implicit indexes and to allow access to arbitrary collections within queries.

Implicit Indexes

Other than the design collection and the "optimize document table" option in a DB, an NSF doesn't really have much in the way of indexing note contents by default. So, if you have a query asking for all documents where FirstName is Bob, a program has no choice but to look through every document for that key/value match. If, however, you create a view that has a column showing the FirstName field, you now have a much-faster index you can use. It's this sort of view that DGQF picks up on implicitly, using them to accelerate queries: views showing all documents, with either a default sort or a "click to sort" column that displays a field directly (and not a formula).

Access to Arbitrary Collection Data

For those qualifying views, plus others, you can reference a view by name or alias in a query and compare against a column value by its programmatic name (often the field name for simple columns, or something like $4 by default for formula columns).

"In" clauses

Additionally, you can use view (and folder, I think) names to refine queries to documents that are in one or more of these collections, equivalent to an "in" subquery or view reference in SQL.

What DQL Is

In short, DQL is the human-readable query language used to access DGQF. It's reasonably SQL-like (though it is not SQL) and tends to look like FirstName='Bob' and in all ('Managers', 'Active Users'). This is the language you will use, and so "DGQF" and "DQL" will generally refer to the same thing in practice.
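
As a slightly fuller sketch (the view and group names here are hypothetical), a query can also compare against a view column using the 'view name'.ColumnName syntax:

FirstName = 'Bob' and 'People By Name'.LastName > 'M' and in ('Managers', 'Active Users')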

In practice, this is implemented as a new method on the Database class in each high-level language supported by Domino, plus a Node-styled variant in domino-db.

What DGQF and DQL Are Not

Since DGQF sits on top of NIF (and probably the FT index eventually), it's not a core change to data storage. Essentially, the same abilities and limits of Domino remain as they are with respect to this.

Additionally, DQL is, I believe, a query language only: it does not provide a mechanism for creating, modifying, or deleting existing documents. Instead, it is essentially a super-powered and much-smarter version of database.search(…): you can use it to find documents and the processing of them is up to your program.

That last point was a bit muddied by its pairing with the domino-db Node.js package: the Node.js package provides bulk operations that are paired with DQL queries, but that is a function of that library specifically, not DQL or DGQF.

Why It's Cool

Though it's not a reworking of the core NSF, what DGQF does do is abstract away a lot of the manual looping and lookups that we've always had to do, and it allows the system to optimize and do things more efficiently than when written out procedurally. So, while there's theoretically nothing that DGQF does that we couldn't do before, it allows us to do those things with far, far less code and with automatic optimization.

This brings Domino something that SQL servers have enjoyed for a long time. With a SQL statement, you can analyze the trouble spots of a slow-running query and add indexes to improve the speed, with the tooling helping to explain what's going on. DGQF+DQL brings this along for the ride: when you execute a DQL query, you have the option to dump out this "explain" output to see what specifically the facility did, which views it used, and how long each step took. So, if you have a long-running query, you can look to see if you can add an "index" view to automatically speed it up without having to change your code. And, since the language is an abstraction over the task of querying and not the sort of "burned in" process of a normal getNextDocument loop, it can be optimized and short-circuited by the underlying system without the developer having to know the decades of built-up knowledge of how to efficiently search a DB.

All in all, this is a very welcome addition to the server, and it certainly should improve a lot of common tasks.

Reforming the Blog in Darwino, Part 4

  • Jul 20, 2018

Last time, I went over my switch in tack for how I'm making the new version of my blog, and my overall focus on picking an interesting stack of JEE technologies. In this post, I'm going to start diving into the implementation of the UI, though I think that it will make sense to dedicate two posts to it.

The biggest decision I made with the UI side of this app is that I didn't want to make a client-side JS app. There's a reason they're so ascendant, and I find development with React or Stencil pretty enjoyable, but I wanted to go a different route here for a few reasons:

  • For a blog, a CSJS app is wildly overkill, and, in fact, would require extra work to fulfill one of the basic requirements of a blog, which is being web-crawler friendly.
  • I want to see how svelte I can make the client payload.
  • Skipping a JS framework (and a CSS one) is a great way to brush up on what plain HTML and CSS are capable of nowadays.
  • Unlike a typical Darwino app, my only target is a full-on Java web server, so I'm not held back on the Java side by the capabilities, say, of Dalvik on Android 4.
  • Part of me misses the simplicity of my early PHP days, albeit not the language.

The Java Side

I decided to go with the MVC 1.0 draft spec because it lets me write extremely focused code. Here is the controller for the home page:

package controller;

import javax.inject.Inject;
import javax.mvc.Models;
import javax.mvc.annotation.Controller;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import model.PostRepository;

@Path("/")
@Controller
public class HomeController {
	@Inject
	Models models;
	
	@Inject
	PostRepository posts;
	
	@GET
	public String get() {
		models.put("posts", posts.homeList());
		
		return "home.jsp";
	}
}

Naturally, there's a lot of magic going on behind the scenes - there's tons of heavy lifting going on here by JAX-RS, MVC, CDI, JNoSQL, and Darwino - but that's the point. All the other components are off doing their jobs in their areas, while the code that provides the UI doesn't have to care about the specifics.

Things can get more complicated on the pages that actually have some functionality to them, but the code remains pleasantly focused. Take the handler for deleting posts:

@DELETE
@Path("{postId}")
@RolesAllowed("admin")
public String delete(@PathParam("postId") String postId) {
	Post post = posts.findByPostId(postId).orElseThrow(() -> new IllegalArgumentException("Unable to find post matching ID " + postId));
	posts.deleteById(post.getId());
	return "redirect:posts";
}

This adds another level of magic in the form of javax.annotation.security.RolesAllowed, but it's more of the good kind: even with no knowledge of the underlying frameworks, it's pretty clear what every bit of code is doing here. It's a refreshing bit of that Rails simplicity, just more compile-time-safe and much uglier.

Even beyond the minimal code is the cleanliness that this brings to the structure of the application: other than the img, css, and js paths, all of the routing within the application is done care of JAX-RS and MVC. It's not beholden to the folder structure in the project or to a Domino-style implicit app router.

JSP

JSP has been the prototypical Java HTML language for about 20 years, and it's had a rough upbringing. The early versions committed the PHP/XPages sin of encouraging you to put business logic right on the page, and it even still has the typical Java problem that it's tricky to find advice about using it that uses technologies added since 2005.

Still, when used properly, it can be a nice, clean templating language. Again from the main home page:

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<t:layout>
	<c:forEach items="${posts}" var="post">
		<t:post value="${post}"/>
	</c:forEach>
</t:layout>

For an XPages developer, this is extremely comfortable. It's also very refreshingly elemental: there's no server-side persistence of the page, so everything is "load-time bound" and, with just HTML tags and core JSTL tags, nothing ends up on the page that you don't explicitly put there.
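
For the curious, the t:layout tag above maps to a tag file under WEB-INF/tags. A minimal, hypothetical layout.tag could look like this:

<%@ tag description="Common page layout" pageEncoding="UTF-8" %>
<!DOCTYPE html>
<html>
	<head>
		<title>Blog</title>
	</head>
	<body>
		<jsp:doBody/>
	</body>
</html>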

Ozark, the MVC implementation, also supports using JSF "Facelets" for the view portion, but JSP suits the task just fine.

HTML + CSS

It'd been far too long since I last really sat down and looked at what baseline HTML and CSS are like - in particular, I'd watched the rise of CSS Flexbox and Grid from afar, and never gave them a shot. Using components that generate their own HTML and pre-existing CSS frameworks to target with class names is all well and good, but it does leave you a bit disconnected from the fundamentals.

So, for this iteration, I tossed aside the very-nice Bootstrap framework I've been using, dusted off one of my old hand-built ones, and got to translating it into CSS Grid. This cut down on the page size enormously: I had already eschewed Dojo by not using XPages, but this now also meant that I could ditch the core bootstrap.css, jQuery, and any jQuery plugins.

Beyond CSS Grid, have you seen how nice HTML forms are nowadays? Just looking at this post reveals how much is built in in the way of validation and different input types, even before you write a line of JavaScript.

Turbolinks

Having such a trimmed-down UI means that pages already load extremely quickly, but I figured this was also a perfect chance to try out a bit of clever tech from the team at Basecamp: Turbolinks. Turbolinks is a JS file that you bring into your app which then transparently takes over your in-app links to minimize the amount of rendering you have to do. Since the surrounding boilerplate of the app usually doesn't change between requests, it can figure out the "diff" between old and new and just replace the body. It's essentially like partial refreshes without the server knowing anything about it.

It's still technically inefficient to have the server render and transfer surrounding page elements that are just going to be discarded anyway. But, on the other hand, skipping that means that I don't have to write JavaScript handlers myself, use a full CSJS app framework, or keep state on the server side. The server just keeps doing what it does with a fully context-less request and the browser sorts it out. Basecamp's programmers are masters at the targeted deployment of kludges for maximum benefit.


In the next (final?) post in the series, I'll finish up with my "low-JS" experience and other lessons learned from this project.

Reforming the Blog in Darwino, Part 3

  • Jul 18, 2018

A good while back, I created a project structure for reforming my blog here in Darwino, but, as happens with low-priority side projects, it withered on the vine, untouched since then. Beyond just the "cobbler's children" aspect to it, I also lost steam due to a couple technology paths I initially headed down.

The first was basing the UI on Angular, which I've never really enjoyed working with. I'm sure I could have ended up with a decent result with it, but Angular always rubbed me the wrong way. And not just Angular: for a dead-simple UI like this, a full JS UI is just weird overkill.

The second was off in the other direction: I initially tried cramming a Rails app in the tree, which could be made to work, but it introduced so many weird edge cases outside of the problem at hand. That alone isn't the end of the world, but not much of what I'd have to solve to make that work would be transferable elsewhere any time soon, so it'd end up a real time sink.

So, taking what I've learned since and the projects that I've been working on, I've decided to take another swing at it. Before I get into the implementation side, it will be useful to go over the technologies I did choose for the new form.

Java/Jakarta EE

I've recently become kind of enamored with the modern form of the Jakarta EE stack, and so I decided to use this as an opportunity to really dive in to what a blue-ocean small by-the-books Java app looks like nowadays.

JEE got a well-deserved bad rap over the years for its configuration complexity and general impenetrable-ness, but I've been very pleased to find that those tides have largely receded. It's all still there if you want it, but a fresh new app primarily consists of decorating a handful of Java classes with declarative annotations.

JEE consists of a series of individual specs, and building an app involves choosing which ones you want to use, plus (depending on which you choose) picking your app server target.

Tomcat

I originally gave a shot to adding enough OSGi metadata and bundles to target Domino, but decided quickly that it was just not worth it. The HTTP/servlet stack in Domino is just so old that, even if I got everything bound together, I'd still be fighting the platform every step of the way.

The better route was to put it aside and just run a modern Java app server. I went down the list of GlassFish, Payara, WebSphere Liberty (the nearest miss), TomEE, and WildFly, but each one ended up having some problem with either the dependencies I wanted or with their Eclipse integration. I ended up settling on good ol' reliable Tomcat. Tomcat itself isn't actually a JEE server, but it's kind of like a Raspberry Pi: it gives you the baseline for a Java servlet engine, and then you can cobble together your own EE stack on top of it by explicitly bringing in implementations. Though the final .war file is far less svelte this way, I found that this build-your-own method results in the lowest chance of being held back by the platform currently.

As an aside, Sven Hasselbach has been writing a very interesting series on running Jetty on top of the Domino JVM to achieve a similar end, albeit with Spring.

Darwino

For all the same reasons as when I set out on this journey originally, I'm using Darwino for the baseline. This lets me replicate in my existing blog data smoothly while getting the advantages of a superior backing database. I'm not making use of mobile clients or most Darwino services with this, but the baseline is nonetheless a step up, and fits in with a JEE app like a glove.

JNoSQL

I brought in the JNoSQL Darwino driver I wrote a little while ago to handle the model layer. JNoSQL is essentially JPA but reformed for NoSQL access - no cruft, no relational/NoSQL impedance mismatch, and designed to fit with current JEE technologies.
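
As a taste, a repository under the 2018-era org.jnosql.artemis API looks roughly like this sketch - the entity and finder method here are hypothetical:

import java.util.Optional;
import org.jnosql.artemis.Repository;

// Post is assumed to be a JNoSQL-annotated entity class
public interface PostRepository extends Repository<Post, String> {
	// query derived from the method name, JPA-style
	Optional<Post> findByPostId(String postId);
}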

CDI

CDI is one such technology, and it's a very interesting one to work with. The whole "dependency injection" realm is a little fraught and, if my Eclipse UI error reporter is any indication, prone to bizarre errors, but the core concept is good and very useful. I've gotten it into the swing of using it both as the "managed bean" provider for the front end as well as the general service provider glue for the app. It still takes some getting used to, and the learning curve falls prey to a similar problem as when I was learning Maven: something about learning how it works makes you forget what it was like to not know, and so a lot of the answers online assume way more knowledge than a neophyte has.
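
To illustrate that dual role, a single annotated class can act as both an injectable service and an EL-visible bean - a minimal sketch with hypothetical names:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Named;

@ApplicationScoped // one instance shared across the app
@Named("appInfo")  // also available to EL as #{appInfo}
public class AppInfoBean {
	public String getVersion() {
		return "1.0.0";
	}
}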

Bean Validation

I've long been a fan of the Java bean validation API, and it's a clean fit here too: JNoSQL picks up on the presence of Hibernate Validator without configuration beyond the dependency and it just works. No muss, no fuss.
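
For example, a model property might be constrained like this hypothetical sketch, with Hibernate Validator enforcing the rules when the entity is saved:

import javax.validation.constraints.NotBlank;
import javax.validation.constraints.Size;

public class Post {
	@NotBlank        // must be non-null and contain non-whitespace text
	@Size(max = 255) // keep titles to a reasonable length
	private String title;

	// getters and setters...
}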

JAX-RS + MVC Spec

JAX-RS is at this point familiar territory for a lot of Domino developers, but I decided to use it as the underpinnings of the whole UI, in tandem with a draft framework called MVC 1.0. The latter's generic name doesn't really give much detail, but it's essentially a spec that enhances JAX-RS entities with knowledge of HTML templating frameworks, allowing you to write a very clear app structure. It's not a server-state-based framework like JSF, but rather a bit "closer to the metal", where you deal directly with the HTTP method cycle.

As I'll go more into in the "UI" post, it's been surprisingly refreshing to get back to basics in this way - JSF/XPages is often a bit conceptually easier to work with (at first) and client-side JS frameworks have some REST+JSON purity to them, but just "this server-rendered HTML page with no server state is everything you need" feels really good sometimes.

Admittedly, the MVC spec itself is in a weird place. It was originally a candidate for inclusion in Java EE 8, but was dropped in the final runup. It's possible that this will prove to be a kiss of death, but the spec is so small but functional that I don't feel bad about taking the risk of building an app on it.


That about covers the technology stack. When I get around to writing the next post, I'll go into some of the specifics about how I decided to set up the UI, which has been a fun experiment of its own. In the mean time, the active repository is up at:

https://github.com/jesse-gallagher/frostillic.us-Blog/tree/develop/frostillicus-blog