Lessons From Writing a JNoSQL Driver

  • Dec 30, 2017

The other day, I decided to start up a side project to write an app for my Stars Without Number game in Darwino. Like back when I wrote a forum/raiding app for my WoW guild, I like to use this kind of opportunity to try new technologies and flesh out my skills in existing ones.

One such tech I've had my eye on for a bit is JNoSQL, which is a framework for integrating with NoSQL databases in Java. It's along the lines of Hibernate OGM, but intended to avoid the pitfalls of the relational/NoSQL mismatch that came with trying to adapt JPA directly to NoSQL databases. JNoSQL promised to be much easier to implement for a new database, so I decided to give it a shot.

JNoSQL

JNoSQL is split into two paired components, cleverly named Diana (the driver side) and Artemis (the model/integration side). The task of writing a driver for a new database is pretty well-contained: pick the database type(s) you want to implement (out of key/value, column, document, and graph) and implement about half a dozen interfaces. This is in stark contrast to when I took a swing at writing a Hibernate OGM driver, where the task was significantly more daunting. The final result is only ten Java files, with a chunk of them being utility classes for code organization.

It's a young project - young enough that the best version to run right now is 0.0.4-SNAPSHOT - but it functions well and it's been taken under the wing of the Eclipse Foundation, which builds some confidence.

Implementation

Though the task was small, there were still a couple initial hurdles to getting going.

To begin with, I decided to start with the Couchbase driver - this certainly made the overall task easier, since Couchbase's semantics are very similar to Darwino's, but it also meant that I had to be wary of which parts of the codebase were really about implementing a Diana driver and which were Couchbase-isms. Fortunately, this was much easier than the equivalent work when I adapted the CouchDB Hibernate OGM driver, which was a sprawling codebase by comparison.

More significantly, though, it's always tough coming in to modify a codebase written by a single person or small team and learning as you go. The structure of the code is clean, but not quite my normal style (in part because Domino kept me from diving into Java 8 streams for so long), and I also had to ramp up quickly on the internal concepts of Diana. Fortunately, this was mostly easy, since the document-DB driver scaffolding is purpose-built, the hooks are straightforward and the query semantics were extremely easy to adapt. The largest impediment was getting used to the use of the term "Document", which internally refers to a key/value pair, while "DocumentEntity" is closer to the expected meaning.
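
To illustrate the naming, here's a rough sketch - the factory methods below are how I understand Diana's API to work, though the exact signatures may differ in the 0.0.4 snapshot:

import java.util.Arrays;
import org.jnosql.diana.api.document.Document;
import org.jnosql.diana.api.document.DocumentEntity;

public class DianaNamingExample {
	public static DocumentEntity examplePost() {
		// A Diana "Document" is a single key/value pair...
		Document title = Document.of("title", "Lessons From Writing a JNoSQL Driver");
		Document posted = Document.of("posted", "2017-12-30");

		// ...while a "DocumentEntity" is the named collection of those pairs -
		// much closer to what a document database would call a "document"
		return DocumentEntity.of("Post", Arrays.asList(title, posted));
	}
}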

Like the core implementation, the test suite I adapted from Couchbase was also pleasantly svelte, covering the bases without being an onerous nightmare to convert. Indeed, most of the code I added to it was the Darwino app scaffolding just for the test runtime.

Putting It Into Practice

Once the driver was written, I was hit by a bit of a personal curveball when I went to implement some actual data models. The model side, Artemis, is heavily wrapped up with CDI, which is a Java EE thing that, as I gather, handles managed beans, separation of implementation, and variable injection. This is a pretty normal thing for Java EE developers, but XPages's "don't call it Java EE" environment didn't introduce me to this aspect. As such, the fact that the documentation just kind of casually tossed around CDI terms and annotations threw me for a bit of a loop trying to determine what was required and what was just an idiom.

I eventually determined that I could use the reference implementation, Weld, without necessarily going whole-hog on Java-EE-everything. I'm a bit wary of what this bodes for whether I'll be able to use JNoSQL in Darwino on mobile devices, but I'll cross that bridge when I come to it. Once I got a bit of a handle on what Weld is and how to use it in unit tests (hint: make sure you have beans.xml files!), I was able to start writing my model objects and testing them.
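
For reference, the shape such a test ends up taking is roughly this - a minimal sketch using Weld's SE API, with "PostRepository" standing in as a hypothetical model bean from the app:

import static org.junit.Assert.assertNotNull;

import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;
import org.junit.Test;

public class ModelBootstrapTest {
	@Test
	public void testResolveRepository() {
		// Requires a META-INF/beans.xml on the test classpath so Weld treats it as a bean archive
		Weld weld = new Weld();
		WeldContainer container = weld.initialize();
		try {
			// "PostRepository" is a stand-in for one of the app's model beans
			PostRepository posts = container.select(PostRepository.class).get();
			assertNotNull(posts);
		} finally {
			weld.shutdown();
		}
	}
}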

Doing It Again

The fact that the bulk of my implementation work ended up being on the app side with CDI goes to show that the Diana driver model really shines. It got me thinking about how difficult it would be in the future, say, to write a driver for Domino. There'd be some hurdles - Domino's lack of nested objects and antiquated querying mechanisms would need replacing - but the core task wouldn't be too bad. I don't know if I'd have a need for it, but it's nice to keep in mind as a potential future small project.

All in all, I'm optimistic about the use of this. I'd love for Darwino to integrate as smoothly as possible into whatever standard environments it can, and this is one more step in that direction. I'll know as my side app takes shape how much this ingrains itself into my actual work.

State of my Workspace 2017

  • Dec 28, 2017

Since the end of the year is a good time for recaps, I figured it could be fun and useful to look back to see how my development workspace and habits changed over the course of the year. One of the recurring pleasant side effects of working on Darwino is that it provides opportunities to dive into a wide array of tools and techniques, though my normal XPages development improved a bit too.

Eclipse Still Reigns

My primary IDE of choice remains Eclipse on the Mac. I've tried to like IntelliJ - I really have - and using Android Studio has given me a bit of appreciation for it, but the combination of inertia, its features, and the fact that IntelliJ feels more alien on the Mac has kept me with Eclipse. I do keep checking each milestone's release notes to see if they've improved the process of working with Tycho-based projects, though.

Text Editor Brawl

I've been a TextMate user since about when it came out, but the slowdown of development has taken its toll, and my eyes have started to wander. Its main replacement so far has been Sublime Text, which essentially feels like a snappier and more-modern TextMate, and it's suited me well enough - I'm using it to write this post, for example. In recent weeks, though, I've also finally started giving Visual Studio Code a trial run for a React app I'm writing. Unsurprisingly, I'm finding it quite pleasant, and it's making a good play to take over as my default in the future.

The Markdown Editor Search May Have Reached Its Conclusion

The Darwino user documentation is written with Gitbook, which uses Markdown, and so I've been looking for a while for a comfortable environment for writing it. Each of the programmer text editors has good syntax support, and TextMate did some inline formatting, but the ones I use seem to either lack an inline preview pane or have one that doesn't work quite like I'd like. I used Haroopad for a while, but development petered out, and it was time to find another. I've recently found Typora - I'd thought at first that its semi-WYSIWYG editing in a single pane wasn't what I wanted, but it surprised me: it's outstanding. I have a few minor gripes, but I think I'm sold (or I will be once they release a for-money version).

Issue Trackers Remain Weird

I've gone through many iterations of how to track to-dos and issues for projects, and I've gone all-in on using integrated issue trackers in Git repos when available. The experience is never perfect, though: I don't like using browser tabs for these, but none of the native apps do quite what I want. One of the kickers is that not all of my projects are on GitHub, so either I need to check in two places or use something that bridges the gap.

Eclipse has Mylyn, which integrates with both GitHub and Bitbucket, but it's always a little janky about it. It does the job, though, and for a good while my solution was to have a separate Eclipse installation geared entirely toward issue tracking - no IDE functionality enabled, just a single window with the list of tasks. That worked kind of well, and I may return to it, but it never felt quite right.

For now, I've settled back on splitting up the two - checking Bitbucket via the web and using Ship as a mostly-native client for GitHub. The Ship UI is excellent enough to overcome my reticence about the split workflow - the handling of milestones, Up Next, and whatnot make it a joy to use.

Aging Hardware Bristling With Drives

My main development machine remains the Late-2014 iMac 5K, which is nervously eyeing the iMac Pro page, but I've augmented it with a refurbished ThunderBay 4 mini to house my workspaces and VMs. I'll likely eventually convince myself to buy an iMac Pro, but for now this machine is still doing its job nicely.

Similarly, my gaming PC is still chugging along in its crazytown new case, and I've recently had a wild hair to cram it full of hard drives and group them with Storage Spaces. It's turned out to be a pretty nice NAS-alike and a capable workhorse for Plex serving, VR, and general gaming.

On the Chopping Block

I have a spate of apps that I use that I would love to trim down or replace if I can do so. Prime among those is Slack; while I like Slack for chat rooms better than the old standby of Skype, I'm hardly the first to notice what a hog it is, considering it's just a couple web pages. I've tried out Franz and Rambox a bit to tame the disaster zone of running Slack, Skype, Discord, and Microsoft Teams simultaneously, but they had some showstopping problems of one stripe or another. Still, I don't think the current setup can last another year.

SourceTree continues to be... kind of there. It's taken to crashing randomly every day or so, which isn't a huge impediment to working, but the notification dialog may as well say "hey, maybe it's time to give Git Tower a shot".

Parallels is still doing its job as my VM environment, but the advertising for each successive version gets more and more desperate, and it's really been putting me off. Maybe it's time to make the switch to VMware, especially since it seems like it took the performance crown in recent versions. Ideally, I wouldn't have to keep Windows running all the time at all, but for now it's a necessity.

First Steps to Code Coverage Analysis in Domino Plugins

  • Nov 9, 2017

I'm always interested in getting the computer to tell me how to tell it what to do more successfully, and, to further that pursuit, I've started taking an interest in code coverage.

If you're not familiar with the term, "code coverage" refers to reporting on which lines of code were actually executed during runtime, most commonly in association with unit tests. Eclipse (and presumably other IDEs) has support for this, and I've decided to give it a shot.
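
As a contrived illustration (hypothetical classes, purely to show what the report measures), a test like the one below would mark the first branch of the if as covered while flagging the else branch as missed:

// Discounts.java
public class Discounts {
	public static double discountFor(int quantity) {
		if(quantity >= 10) {
			return 0.1; // executed by the test below, so it reports as covered
		} else {
			return 0; // never executed, so it reports as missed
		}
	}
}

// DiscountsTest.java
public class DiscountsTest {
	@org.junit.Test
	public void testBulkDiscount() {
		org.junit.Assert.assertEquals(0.1, Discounts.discountFor(20), 0.001);
	}
}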

Since I'm starting this out in the context of Domino plugins, there are more wrinkles than in most tutorials. Namely, the test suites I've written run exclusively through Maven instead of the Eclipse UI due to all the Notes environment setup, so I can't just use the normal UI tools to gather the data. Fortunately, Eclipse's EclEmma will work just fine with the output from a Maven project, as long as you configure it properly. I looked around for a while to find the right combination of tools to use, but it ended up being fairly simple to configure basic output that can be consumed in Eclipse's Coverage view.

There are two main additions. First, add the jacoco-maven-plugin to your root project's project.build.plugins block:

<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.7.8</version>
	<executions>
		<execution>
			<goals>
				<goal>prepare-agent</goal>
			</goals>
		</execution>
	</executions>
</plugin>

In normal cases, that would suffice. However, since the test configuration I have for Notes overrides the argLine property of the Tycho test runner, there's another step - add the tycho.testArgLine property manually into those blocks, such as in the Windows profile:

<profile>
	<activation>
		<os>
			<family>Windows</family>
		</os>
		<property>
			<name>notes-program</name>
		</property>
	</activation>

	<build>
		<plugins>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-surefire-plugin</artifactId>
				<version>${tycho-version}</version>
 
				<configuration>
					<skip>false</skip>
 
					<argLine>${tycho.testArgLine} -Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
					<environmentVariables>
						<PATH>${notes-program}${path.separator}${env.PATH}</PATH>
					</environmentVariables>
				</configuration>
			</plugin>
		</plugins>
	</build>
</profile>

Once that's configured, running the test suite via Maven will create a new file in the target folder of the test plugin: jacoco.exec. This file can then be consumed in Eclipse by opening the "Coverage" view:

Eclipse's Show View window

In that view, right-click and choose "Import Session..." and point to the data file. Click "Next" and check the projects and source folders from your workspace that you're interested in analyzing. When you click "Finish", it'll do two things. First, it'll fill the Coverage view with statistics from your run:

Code Coverage stats

(We have a lot of work to do fleshing out our test suites for this one)

Secondly, it'll start highlighting your code to show you what code is executed, which branches are only partially covered, and which lines are skipped entirely. For example (ignore the sickly color scheme - I need to work on that):

Code Coverage example

This shows how several of the if branches are only tested in one direction, while the "Faces" block is skipped entirely. That also shows some of the trouble with testing XPages-run code: the Tycho environment can't reproduce the XPages environment fully, so some branches aren't testable in that way. I haven't looked into the possibility of gathering similar data from JUnit for XPages, but perhaps that's possible.

For now, though, this will have to do. And, like with these other "code improvement" techniques I've integrated lately, there's a lot of potential tedium - juggling when to write a test to cover some code that will obviously always work just to improve the highlighting vs. just focusing on the low-hanging fruit - but I expect that it will be a nice addition to my workflow over time.

New Small Project: p2sitexml-maven-plugin

  • Oct 26, 2017

It's no secret that I have a love/hate relationship with developing for OSGi platforms with Maven. The giant divide between "all-in" Tycho projects (which limit your options with normal Maven features) and trying to bolt on OSGi support in an otherwise-normal project creates an array of problems big and small.

Some of those hurdles would be difficult to bridge, such as any automated tests that want to test the proper functioning of OSGi services. However, not all projects need that - in the case of Darwino, for example, deployment to Domino is a secondary consideration in the Maven project, and so a Darwino app doesn't use Tycho for its packaging or testing. By jumping through a few hoops, we've gotten those projects to the point where they can emit a p2-formatted update site for use in OSGi, and that can be imported into a Domino NSF-based update site.

There's a minor caveat, though: because those update sites don't know about p2 formatting, you can't use the "Import Update Site" action, instead having to use "Import Features", which leaves the imported features in the "(Not Categorized)" group. This isn't a huge problem, but it's one that's easily fixed, so I wrote a small tool to do just that.

I've created a small open-source project called p2sitexml-maven-plugin, the purpose of which is to generate the site.xml file expected by Notes from a p2 repository generated by other means, such as the p2-maven-plugin. This can be included in a Maven build like so:

...
	<build>
		<plugins>
			...
			<plugin>
				<groupId>org.darwino</groupId>
				<artifactId>p2sitexml-maven-plugin</artifactId>
				<version>1.0.0</version>
				<executions>
					<execution>
						<goals>
							<goal>generate-site-xml</goal>
						</goals>
						<configuration>
							<category>Some Category</category>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
...

Right now, the plugin isn't in Maven Central, but is in OpenNTF's Maven server. You can add that to an active profile in your settings.xml file like so:

...
	<pluginRepositories>
		<pluginRepository>
			<id>artifactory.openntf.org</id>
			<name>artifactory.openntf.org</name>
			<url>https://artifactory.openntf.org/openntf</url>
		</pluginRepository>
	</pluginRepositories>
...

It isn't a world-changing thing, but this should at least make the task of targeting Domino with non-Tycho Maven projects a little easier.

Side-Project Monday Evening

  • Jun 27, 2017

Yesterday, in one of my various Slack chats, the topic of JShell - the Java 9 REPL - came up in the context of how useful it would be for XPages development. Being able to open up a "shell" into a running XPages application could be really useful in a lot of ways - and I think that the XPages Debug Toolbar has an SSJS-evaluate feature that would do something like this.

Still, it got me looking around a bit, and I ran across Groovysh Server, which is a project that combines Apache's SSH server with Groovy's REPL to make an interactive remote-login shell running in the context of a given JRE. It even comes with a Spring Framework binding, showing its utility for this sort of thing.

So I decided to see how easy it would be to adapt this into an XPages context, and the answer is "pretty easy". I created a new project called XPages Groovy Shell to do just this. It's an XSP Library that you can enable in an application to, when it's loaded (i.e. when someone visits it via the web), spawn an SSH server on the specified port to allow logins and evaluation of Groovy code using the app's ClassLoader.
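
Conceptually, the core of it amounts to something like this - a minimal sketch of the idea using Apache SSHD, not the project's actual code, and a real version would also plug in a shell factory that launches Groovysh against the app's ClassLoader:

import java.io.IOException;

import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;

public class EmbeddedShellServer {
	public static SshServer startShell(int port) throws IOException {
		// Stand up an SSH server inside the running JVM...
		SshServer sshd = SshServer.setUpDefaultServer();
		sshd.setPort(port);
		sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider());
		// ...accepting any credentials here purely for demonstration purposes
		sshd.setPasswordAuthenticator((user, password, session) -> true);
		sshd.start();
		return sshd;
	}
}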

Now, I don't expect this to be a real project necessarily - I have a lot of non-tinkering work on my plate - but it can serve as an interesting proof of concept. Still, as it is, it's not too far from being expanded to having some proper user authentication, and, with some mechanism to "break into" the Faces environment to work with existing bean instances, it could be really something. As it stands, take a look - it's not a lot of code, and the concepts could be useful elsewhere too.

Including a Headless DDE Build in a Maven Tree

  • Mar 14, 2017

Most of my Domino projects nowadays have two components: a suite of OSGi plugins/features and at least one NSF. Historically, I've kept the NSF part separate from the OSGi plugin projects - I'll keep the ODP in the repo, but then usually also keep a recent "build" made by copying the database from my dev server, and then include that built version in the result using the Maven Assembly plugin. This works, but it's not quite ideal: part of the benefit of having a Maven project being automatically built is that I can have a consistent, neutral environment doing the compilation, without reliance on my local Designer. Fortunately, Designer has a "headless" mode to build NSFs in a scripted way, and Christian Güdemann has done the legwork of building that into a Maven plugin.

It should come as no surprise, however, that this is a fiddly process, and I ran into a couple subtle problems when configuring my build.

Setting Up Designer

The first step is to tell Designer that you want to allow this use, which is done by setting DESIGNER_AUTO_ENABLED=true in your notes.ini. The second step is to configure Notes to use an ID file with no password: because Designer is going to be launched and quit automatically several times, you can't just leave it running and have it use an open session. This is a perfect opportunity to spin up a "template" ID file, distinct from your developer ID, if you haven't done so already. Also, uh... make sure that this user has at least Designer rights to the NSF it's constructing. I ran into a bit of logical trouble with that at first.

The last step was something I didn't realize until late: keep your Designer installation clean of the plugins you're going to be auto-installing. Ideally, Designer will be essentially a fresh install, with no plugins added, and then the Maven definition will list and install all dependencies. If it's not clean, you may run into trouble where Designer emits errors about the plugin conflicting with the installed version.

Setting Up The Maven Environment

Before getting to the actual Maven project files, there's some machine-specific information to set, which is best done with properties in your ~/.m2/settings.xml, much like the notes-platform and notes-program properties. In keeping with that convention, I named them as such:

<properties>
	<notes-platform>file:///C:/Users/jesse/Java/XPages</notes-platform>
	<notes-program>C:\Program Files (x86)\IBM\Notes</notes-program>
	<notes-designer>C:\Program Files (x86)\IBM\Notes\designer.exe</notes-designer>
	<notes-data>C:\Program Files (x86)\IBM\Notes\Data</notes-data>
</properties>

Deploying Features And Initial Root Project Config

The first problem came in setting up the automatic deployment of the feature. The Maven plugin lets you specify features that you want added to and then removed from your Designer installation. In this case, the feature and update site are within the same Maven tree being built, which adds a wrinkle or two.

The first is that, since the specific version number of the feature changes every build due to the qualifier, I had to set up the root project to export the qualifier value that Tycho plans to use. This is done using the tycho-packaging-plugin, which a standard Maven project will have loaded in the root project pom. The main change is to explicitly tell it to run the build-qualifier goal early on, which has the side effect of contributing a couple properties to the rest of the build:

<plugin>
	<groupId>org.eclipse.tycho</groupId>
	<artifactId>tycho-packaging-plugin</artifactId>
	<version>${tycho-version}</version>
	<configuration>
		<strictVersions>false</strictVersions>
	</configuration>

	<!-- Contribute the "buildQualifier" property to the environment -->
	<executions>
		<execution>
			<goals>
				<goal>build-qualifier</goal>
			</goals>
			<phase>validate</phase>
		</execution>
	</executions>
</plugin>

Once that's running, we'll have the ${qualifiedVersion} property available to use down the line, containing the actual version generated during the build.

The second hurdle is figuring out the URL to use to point to the update site. I did this with a property in the root project pom, alongside setting two properties used by the Headless Designer plugin:

<properties>
	<!-- snip -->
	
	<!-- Headless Designer properties -->
	<designer.feature.url>${project.baseUri}../../releng/com.example.some.updatesite/target/site</designer.feature.url>
	<ddehd.designerexec>${notes-designer}</ddehd.designerexec>
	<ddehd.notesdata>${notes-data}</ddehd.notesdata>
</properties>

Much like with OSGi dependency repositories, this path is recomputed per-project. The NSF projects are housed within an nsf folder in my tree, so I include the ../.. to move up to the root project, before descending back down into the update site. Note that this requires that the update site project be built earlier in the build than the NSF.

Finally, bringing these together, I added a block for the common settings for the plugin to the pluginManagement section of the root project pom:

<plugin>
	<groupId>org.openntf.maven</groupId>
	<artifactId>headlessdesigner-maven-plugin</artifactId>
	<version>1.3.0</version>
	<extensions>true</extensions>
	<configuration>
		<features>
			<feature>
				<featureId>com.example.some.feature</featureId>
				<url>${designer.feature.url}</url>
				<version>${qualifiedVersion}</version>
			</feature>
		</features>
	</configuration>
</plugin>

Configuring The NSF Project

With most aspects configured higher up in the project tree, the actual NSF project pom is fairly slim:

<?xml version="1.0"?>
<project
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
	xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
	<modelVersion>4.0.0</modelVersion>
	<parent>
        <groupId>com.example</groupId>
        <artifactId>some-plugin</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../..</relativePath>
	</parent>
	<artifactId>nsf-somensf</artifactId>
	
	<packaging>domino-nsf</packaging>
	
	<properties>
		<ddehd.odpdirectory>${basedir}\..\..\..\nsf\nsf-somensf</ddehd.odpdirectory>
		<ddehd.targetdbname>somensf.ntf</ddehd.targetdbname>
	</properties>
	
	<build>
		<plugins>
			<plugin>
				<groupId>org.openntf.maven</groupId>
				<artifactId>headlessdesigner-maven-plugin</artifactId>
				<extensions>true</extensions>
			</plugin>
		</plugins>
	</build>
</project>

The properties block sets two more properties automatically read by the Headless Designer Maven plugin. In this case, the path is an artifact of the history of the Git repository: since the ODP was added to the repo outside of the Maven tree, the path backs up and out of the whole thing, and then to another folder with a confusingly-similar name. In this case, it avoids a lot of developer hassle, but a properly-configured project would have the ODP in a subfolder within the Maven project (maybe src/main/odp if you want to be all idiomatic about it).

Note that the ddehd.targetdbname property is the NSF name used both for the intermediate build NSF (which is in the Notes data directory) and for the destination file in the project's target directory, so make sure it doesn't conflict with any existing DBs.

Bringing It All Together

Once you have the NSF built, you can include it in an Assembly down the line, leading to a nicely-packaged update site + NSF pair. This section is something of an "IOU" at the moment, though - I have an idea for how I want to do this, but I haven't actually implemented it yet. Once I do, I'll write a followup post.

In the mean time, having a build server build the NSF can be a useful check that everything is working correctly, and is a perfect stepping-stone towards a complete solution. Ideally, in addition to packaging up the result, a full system would also deploy the NSF and plugins to a Domino server and run some UI/service tests against it. However, that's a whole ball of wax that I haven't touched on myself (and is also likely prohibitive for licensing reasons in most cases anyway). For now, it's a step in the right direction.

That Java Thing, Part 17: My Current XPages Plug-in Dev Environment

  • Feb 26, 2017

It's been a while since I started this series on Java development, but I've been meaning for a bit now to crack it back open to discuss my current development setup for plug-ins, since it's changed a bit.

The biggest change is that, thanks to Serdar's work on the latest XPages SDK release, I now have Domino running plug-ins from my OS X Eclipse workspace. Previously, I switched between either running on the Mac and doing manual builds or slumming it in Eclipse in Windows. Having just the main Eclipse environment on the Mac is a surprising boost in developer happiness.

The other main change I've made is to rationalize my target platform configuration a bit. In the early parts of this series, I talked about adding the Update Site for Build Management to the active Target Platform and going from there. I still basically do this, but I'm a little more deliberate about it now. Instead of adding to the running platform, I now tend to create another platform just to avoid the temptation to use plug-ins that are from the surrounding modern Eclipse environment (this only really applies in my workspaces where I don't also have actual-Eclipse plug-in projects).

The fullest form of this occurs in one of my projects that has a private-only repo, which allows me to stash the artifacts I can't distribute publicly. In that case, I have a number of library dependencies beyond just the core XPages site, and I took the approach of writing a target platform definition file and storing it in the root project, with relative references to the packaged dependencies. With this route, I or another developer can just open the platform file and set it as the target platform - that will tell Eclipse about everything it needs. To do this, I right-clicked on the project, chose "New" → "Other..." and then "Target Definition" under "Plug-in Development":

Target Definition

Within that file, I used Eclipse variable references to point to the packaged dependencies. In this repo, there is a folder named "osgi-deps" next to the root Maven project, so I wanted to tell Eclipse to start at the root project, go up one level, and then delve down into there for each folder. I added "directory" type entries for each one:

Target Definition Entries

The reference syntax is ${workspace_loc:some-project-name}../osgi-deps/Whatever. workspace_loc resolves the absolute filesystem path of the named project within the workspace - since I don't know where the workspace will be, but I DO know the name of the project, this gets me a useful starting point. Each of those entries points to the root of a p2-format update site for the project. This setup will tell Eclipse everything it needs.

Unfortunately, this is a spot where Maven (or, more specifically, Tycho) adds a couple caveats: not only does Tycho not allow the use of "directory" type entries in a target platform file like this (meaning it can't be simply re-used), but it also expects repositories it points to to have p2 metadata and not just "plugins" and "features" folders or even a site.xml. So there's a bit of conversion involved. The good news is that Eclipse comes with a tool that will upgrade old-style update sites to p2 in-place; the bad news is that it's completely non-obvious. I have a script that I run to convert each new release of the Extension Library to this format, and I adapt it for each dependency I add:

java -jar
	/Applications/Eclipse/Eclipse.app/Contents/Eclipse/plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
	-application org.eclipse.equinox.p2.publisher.UpdateSitePublisher
	-metadataRepository file:///full/path/to/osgi-deps/ExtLib
	-artifactRepository file:///full/path/to/osgi-deps/ExtLib
	-source /full/path/to/osgi-deps/ExtLib/
	-compress -publishArtifacts

Running this for each directory will create the artifacts.jar and content.jar files Tycho needs to read the directories as repositories. The next step is to add these repositories to the root project pom so they can be resolved at build time. To start with, I create a <properties> entry in the pom to contain the base path for each folder:

<osgi-deps-path>${project.baseUri}../../../osgi-deps</osgi-deps-path>

There may be a better way to do this, but the extra "../.." in there is because this property is re-resolved for each project, and so "project.baseUri" becomes relative to each plugin, not the root project. Following the sort of best practice approach to Tycho layouts, the sub-modules in this project are in "bundles", "features", "releng", and "tests" folders, so the path needs to hop up an extra layer. With that, I add <repositories> entries for each in the same root pom:

<repositories>
    <repository>
        <id>notes</id>
        <layout>p2</layout>
        <url>${osgi-deps-path}/XPages</url>
    </repository>
    <repository>
        <id>oda</id>
        <layout>p2</layout>
        <url>${osgi-deps-path}/ODA</url>
    </repository>
    <repository>
        <id>extlib</id>
        <layout>p2</layout>
        <url>${osgi-deps-path}/ExtLib</url>
    </repository>
	<repository>
		<id>junit-xsp</id>
		<layout>p2</layout>
		<url>${osgi-deps-path}/org.openntf.junit.xsp.updatesite</url>
	</repository>
	<repository>
		<id>bazaar</id>
		<layout>p2</layout>
		<url>${osgi-deps-path}/XPagesBazaar</url>
	</repository>
	<repository>
		<id>eclipse-platform</id>
		<url>http://download.eclipse.org/releases/neon/</url>
		<layout>p2</layout>
	</repository>
</repositories>

The last entry is only needed if you have extra build-time dependencies to resolve - I use it to resolve JUnit 4.x, which for Eclipse I just tossed unstructured into a "plugins" folder in the "Misc" folder, without p2 metadata.

Though parts of this are annoyingly fiddly, it falls under the category of "worth it in the end" - after some initial trial and error, my target platform is more consistent and easier to share among multiple developers and automated build servers.

Slides From My Connect 2017 Presentations

  • Feb 24, 2017

At this year's Connect, Philippe Riand and I co-presented two sessions: one on ways to integrate your apps into the Connections UI and one on Darwino's role for Domino developers. I've uploaded the slides to SlideShare:

DEV-1430 - IBM Connections Integration: Exploring the Long List of Options

DEV-1467 - Give a New Life to Your Notes/Domino Applications and Leverage IBM Bluemix, Watson, & Connections (effectively, "the Darwino session")

The State of Domino App Dev Post-Connect-2017

  • Feb 24, 2017

I'm en route back from this year's IBM Connect in San Francisco, and this plane ride is giving me a good chance to chew over the implications for Domino developers.

First off, I'll put my bias in this matter right up front: Darwino, which I've been working on and discussing quite a bit, is one of the three "chosen" vendors for app enhancement/modernization/what-have-you. So, while this post isn't going to be about Darwino specifically, it's certainly pertinent for me. In any case, I'm aiming to speak exclusively as me personally here.

This event was the fated hour for the "app modernization" story promised over the course of the last year. In general, I'd summarize the pieces we have to pick up as (put as neutrally as possible):

  • The promised feature packs are coming along apace. The big-ticket items for the next two remain Java 8 (and a full refresh of the surrounding Java infrastructure following), exposing ID Vault and user-specific doc encryption to the lsxbe classes and XPages, an expansion of the ExtLib's DAS to support more PIM actions, and then misc. improvements (doc-level summary limit increase, some new @functions, and so forth).
  • A current version of the 9.0.1 ExtLib will be folded into the main product in FP8, with the implication that that sort of thing may continue to happen. This brings some long-existing features like the Bootstrap renderkit and JDBC data sources into official support.
  • The implication is that Feature Packs will bring features more rapidly than a normal release schedule would.
  • Open-sourcing the UI components of XPages is still on the table.
  • The recently-released OpenNTF project SmartNSF is an encouraged way to write REST services in an NSF and is a candidate for inclusion in FP9 and, sooner, ExtLibX.
  • For modernization/mobile needs, IBM is providing a tool from Panagenda to analyze your existing apps and recommends the products from Aveedo, Sapho, and Darwino.

So... okay. Aside from Java 8 (which is a "rising tide lifts all boats" improvement), it seems like the focus on the additions to Domino is to encourage apps that use Domino rather than run on Domino. The additions to DAS are useful if you use Domino as your mail/calendar/RnR platform and want to integrate it with your other activities. SmartNSF smoothes the process of writing customized services to deal with NSF data in a more structured way than the raw DAS data service. The three encouraged "modernization" vendors each connect to or replicate data from (presumably old) Notes apps to expose it in a new UI, in two cases in order to use a "form builder"-type tool to make an easy app.

I see this as a codifying of the message from MWLUG: "learn something other than XPages". The improvements to the Java stack and various smaller changes will keep XPages apps running, but the focus is clearly not there. Nor is there an implication that there's a big "apps on Domino" revamp beyond the secondary effects of the OSGi update. So I think it's reasonable to consider XPages supported primarily in the "maintenance mode" sense. That stings, but it is what it is.

If you're currently working in XPages, there's no need to stop immediately or anything. You should, however, guide your development in the direction of being more adaptable elsewhere: heavier focus on writing REST services, much lighter focus on "Domino/XPages-isms" like embedding business logic right on a page with SSJS, and, if possible, getting used to toolchains like building OSGi libraries. Additionally, even if it's not immediately useful, I implore you: try out other environments. Spend a weekend with an Android or iOS tutorial, give Angular/Vue.js/React a shot in a test app, and so forth. The more you can learn another toolkit - any toolkit - the more you'll be comfortable with what's different elsewhere and what's the same.

It's always been important to do these things, but now it's required. No excuses - get out of your comfort zone.

As I have a chance, I'll be expanding on what Darwino's role is in all this, and shortly I'll be posting the slides from the sessions Philippe and I presented, one of which covered this topic. In the mean time, we're heading towards the weekend - this could be a perfect time to kick back and learn about something new. Maybe take a look at Swift if you haven't before. You don't have to form all of your future strategies right now - just learn a bit more every day.

Reforming the Blog in Darwino, Part 2

  • Feb 16, 2017

During the run-up to Connect next week, I turned my gaze back to my indefinite-term project of reforming this blog in Darwino.

When last I left it publicly, I had set up replication between a copy of the database and a Darwino app. After that post, I did a bit of tinkering in the direction of building a (J)Ruby on Rails front-end for it, next to the "j2ee" project. That side effort may bear fruit in time (as I recall, I got the embedded web app serving default pages, but didn't implement any blog-specific logic), but for now I decided to go for the "just get something running" route.

For that, the most expedient route was to write an Angular app using Darwino's stock document REST APIs. The (now unavailable) Bootstrap theme I use here came packaged with an Angular 1.x example and the Darwino demo apps are largely Angular 1.x as well, so most of the work was adapting what was there.

Unrelated to the front end, there was one change I realized I needed to make. In a fit of "psh, it's an XPages app; I don't need that old crap!", I structured the comments in the blog such that they're related to their post via a "PostID" field with the post UNID, not as actual response documents. While that would work just fine in the new form, this is a good opportunity to clean up the data a bit. Since I haven't (at least not yet) implemented a specific method in the DSL to say "this field is the real parent ID", I modified the Darwino adapter script to set the parent ID on outgoing data after normal conversion:

form('Comment') {
	field 'CommentID'
	field 'PostID'
	field '$$Creator', flags:[NAMES, MULTIPLE]
	field 'AuthorName'
	field 'AuthorEmailAddress'
	field 'AuthorURL'
	field 'Remote_Addr'
	field 'HTTP_User_Agent'
	field 'HTTP_Referer'
	field 'Posted', type:DATETIME
	field 'Body', type:RICHTEXT
  
	// Set the parent ID from the "PostID" field
  	events postConvertDominoToDarwino: { jsonHolder ->
      	jsonHolder.jsonObject.put("_parentid", jsonHolder.jsonObject.get("postid"))
  	}
}

The jsonHolder is a process object that contains the converted JSON to be sent to Darwino as well as a collection of the document's attachments and inline images. So, by setting the special "_parentid" field before the result is sent to the destination database, that value is used as the parent ID reference in the Darwino DB.

The other non-Angular addition I made was Gravatar support. Currently, I handle this via a one-off utility class in the XPages app that spits out Gravatar images in an EL-compatible way. However, Darwino has a more idiomatic route: its built-in user directory/authentication system is extensible in a couple of ways, and one of those ways is to layer additional data providers on top of the primary directory.

For development purposes, my "directory" is just a static list of users specified in the darwino-beans.xml file, while it will presumably eventually point to my Domino server via LDAP to maintain consistent access. The basic static user bean looks like this:

<bean type="darwino/userdir" name="static" class="com.darwino.config.user.UserDirStatic" alias="demo,default">
	<property name="allowUnknownUsers">true</property>
	<list name="providers">
		<bean class="com.darwino.social.gravatar.GravatarUserProvider">
			<property name="imageSize">128</property>
		</bean>
	</list>
	<list name="users">
		<bean class=".User">
			<property name="dn">cn=Jesse,o=darwino</property>
			<property name="cn">Jesse</property>
			<property name="uid">jesse</property>
			<property name="email">jesse@darwino.com</property>
			<property name="password">secrets!</property>
			<list name="roles">
				<value>admin</value>
			</list>
			<list name="groups">
				<value>darwino</value>
			</list>
		</bean>
	</list>
</bean>

(please don't tell anyone my super-secret password)

The full syntax for Darwino beans is its own subject, but this instantiates a directory using the UserDirStatic class with a couple names - the "default" at the end means it'll be picked up by the stock configuration of a new app. The users are specified as instances of a nested class User with LDAP-like properties.

Separate from the specifics of the static user list, though, are the first two child elements: one tells the app that this directory should be consulted further even when a user doesn't exist in it, and the second instantiates a Gravatar user provider (which is in Darwino core). This user provider in turn tries to determine the user's email address - if the address is provided by the underlying directory, it uses that; otherwise, it tries the DN. These fallback behaviors come into play with comments: those users definitely wouldn't exist in the directory, but they DO have the email addresses entered during posting.

With this configuration in place, I can make image references like this:

<img src="$darwino-social/users/users/cn%3Djesse%2Co%3Ddarwino/content/photo" />

That runs through Darwino's stock social service to provide whatever image it can find from the provider - which in this case is a proxied-in Gravatar image.
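
Under the hood, the Gravatar lookup is just the standard recipe: an image URL keyed by the MD5 hash of the trimmed, lowercased email address. The one-off XPages utility I mentioned above boils down to essentially this (a simplified, hypothetical sketch rather than the exact class):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class GravatarUtil {
	public static String imageUrl(String email, int size) {
		try {
			// Gravatar images are keyed by the MD5 hash of the trimmed, lowercased address
			MessageDigest md5 = MessageDigest.getInstance("MD5");
			byte[] digest = md5.digest(email.trim().toLowerCase().getBytes(StandardCharsets.UTF_8));
			StringBuilder hash = new StringBuilder();
			for(byte b : digest) {
				hash.append(String.format("%02x", b));
			}
			return "https://www.gravatar.com/avatar/" + hash + "?s=" + size;
		} catch(NoSuchAlgorithmException e) {
			throw new RuntimeException(e);
		}
	}
}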

So all that leaves now is the implementation of the front end. However, since this post is long enough and the code is currently an embarrassing mess, I'm going to punt on that for now and save it for later.

Connect 2017 Final Stretch

  • Feb 15, 2017

IBM Connect 2017 is less than a week away, and I've been furiously prepping for a couple parts of what is promising to be a busy conference.

On Monday, before the official kickoff of the conference, OpenNTF is co-hosting a Hackathon, where attendees will work on one of several potential projects. The goal is to learn about new development methods, work with new people, and hopefully kick off some useful open-source projects to boot.

During the conference proper, I'll be presenting two sessions, both alongside Philippe Riand:

On Wednesday at 10 AM, we'll be discussing IBM Connections integration - specifically, the numerous hooks provided by Connections locally and on the cloud for integrating your application as seamlessly as possible. That will be "IBM Connections Integration: Exploring the Long List of Options" in room 2006.

Then, on Thursday at 9 AM, we'll be discussing Darwino and its role integrating with and extending Domino applications. This should be a particularly-interesting one, covering what Darwino is, how its bidirectional replication with Domino works, and some example scenarios for reporting on and bringing forward Domino apps. That will be "Give a New Life to Your Notes/Domino Applications and Leverage IBM Bluemix, Watson and Connections" in room 2000.

Even with many of our usual community friends unable to make it to the conference or having moved on to other platforms, Connect is shaping up to be a worthwhile conference, and I'm very much looking forward to seeing everyone who is there!

December Is Self-Aggrandizement Month, Apparently

  • Dec 17, 2016

It's been a busy month (couple of years, really), but the last few weeks in particular have involved a couple of minor announcements that I'm quite appreciative of.

On the 14th, IBM announced the 2017 class of IBM Champions for ICS, and they included me on the list. It's been a joy to be considered a Champion for the last few years, and 2017 promises to be an interesting year to continue that in our slice of the development world.

Mere days later, IBM sent out notifications about Connect 2017 sessions, and one of the abstracts I'm a co-presenter for was approved. I'll be presenting DEV-1430: IBM Connections Integration: Exploring the Long List of Options with Philippe Riand.

And finally, I've taken up the daunting task of assuming Peter Tanner's mantle as IP Manager at OpenNTF. Peter's work has been outstanding over the years (I've always appreciated the prodding to get my licensing ducks in a row), and I hope to be up to the task of replacing him when he retires at the end of the year.

The New Podcast is a Real Thing: WTF Tech Episode 1

  • Oct 31, 2016

As intimated at the end of the last This Week in Lotus, Stuart, Darren, and I have launched a new podcast in a similar vein: WTF Tech. Since we're all in the IBM sphere, that'll be the natural starting point for the topics we cover, but it's not going to be IBM-focused as such. For this first episode, we lucked out and had a couple-weeks period chock full of announcements, so we had plenty of material. Give it a listen!

Cramming Rails Into A Maven Tree

  • Sep 26, 2016

Because I'm me, one of the paths I'm investigating for my long-term blog-reformation project is seeing if I can get Ruby on Rails in there. I've been carrying a torch for the language and framework for forever, and so it'd be good to actually write a real thing in it for once.

This has been proving to be a very interesting thing to try to do well. Fortunately, the basics of "run Rails in a Java server" have been well worked out: the JRuby variant of the language is top-notch and the adorably-named Warbler project will take a Rails app and turn it into a JEE-style WAR file or self-hosting JAR. That still leaves, though, a few big tasks, in order of ascending difficulty:

  1. Cramming a Warbled Rails app into a Maven build
  2. Getting the Rails app to see the Java resources from the other parts of the tree
  3. Initializing Darwino tooling alongside Rails
  4. Making this pleasant to work with

So far, I've managed to get at least a "first draft" answer to the first three tasks.

Cramming a Warbled Rails app into a Maven build

When you get to the task of trying to do something unusual in Maven, the ideal case is that there will be a nice Maven plugin that will just do the job for you. Along those lines, I found a few things, ranging from a tool that will assist with making sure your Gems (Ruby dependencies) are handled nicely to one that outright proxies Gems into Maven dependencies. However, none that I found quite did the job, and so I fell back to the ol'-reliable second option: just shell out to the command line. That's not ideal (for reasons I'll get to below), but it works.

I ended up putting the Rails app into src/main/ruby/blog and then using the exec-maven-plugin to do the Warbling for me:

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>exec-maven-plugin</artifactId>
	<executions>
		<execution>
			<id>create-final-war</id>
			<phase>package</phase>
			<goals>
				<goal>exec</goal>
			</goals>
			<configuration>
				<executable>/bin/sh</executable>
				<workingDirectory>.</workingDirectory>
				<arguments>
					<argument>-c</argument>
					<argument>
						rm -f src/main/ruby/blog/*.war
						cd src/main/ruby/blog &amp;&amp; \
						jruby -S bundle install &amp;&amp; \
						jruby -S warble executable war &amp;&amp; \
						cd ../../../.. &amp;&amp;
						mv src/main/ruby/blog/*.war target/${project.build.finalName}.war
					</argument>
				</arguments>
			</configuration>
		</execution>
	</executions>
</plugin>

This amounts to a shell script that clears out any previous build, makes sure the dependencies are up to date (jruby -S bundle install), creates a self-hosting WAR file (jruby -S warble executable war), and then copies that result to the name expected by normal Maven WAR packaging. This basically works.

Getting the Rails app to see the Java resources from the other parts of the tree

Now that I had a properly-building WAR, my next task was to bring in any dependency JARs and in-project Java classes for use at runtime. Fortunately, this is a job that Warbler can handle, by way of its config/warble.rb file. In the root of the blog project, I ran warble config to generate this stub file. Like almost everything else in Rails, the configuration is done in Ruby, and this file is a large block of Ruby code examples, mostly commented out. I adjusted the lines to copy in the dependency JARs (which Maven, in a WAR package, will have previously copied for me) and to copy in any "loose" Java classes I may have alongside Rails in the current project:

config.java_libs += FileList["../../../../target/frostillicus-blog/WEB-INF/lib/*.jar"]
config.java_classes = FileList["../../../../target/frostillicus-blog/WEB-INF/classes/**/*"]
config.pathmaps.java_classes << "%{../../../../target/frostillicus-blog/WEB-INF/classes/,}p"

These lines use a helper class named FileList to glob the appropriate files from the project's target directory and copy them in. In the case of the loose classes, I also had to figure out how to clean up the path names - otherwise, it created a bizarre directory structure within the WAR.

With those lines in place, Warbler set up everything nicely - I could reference code from any of the dependencies, the other modules, or anything from the src/main/java folder within the same module.

Initializing Darwino tooling alongside Rails

The last step I got working is related to the previous one, but has a couple wrinkles. In addition to just having the Darwino classes available on the classpath, a Darwino application has an initialization lifecycle, done in a JEE app via filters defined in web.xml. It may also have some support files for defining beans and properties, which aren't covered by the same process as above. To start with the latter, I needed to figure out how I was going to get the files included in the "normal" JEE project's WEB-INF folder copied into the Rails WAR without destroying anything else. Fortunately, the same config file had a hook for that:

config.webinf_files += FileList["../../webapp/WEB-INF/**/*"] - ["../../webapp/WEB-INF/web.xml"]
config.pathmaps.webinf_files = ["%{../../webapp/WEB-INF/,}p"]

This one is basically the same as above, but with an important subtraction: I want to make sure to not copy the normal app's web.xml file in. If that's copied in, then Warbler will respectfully leave it alone, which would mean that the Rails portion of the app won't be launched. I'm handling that specially, so I used Ruby's "array subtraction" shorthand to exclude it.

So that left modifying the web.xml itself, in order to serve two masters. Both Darwino and Rails expect certain filters to happen, and so I copied Warbler's web.xml.erb template into the config directory for modification. .erb is the designation for "embedded Ruby", and it's a technique Ruby tools frequently use for sprinkling a bit of Ruby logic into non-Ruby files, with a result that's similar to PHP and other full-powered templating languages. The resultant file is essentially a mix of the stock file created by Darwino Studio and the Warbler one, with some of the Darwino additions commented out in favor of the Rails stack.

Making this pleasant to work with

This final part is going to be the crux of it. Right now, the development process is a little cumbersome: the Rails app is essentially its own little universe, only fused with the surrounding Java code by the packaging process. That means that, even if I got a great Rails IDE, it wouldn't necessarily know anything about the surrounding Java code (unless they're smarter than I'd think). More importantly, the change/view-live loop is lengthy, since I have to make a change in the app and then re-run the Maven build and re-launch the embedded server. I lose the advantages both of Eclipse's built-in run-on-Tomcat capabilities as well as the normal Rails self-hosting hot-code-replace capabilities.

Fortunately, at least for now, the awkwardness of this toolchain may be primarily related to my lack of knowledge. If I can find a way to automate the Warbling inside Eclipse, that would go a tremendous way to making the whole thing a mostly-smooth experience. One potential route to this would be to create a Maven plugin to handle the conversion, and then include an m2e adapter to get it to conform to Eclipse's expectations. That would be a tremendous boon: not only would it be smoother to launch, but it would potentially gain the benefit of referencing workspace projects directly, lessening the need to worry about Maven installation. That would be a good chunk of work, but it's an area I'd like to dive into more eventually anyway.

In the mean time, the latest state of the conversion is up on GitHub for anyone curious:

https://github.com/jesse-gallagher/frostillic.us-Blog

Quick Post: Maven-izing the XSP Repo

  • Sep 17, 2016

This post follows in my tradition of extremely-narrow-use-case guides, but perhaps this will come in handy in some situations nonetheless.

Specifically, a while back, I wrote a script that "Maven-izes" the XPages artifacts, as provided by IBM's Update Site for Build Management. This may seem a bit counter-intuitive at first, since the entire point of that download is to be able to compile using Maven, but there's a catch to it: the repository is still in Eclipse ("P2") format, which requires that you use Tycho in your project. That's fine enough in most cases - since Domino-targeted projects are generally purely OSGi, it makes sense to have the full OSGi stack that Tycho provides. However, in a case where Domino is only one of many supported platforms, the restrictions that Tycho imposes on your project can be burdensome.

So, for those uses, I wrote a JRuby script that reads through the P2 site as downloaded and extracted from OpenNTF and generates best-it-can Maven artifacts out of each plugin. It tries to maintain the plugin names, some metadata (vendor, version, etc.), and dependency hierarchy, and the results seem pretty reliable, at least for the purpose of getting a non-Tycho bundle with XSP references to compile. This isn't necessarily a route you'd want to take in all cases (since you don't get the benefits of normal OSGi resolution and services in your compilation), but may make sense sometimes. In any event, if it's helpful, here you go:

https://github.com/jesse-gallagher/Miscellany/blob/master/UpdateSiteConversion/convert.rb