Rewriting The OpenNTF Site With Jakarta EE: UI

Jun 27, 2022, 3:06 PM

In what may be the last in this series for a bit, I'll talk about the current approach I'm taking for the UI for the new OpenNTF web site. This post will also tread ground I've covered before, when talking about the Jakarta MVC framework and JSP, but it never hurts to reinforce the pertinent aspects.

MVC

The entrypoint for the UI is Jakarta MVC, which is a framework that sits on top of JAX-RS. Unlike JSF or XPages, it leaves most app-structure duties to other components. This is due both to its young age (JSF predates and often gave rise to several things we've discussed so far) and its intent. It's "action-based", where you define an endpoint that takes an incoming HTTP request and produces a response, and generally won't have any server-side UI state. This is as opposed to JSF/XPages, where the core concept is the page you're working with and the page state generally exists across multiple requests.

Your starting point with MVC is a JAX-RS REST service marked with @Controller:

package webapp.controller;

import java.text.MessageFormat;

import bean.EncoderBean;
import jakarta.inject.Inject;
import jakarta.mvc.Controller;
import jakarta.mvc.Models;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.home.Page;

@Path("/pages")
public class PagesController {
    
    @Inject
    Models models;
    
    @Inject
    Page.Repository pageRepository;
    
    @Inject
    EncoderBean encoderBean;

    @Path("{pageId}")
    @GET
    @Produces(MediaType.TEXT_HTML)
    @Controller
    public String get(@PathParam("pageId") String pageId) {
        String key = encoderBean.cleanPageId(pageId);
        Page page = pageRepository.findBySubject(key)
            .orElseThrow(() -> new NotFoundException(MessageFormat.format("Unable to find page for ID: {0}", key)));
        models.put("page", page); //$NON-NLS-1$
        return "page.jsp"; //$NON-NLS-1$
    }
}

In the NSF, this will respond to requests like /foo.nsf/xsp/app/pages/Some_Page_Name. Most of what is going on here is the same sort of thing we saw with normal REST services: the @Path, @GET, @Produces, and @PathParam are all normal JAX-RS, while @Inject uses the same CDI scaffolding I talked about in the last post.

MVC adds two things here: @Inject Models models and @Controller.

The Models object is conceptually a Map that houses variables that you can populate to be accessible via EL on the rendered page. You can think of it like viewScope or requestScope in XPages, populated in something like the beforePageLoad phase. Here, I use the Models object to store the Page object I look up with JNoSQL.

The @Controller annotation marks a method or a class as participating in the MVC lifecycle. When placed on a class, it applies to all methods on the class, while placing it on a method specifically allows you to mix MVC and "normal" REST resources in the same class. Doing that would be useful if you want to, for example, provide HTML responses to browsers and JSON responses to API clients at the same resource URL.

When a resource method is marked for MVC use, it can return a string that represents either a page to render or a redirection in the form "redirect:some/resource". Here, it's hard-coded to use "page.jsp", but in another situation it could programmatically switch between different pages based on the content of the request or state of the app.
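As a rough, JDK-only sketch of that idea (the view names and conditions here are hypothetical, not from the real app), the selection logic amounts to returning different strings from the controller method:

```java
public class ViewSelection {
    // Hypothetical sketch: an MVC controller method's return value picks the
    // view. It can be a JSP name or a "redirect:" instruction; the names and
    // conditions here are illustrative only.
    static String chooseView(boolean pageMissing, boolean wantsPrintLayout) {
        if (pageMissing) {
            return "redirect:pages"; // bounce the client back to a listing resource
        }
        return wantsPrintLayout ? "page-print.jsp" : "page.jsp";
    }
}
```

In the real controller, the inputs would come from the request or app state rather than boolean parameters, but the mechanism is the same: whatever string you return, MVC resolves it as a view or redirect.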

While this looks fairly clean on its own, it's important to bear in mind both the strengths and weaknesses of this approach. I think it will work here, as it does for my blog, because the OpenNTF site isn't heavy on interactive forms. When dealing with forms in MVC, you'll have to have another endpoint to listen for @POST (or other verbs with a shim), process that request from scratch, and return a new page. For example, from the XPages JEE example app:

@Path("create")
@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Controller
public String createPerson(
        @FormParam("firstName") @NotEmpty String firstName,
        @FormParam("lastName") String lastName,
        @FormParam("birthday") String birthday,
        @FormParam("favoriteTime") String favoriteTime,
        @FormParam("added") String added,
        @FormParam("customProperty") String customProperty
) {
    Person person = new Person();
    composePerson(person, firstName, lastName, birthday, favoriteTime, added, customProperty);
    
    personRepository.save(person);
    return "redirect:nosql/list";
}

That's already fiddlier than the XPages version, where you'd bind fields right to bean/document properties, and it gets potentially more complicated from there. In general, the more form-based your app is, the better a fit XPages/JSF is.

JSP

While MVC isn't intrinsically tied to JSP (it ships with several view engine hooks and you can write your own), JSP has the advantage of being built in to all Java webapp servers and is very well fit to purpose. When writing JSPs for MVC, the default location is to put them in WEB-INF/views, which is beneath WebContent in an NSF project:

Screenshot of JSPs in an NSF

The "tags" there are the general equivalent of XPages Custom Controls, and their presence in WEB-INF/tags is convention. An example page (the one used above) will tend to look something like this:

<%@page contentType="text/html" pageEncoding="UTF-8" trimDirectiveWhitespaces="true" session="false" %>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
<t:layout>
    <turbo-frame id="page-content-${page.linkId}">
        <div>
            ${page.html}
        </div>
        
        <c:if test="${not empty page.childPageIds}">
            <div class="tab-container">
                <c:forEach items="${page.cleanChildPageIds}" var="pageId" varStatus="pageLoop">
                    <input type="radio" id="tab${pageLoop.index}" name="tab-group" ${pageLoop.index == 0 ? 'checked="checked"' : ''} />
                    <label for="tab${pageLoop.index}">${fn:escapeXml(encoder.cleanPageId(pageId))}</label>
                </c:forEach>
                    
                <div class="tabs">
                    <c:forEach items="${page.cleanChildPageIds}" var="pageId">
                        <turbo-frame id="page-content-${pageId}" src="xsp/app/pages/${encoder.urlEncode(pageId)}" class="tab" loading="lazy">
                        </turbo-frame>
                    </c:forEach>
                </div>
            </div>
        </c:if>
    </turbo-frame>
</t:layout>

There are, by shared lineage and concept, a lot of similarities with an XPage here. The first four lines of preamble boilerplate are pretty similar to the kind of stuff you'd see in an <xp:view/> element to set up your namespaces and page options. The tag prefixing is the same idea, where <t:layout/> refers to the "layout" custom tag in the NSF and <c:forEach/> refers to a core control tag that ships with the standard tag library, JSTL. The <turbo-frame/> business isn't JSP - I'll deal with that later.

The bits of EL here - all wrapped in ${...} - are from Expression Language 4.0, which is the current version of XPages's aging EL. On this page, the expressions are able to resolve variables that we explicitly put in the Models object, such as page, as well as CDI beans with the @Named annotation, such as encoderBean. There are also a number of implicit objects like request, but they're not used here.

In general, this is safely thought of as an XPage where you make everything load-time-bound and set viewState="nostate". The same sorts of concepts are all there, but there's no concept of a persistent component that you interact with. Any links, buttons, and scripts will all go to the server as a fresh request, not modifying an existing page. You can work with application and session scopes, but there's no "view" scope.

Hotwired Turbo

Though this app doesn't have much need for a lot of XPages's capabilities, I do like a few components even for a mostly "read-only" app. In particular, the <xe:djContentPane/> and <xe:djTabContainer/> controls have the delightful capability of deferring evaluation of their contents to later requests. This is a powerful way to speed up initial page load and, in the case of the tab container, skip needing to render parts of the page the user never uses.

For this and a couple other uses, I'm a fan of Hotwired Turbo, which is a library that grew out of 37 Signals's Rails-based development. The goal of Turbo and the other Hotwired components is to keep the benefits of server-based HTML rendering while mixing in a lot of the niceties of JS-run apps. There are two things that Turbo is doing so far in this app.

The first capability is dubbed "Turbo Drive", and it's sort of a freebie: you enable it for your app, tell it what is considered the app's base URL, and then it will turn any in-app links into "partial refresh" links: it downloads the page in the background and replaces just the changed part on the page. Though this is technically doing more work than a normal browser navigation, it ends up being faster for the user interface. And, since it also updates the URL to match the destination page and doesn't require manual modification of links, it's a drop-in upgrade that will also degrade gracefully if JavaScript isn't enabled.

The second capability is <turbo-frame/> up there, and it takes a bit more buy-in to the JS framework in your app design. The way I'm using Turbo Frames here is to support the page structure of OpenNTF, which is geared around a "primary" page as well as zero or more referenced pages that show up in tabs. Here, I'm buying in to Turbo Frames by surrounding the whole page in a <turbo-frame/> element with an id using the page's key, and then I reference each "sub-page" in a tab with that same ID. When loading the frame, Turbo makes a call to the src page, finds the element with the matching id value, and drops it in place inside the main document. The loading="lazy" parameter means that it defers loading until the frame is visible in the browser, which is handy when using the HTML/CSS-based tabs I have here.

I've been using this library for a while now, and I've been quite pleased. Though it was created for use with Rails, the design is independent of the server implementation, and the idioms fit perfectly with this sort of Java app too.

Conclusion

I think that wraps it up for now. As things progress, I may have more to add to this series, but my hope is that the app doesn't have to get much more complicated than the sort of stuff seen in this series. There are certainly big parts to tackle (like creating and managing projects), but I plan to do that by composing these elements. I remain delighted with this mode of NSF-based app development, and look forward to writing more clean, semi-declarative code in this vein.

Rewriting The OpenNTF Site With Jakarta EE: Beans

Jun 24, 2022, 5:03 PM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans

Now that I've covered the basics of REST services and data access in the new OpenNTF web site, I'll dive a bit into the use of CDI for beans. The two previous topics implied some of the deeper work of CDI, with the @Inject annotation being used by CDI to supply bean and proxy values, but in those cases it was fine to just assume what it was doing.

CDI itself - Contexts and Dependency Injection - contains more capabilities than I'll cover here. Some of them, like its event/observer system, are things that I'll probably end up using in this app, but haven't made their way in yet. For now, I'll talk about the basic "managed beans" level and then build to the way Jakarta NoSQL uses its proxy-bean capabilities.

Managed Beans

In the OpenNTF site, I use a couple beans, some to provide scoped state and some to provide "services" for the app. I'll start with one of the simpler ones, a bean used to convert Markdown to HTML using CommonMark. I use a more-complicated version of this bean in my blog, but for now the OpenNTF one is small:

package bean;

import org.commonmark.node.Node;
import org.commonmark.parser.Parser;
import org.commonmark.renderer.html.HtmlRenderer;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Named;

@ApplicationScoped
@Named("markdown")
public class MarkdownBean {
    private Parser markdown = Parser.builder().build();
    private HtmlRenderer markdownHtml = HtmlRenderer.builder()
            .build();

    public String toHtml(final String text) {
        Node parsed = markdown.parse(text);
        return markdownHtml.render(parsed);
    }
}

The core concepts here are exactly the same as you have with XPages Managed Beans. The "bean" itself is just a Java object and doesn't need to have any particular special characteristics other than, if it's stored in a serialized context, being Serializable or otherwise storable. The only difference here for that purpose is that, rather than being configured in faces-config.xml, the bean attributes are defined inline (there's a "beans.xml" for explicit definitions, but it's not needed in common cases). Here, the @ApplicationScoped annotation will cover its scope and the @Named annotation will allow it to be addressable by name in contexts like JSP or XPages. A CDI bean doesn't have to be named, but it's common in cases where the bean will be used in the UI.

Once a bean is defined, the most common way to use it is to use the @Inject annotation on another CDI-capable class, such as another bean or a JAX-RS resource. For example, it could be injected into a controller class like:

@Path("/blog")
@Controller
public class BlogController {
    @Inject
    private MarkdownBean markdown;

    // (snip)
}

CDI will handle the dirty business of making sure the field is populated, and that all scopes are respected. You can also retrieve a bean programmatically, with just a bit of gangliness:

MarkdownBean markdown = CDI.current().select(MarkdownBean.class).get();

You can think of that one as roughly equivalent to ExtLibUtil.resolveVariable(...).

By default, CDI comes with a few main scopes for our normal use: @ApplicationScoped, @SessionScoped, @RequestScoped, and @ConversationScoped. The last one is a bit weird: it covers whatever your framework considers a "conversation". It's similar to the view scope in XPages - in the XPages JEE support project I mapped it to that - but it could also potentially be a conversation between distinct pages in an app. JSF, for its part, has its own @ViewScoped annotation, and I'm considering stealing or reproducing that.

That touches on the last bit I'll mention for this "basic" section of CDI: scope definitions. Though CDI comes with a handful of standard scopes, the mechanism is open for you to define your own. You could, for example, make an @InvoicingScope to cover beans that exist for the duration of a billing process, and then you'd manage initiating and terminating the scope yourself. Usually, this isn't necessary or particularly useful, but it's good to know it's there.
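For illustration, here's a minimal sketch of what such a scope annotation looks like, using only JDK meta-annotations. In a real CDI app, @InvoicingScope would also be meta-annotated with jakarta.enterprise.context.NormalScope and backed by a Context implementation registered through a portable extension; that part is omitted to keep the sketch JDK-only:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Illustrative only: the shell of a custom scope annotation. A real one would
// additionally carry jakarta.enterprise.context.NormalScope and be paired with
// a Context implementation that manages bean instances for the scope.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD})
@interface InvoicingScope {
}
```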

Producer Methods

The next level of this is the ability of a bean to programmatically produce beans for downstream use. By this I mean that a bean's method can be annotated with @Produces, and then it can provide a type to be matched elsewhere. In the OpenNTF app, I use this as a way to delay loading of a resource bundle until it's actually used:

package bean;

import java.util.ResourceBundle;

import jakarta.enterprise.context.RequestScoped;
import jakarta.enterprise.inject.Produces;
import jakarta.inject.Inject;
import jakarta.inject.Named;
import jakarta.servlet.http.HttpServletRequest;

@RequestScoped
public class TranslationBean {
    @Inject
    HttpServletRequest request;

    @Produces @Named("translation")
    public ResourceBundle getTranslation() {
        return ResourceBundle.getBundle("translation", request.getLocale()); //$NON-NLS-1$
    }
}

Here, TranslationBean itself exists as a request-scoped bean and can be used programmatically, but it's really a shell for delayed retrieval of a ResourceBundle named "translation" for use in the UI. This allows me to use the built-in mapping behavior of ResourceBundle in Expression Language when writing bits of JSP like <p>${translation.copyright}</p>.
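To make that EL behavior concrete: EL's ResourceBundleELResolver treats a ResourceBundle like a read-only map, so ${translation.copyright} amounts to a key lookup against the bundle. A JDK-only sketch, with an inline bundle standing in for a real translation.properties file and a made-up key and message:

```java
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

public class TranslationSketch {
    // Inline stand-in for a real translation.properties bundle; the key and
    // message here are made up for the sketch.
    static final ResourceBundle TRANSLATION = new ListResourceBundle() {
        @Override
        protected Object[][] getContents() {
            return new Object[][] { { "copyright", "Copyright OpenNTF" } };
        }
    };

    // EL's ResourceBundleELResolver makes ${translation.copyright} behave
    // like this lookup against the bundle:
    static String resolve(String key) {
        return TRANSLATION.getString(key);
    }
}
```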

You can get more complicated than this, for sure. For example, if I switch the UI of this app to XPages, I may do a replacement of my classic controller framework that uses such a producer bean instead of the ViewHandler I used in the original implementation.

Proxy Beans

Finally, I'll talk a bit about dynamically-created proxy beans.

CDI's implementations make heavy use of object proxies to do their work. Technically, injected objects are proxies themselves, which allows CDI to let you do stuff like inject a @RequestScoped bean into an @ApplicationScoped one. But the weird part of CDI I plan to talk about here is the use of proxies to provide an object for an interface that doesn't have any implementation class.

I've mentioned this sort of injection a few times:

@Path("/pages")
public class PagesController {
    @Inject
    Page.Repository pageRepository;

    // snip

And then the interface is just:

@RepositoryProvider("homeRepository")
public interface Repository extends DominoRepository<Page, String> {
    Optional<Page> findBySubject(String subject);
}

There's no class that implements Page.Repository, so how come you can call methods on it? That's where the proxying comes in. While the CDI container (in this case, our NSF-based app) is being initialized, the Domino JNoSQL driver looks for classes implementing DominoRepository:

<T extends DominoRepository> void onProcessAnnotatedType(@Observes final ProcessAnnotatedType<T> repo) {
    Class<T> javaClass = repo.getAnnotatedType().getJavaClass();
    if (DominoRepository.class.equals(javaClass)) {
        return;
    }
    if (DominoRepository.class.isAssignableFrom(javaClass) && Modifier.isInterface(javaClass.getModifiers())) {
        crudTypes.add(javaClass);
    }
}

Then, once they're all found, it registers a special kind of bean for them:

void onAfterBeanDiscovery(@Observes final AfterBeanDiscovery afterBeanDiscovery, final BeanManager beanManager) {
    crudTypes.forEach(type -> afterBeanDiscovery.addBean(new DominoRepositoryBean(type, beanManager)));
}

I mentioned above that beans are generally just normal Java classes, but you can also make beans by implementing jakarta.enterprise.inject.spi.Bean, which gives you programmatic control over many aspects of the bean, including providing the actual implementation of them. In the Domino driver's case, as in most/all of the JNoSQL drivers, this is done by providing a proxy object:

public DominoRepository<?, ?> create(CreationalContext<DominoRepository<?, ?>> creationalContext) {
    DominoTemplate template = /* Instance of a DominoTemplate, which handles CRUD operations */;
    Repository<Object, Object> repository = /* JNoSQL's default Repository */;

    DominoDocumentRepositoryProxy<DominoRepository<?, ?>> handler = new DominoDocumentRepositoryProxy<>(template, this.type, repository);
    return (DominoRepository<?, ?>) Proxy.newProxyInstance(type.getClassLoader(), new Class[] { type }, handler);
}

Finally, that proxy class implements java.lang.reflect.InvocationHandler, which lets it provide custom handling of incoming methods.

This well goes deep, including the way JNoSQL will parse out method names and parameters to handle queries, but I think that will suffice for now. The important thing to know is that this is possible to do, common in underlying frameworks, and fairly rare in application code.
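For a feel of how that works, here's a heavily simplified, JDK-only sketch in the same spirit - not the actual driver code. A plain Map stands in for the document store, and the InvocationHandler interprets the "findBy..." method name at call time, so the interface gets behavior with no implementing class anywhere:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.Optional;

public class RepositoryProxySketch {
    // An interface with no implementing class, like Page.Repository above
    interface SubjectRepository {
        Optional<String> findBySubject(String subject);
    }

    // Greatly simplified stand-in for DominoDocumentRepositoryProxy: the
    // handler inspects the invoked method's name and dispatches accordingly
    static SubjectRepository create(Map<String, String> store) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().startsWith("findBy")) {
                return Optional.ofNullable(store.get(args[0]));
            }
            throw new UnsupportedOperationException(method.getName());
        };
        return (SubjectRepository) Proxy.newProxyInstance(
            SubjectRepository.class.getClassLoader(),
            new Class<?>[] { SubjectRepository.class },
            handler);
    }
}
```

The real driver's handler is far richer - it parses out property names, builds queries, and maps documents back to entities - but the Proxy/InvocationHandler mechanism underneath is exactly this.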

Next Up

I'm winding down on major topics, but at least one critical topic remains: the actual UI. Currently (and likely when shipping), the app uses MVC and JSP to cover this need. I've discussed these before, but I think it'll be useful to do so again, both as a refresher and to show how they bring these other parts of the app together.

Rewriting The OpenNTF Site With Jakarta EE: Data Access

Jun 21, 2022, 10:12 AM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans

In my last post, I talked about how I make use of Jakarta REST to handle the REST services in the new OpenNTF site I'm working on. There'll be more to talk about on that front when I get to the UI and my use of MVC. For now, though, I'll dive a bit into how I'm accessing NSF data.

I've been talking a lot lately about how I've been fleshing out the Jakarta NoSQL driver for Domino that comes as part of the XPages JEE project, and specifically how writing this app has proven to be an ideal impetus for adding specific capabilities that are needed for working with Domino. This demonstrates some of the fruit of that labor.

Model Objects

There are a few ways to interact with Jakarta NoSQL, and they vary a bit by database type (key/value, column, document, graph), but I focus on using the Repository interface capability, which is a high-level abstraction over the pool of documents.

Before I get to that, though, I'll start with an entity object. Part of the heavy lifting that a framework like Jakarta NoSQL does is to map between a Java class and the actual data representation. In the SQL world, one would likely come across the term object-relational mapping for this, and the concept is generally the same. The project currently has a handful of such classes, and so the data layer looks like this:

Screenshot of Designer showing the data-related classes in the NSF

The mechanism for mapping a class in JNoSQL is very similar to JPA:

@Entity("Release")
public class ProjectRelease {
    
    public enum ReleaseStatus {
        Yes, No
    }
    
    @Id
    private String documentId;
    @Column("ProjectName")
    private String projectName;
    @Column("ReleaseNumber")
    private String version;
    @Column("ReleaseDate")
    private Temporal releaseDate;
    @Column("WhatsNewAbstract")
    private String description;
    @Column("DownloadsRelease")
    private int downloadCount;
    @Column("MainID")
    private String mainId;
    @Column("ReleaseInCatalog")
    private ReleaseStatus releaseStatus;
    @Column("DocAuthors")
    private List<String> docAuthors;
    @Column(DominoConstants.FIELD_ATTACHMENTS)
    private List<EntityAttachment> attachments;

    /* getters/setters and utility methods here */
}

@Entity("Release") at the top there declares that this class is a JNoSQL entity, and then the Domino driver uses "Release" as the form name when creating documents and performing queries.

The @Id and @Column("...") annotations map Java object properties to items on the document. @Id populates the field with the document's UNID, while @Column maps a named item. There's a special one there - @Column(DominoConstants.FIELD_ATTACHMENTS) - that will populate the field with references to the document's attachments when present. In each of these cases, all of the heavy lifting is done by the driver: there's no code in the app that manually accesses documents or views.
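To demystify that a bit, here's a JDK-only sketch of the flavor of annotation-driven mapping the driver performs - a stand-in @Column annotation and a reflective walk over the fields, not the actual JNoSQL implementation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

public class MappingSketch {
    // Stand-in for JNoSQL's @Column, defined here just to keep the sketch JDK-only
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Column {
        String value();
    }

    // A miniature entity in the style of ProjectRelease above
    static class Release {
        @Column("ProjectName")
        String projectName = "Some Project";
        @Column("ReleaseNumber")
        String version = "1.0";
    }

    // The kind of work the driver does when saving: walk the annotated
    // fields and build an item-name-to-value map to write to the document
    static Map<String, Object> toItems(Object entity) {
        Map<String, Object> items = new LinkedHashMap<>();
        for (Field f : entity.getClass().getDeclaredFields()) {
            Column col = f.getAnnotation(Column.class);
            if (col != null) {
                try {
                    f.setAccessible(true);
                    items.put(col.value(), f.get(entity));
                } catch (IllegalAccessException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        return items;
    }
}
```

Reading a document back into an entity is the same trick in reverse, with type conversion layered on top.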

Repositories

The way I get access to documents mapped by these classes is to use the JNoSQL Repository mechanism, by way of the extended DominoRepository interface. They look like this (used here as an inner class for stylistic reasons, not technical ones):

@Entity("Release")
public class ProjectRelease {

    @RepositoryProvider("projectsRepository")
    public interface Repository extends DominoRepository<ProjectRelease, String> {
        Stream<ProjectRelease> findByProjectName(String projectName, Sorts sorts);

        @ViewEntries("ReleasesByDate")
        Stream<ProjectRelease> findRecent(Pagination pagination);
        
        @ViewDocuments("IP Management\\Pending Releases")
        Stream<ProjectRelease> findPendingReleases();
    }

    /* snip: entity class from above */
}

Merely by creating this interface, I'm able to get access to the associated documents: I don't actually have to implement it myself. As seen in the last post, these interfaces can be injected into a bean or REST resource using CDI:

public class IPProjectsResource {
    
    @Inject
    private ProjectRelease.Repository projectReleases;

    /* snip */
}

Naturally, there is implementation code for this repository, but it's all done with what amounts to "Java magic": proxy objects and CDI. That's a huge topic on its own, and it's pretty weird to realize that that's even possible, but it will have to suffice for now to say that it is possible and it works great.

When you create one of these repositories, you get basic CRUD capabilities "for free": you can create new documents, look up existing documents by ID, and modify or delete existing documents.

Basic Queries

Beyond that, JNoSQL will do some lifting for you to provide sensible implementations for methods based on their method signature, in the absence of any driver-specific code. I'm making use of that here with findByProjectName(String projectName, Sorts sorts). The proxy object that provides this implementation is able to glean that String projectName refers to the projectName field of the ProjectRelease class, which is then mapped by annotation to the ProjectName item on the back end. The Sorts object is a JNoSQL type that allows you to specify one or more sort columns and their orders. When executed, this is translated to a DQL query like:

Form = 'Release' and ProjectName = 'Some Project'

When Sorts are specified, this is also run through QueryResultsProcessor to create a QRP view with the given sort columns in a local temp database. Thanks to that, running the same query multiple times when the data hasn't changed will be very speedy.

You can customize these queries further by adding more parameters, or by using the @Query annotation to provide a SQL-like query with parameters.
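To illustrate the sort of method-name parsing involved, here's a greatly simplified, JDK-only sketch - not the actual JNoSQL code, which also handles multiple terms, comparison operators, @Column name mapping, and sorting:

```java
import java.text.MessageFormat;

public class QuerySketch {
    // Greatly simplified: derive a single-term, DQL-style query from a
    // repository method name like "findByProjectName". The property name
    // comes straight from the method name; the real machinery maps it
    // through the entity's @Column annotations.
    static String toDql(String form, String methodName, Object arg) {
        String item = methodName.substring("findBy".length());
        return MessageFormat.format("Form = ''{0}'' and {1} = ''{2}''", form, item, arg);
    }
}
```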

Domino-Specific Queries

Since Domino is so view-heavy and DQL+QRP isn't quite at the level where you can just throw any old query+extraction at it and expect it to perform well, it made sense for me to add extensions to JNoSQL to explicitly target views as sources. I use them both here, in one case to efficiently retrieve view data without opening documents and in another in order to piggyback on an existing view used by the IP Tools services already deployed.

The @ViewEntries("ReleasesByDate") annotation causes the findRecent method to skip JNoSQL's normal interpretation of the method and instead be handled by the Domino driver directly. It will open that view and read entries based on the Pagination rules sent to it (another JNoSQL object). Since the columns in this view line up with the item names in the documents, I'm able to get useful entity objects out of it without having to actually crack open the docs. In practice, I'll need to be careful when using this so as to not save entities like this back into the database, since not ALL columns are present in the view, but that's a reasonable caveat to have.

The @ViewDocuments("IP Management\\Pending Releases") annotation causes findPendingReleases to read full documents out of the named view, ignoring view columns. Eventually, I'll likely replace this with an equivalent query in JNoSQL's dialect, but for now it's more practical to just use the existing view like a stored query and not have to translate the selection formula to another mechanism.

Repository Provider

The last thing to touch on with this repository is the @RepositoryProvider annotation. The OpenNTF web site is stored in its own NSF, and then references several other NSFs, such as the projects DB, the blog DB (which is still based on BlogSphere), and the patron directory. The @RepositoryProvider annotation allows me to tell JNoSQL to use a different database than the current one, and it does so by finding a matching CDI producer method that gives it a lotus.domino.Database housing the documents and a high-privilege lotus.domino.Session to create QRP views. In this app's case, that's this in another bean:

@Produces
@jakarta.nosql.mapping.Database(value = DatabaseType.DOCUMENT, provider = "projectsRepository")
public DominoDocumentCollectionManager getProjectsManager() {
    return new DefaultDominoDocumentCollectionManager(
        () -> getProjectsDatabase(),
        () -> getSessionAsSigner()
    );
}

I'll touch on what the heck a @Produces method is in CDI later, but for now you can take it for granted that this works. The getProjectsDatabase() method that it calls is a utility method that opens the project DB based on some configuration documents.

I'll note with no small amount of pleasure that this bean that provides databases is one of the only two places in the app that actually reference Domino API classes at all, and the other instance is just to convert Notes names. I'm considering ways to remove this need as well, perhaps making it so that this producer only needs to provide a path to the target database and the name of a high-privilege user to act as, and then the driver would do the session creation and DB opening itself.

Next Up

In the next post, I'll most likely talk about my use of CDI to handle the "managed beans" layer. In a lot of ways, that will just be demonstrating the way CDI makes the tasks you'd otherwise accomplish with XPages Managed Beans simpler and more code-focused, but (as the @Produces annotation above implies) there's a lot more to it.

Rewriting The OpenNTF Site With Jakarta EE: REST

Jun 20, 2022, 1:09 PM

Tags: jakartaee java
  1. Rewriting The OpenNTF Site With Jakarta EE, Part 1
  2. Rewriting The OpenNTF Site With Jakarta EE: REST
  3. Rewriting The OpenNTF Site With Jakarta EE: Data Access
  4. Rewriting The OpenNTF Site With Jakarta EE: Beans

In deciding how to kick off implementation specifics of my new OpenNTF site project, I had a few options, and none of them perfect. I considered starting with the managed beans via CDI, but most of those are actually either UI support beans or interact primarily with other components. I ended up deciding to talk a bit about the REST services in the app, since those are both an extremely-common task to perform in XPages and one where the JEE project runs laps around what you get by default from Domino.

The REST layer is handled by Jakarta REST, which is still primarily called by its old name JAX-RS. JAX-RS has existed in Domino for a good while via the Wink implementation included with the Extension Library, but that's a much-older version. Additionally, that implementation didn't include a lot of convenience features like automatic JSON conversion out of the box. The implementation in the XPages JEE Support project uses RESTEasy, which is one of the primary active implementations and covers the latest versions of the spec.

Example

Though the primary way JAX-RS is actually used in this app is as the backbone for the UI with MVC, that'll be a topic for later. Since I also plan to use this as a way to modernize the IP Management tools I wrote, I'm making some JSON-based services for that.

I have a service that lets me get a list of project releases that haven't yet been approved, as well as an endpoint to mark one as approved. That class looks like this:

package webapp.resources.iptools;

import java.text.MessageFormat;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import jakarta.annotation.security.RolesAllowed;
import jakarta.inject.Inject;
import jakarta.validation.constraints.NotEmpty;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import model.projects.ProjectRelease;

@Path("iptools/projects")
@RolesAllowed("[IPManager]")
public class IPProjectsResource {
    
    @Inject
    private ProjectRelease.Repository projectReleases;
    
    @GET
    @Path("pendingReleases")
    @Produces(MediaType.APPLICATION_JSON)
    public Map<String, Object> getPendingReleases() {
        return Collections.singletonMap("payload", projectReleases.findPendingReleases().collect(Collectors.toList()));
    }
    
    @POST
    @Path("releases/{documentId}/approve")
    @Produces(MediaType.APPLICATION_JSON)
    public boolean approveRelease(@PathParam("documentId") @NotEmpty String documentId) {
        ProjectRelease release = projectReleases.findById(documentId)
            .orElseThrow(() -> new NotFoundException(MessageFormat.format("Could not find project for UNID {0}", documentId)));
        release.markApprovedForCatalog(true);
        projectReleases.save(release);
        
        return true;
    }
}

We can ignore the ProjectRelease.Repository business, since that's the model objects making use of Jakarta NoSQL - that'll be for later. For now, we can just assume that methods like findPendingReleases and findById do what you might assume based on their names.

The resource as a whole is marked as available at the path iptools/projects. In an NSF, that will resolve to a path on the server like /foo.nsf/xsp/app/iptools/projects. The "app" part there is customizable, though the "xsp" part is unchangeable, at least for now: it's the way the XPages stack notices that it's supposed to handle this URL instead of passing it to the classic Domino web server side.
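To make that resolution concrete, the composition can be sketched as plain path joining. This helper is purely illustrative - the real resolution is done by the JAX-RS runtime - but it shows how the NSF path, the fixed "/xsp/app" prefix, and the two @Path annotations stack up:

```java
class PathResolutionSketch {
    // How the pieces compose: NSF path + fixed "/xsp/app" prefix +
    // class-level @Path + method-level @Path
    static String resolve(String nsfPath, String classPath, String methodPath) {
        StringBuilder url = new StringBuilder(nsfPath).append("/xsp/app");
        if (!classPath.isEmpty()) {
            url.append('/').append(classPath);
        }
        if (!methodPath.isEmpty()) {
            url.append('/').append(methodPath);
        }
        return url.toString();
    }

    public static void main(String[] args) {
        // @Path("iptools/projects") + @Path("pendingReleases") from the resource above
        System.out.println(resolve("/foo.nsf", "iptools/projects", "pendingReleases"));
        // → /foo.nsf/xsp/app/iptools/projects/pendingReleases
    }
}
```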

The @RolesAllowed annotation allows me to restrict use of all the methods in this resource to specific roles or names/globs from the ACL. Though the underlying documents will still be protected by the ACL and reader/author fields, it's still good practice to not make services publicly available unless there's a reason to do so.

Often, a resource class like this will have a method marked with @GET but no @Path annotation, which would match the base URL from the class level. That isn't the case here, though: I may eventually merge these methods into an overall projects API, but for now I'm mirroring the old one I made, which doesn't have that.

JSON Conversion

The getPendingReleases method shows off a nice advantage over the older way I was doing this. In the original app, I had a utility class that used Gson to process arbitrary objects and convert them to JSON. Here, since I'm working on top of the whole JEE framework, I don't have to care about that in the app. I can just return my payload object and know that the scaffolding beneath me will handle the fiddly details of translating it to JSON for the browser, based on the @Produces(MediaType.APPLICATION_JSON) annotation there. It happens to use Jakarta JSON Binding (JSON-B), but I don't have to know that. I can just be confident that it will emit JSON representing the documents in a predictable way.

Entity Manipulation

The approveRelease method is available with a URL like /foo.nsf/xsp/app/iptools/projects/releases/12345678901234567890123456789012/approve. With the UNID from the path, I call projectReleases.findById to find the release document with that ID. That method returns an Optional<ProjectRelease> to cover the case that it doesn't exist - the orElseThrow method of Optional allows me to "unwrap" it when present or otherwise throw a NotFoundException. In turn, that exception (part of JAX-RS) will be translated to an HTTP 404 response with the provided message.

I used a @NotEmpty annotation on the @PathParam parameter here since this would currently also match a URL like /foo.nsf/xsp/app/iptools/projects/releases//approve. While I could check for an empty ID, this is a little cleaner and can provide a better error message to the calling user. That's just another nice way to make use of the underlying stack to get better behavior with less code.

The markApprovedForCatalog method on the model object just handles setting a couple fields:

public void markApprovedForCatalog(boolean approved) {
    if(approved) {
        this.releaseStatus = ReleaseStatus.Yes;
        this.docAuthors = Arrays.asList(ROLE_ADMIN);
    } else {
        this.releaseStatus = ReleaseStatus.No;
    }
}

Then projectReleases.save(release) will store the document in the NSF, throwing an exception in the case of any validation failures. Like with the @NotEmpty parameter annotation above, I don't have to worry about handling that explicitly: Jakarta NoSQL will handle that implicitly for me, since it works with the Bean Validation spec the same way JAX-RS does.

Next Components

Next time I write about this, I figure I'll go over the specific NoSQL entities I've set up and discuss how they handle data access for the app. That will be similar to a number of my recent posts, but I think it'll be helpful to have an example of using that in practice rather than just talking about it hypothetically.

Rewriting The OpenNTF Site With Jakarta EE, Part 1

Jun 19, 2022, 10:13 AM

Tags: jakartaee java

The design for the OpenNTF home page has been with us for a little while now and has served us pretty well. It looks good and covers the bases it needs to. However, it's getting a little long in the tooth and, more importantly, doesn't cover some capabilities that we're thinking of adding.

While we could potentially expand the current one, this provides a good opportunity for a clean start. I had actually started taking a swing at this a year and a half ago, taking the tack that I'd make a webapp and deploy it using the Domino Open Liberty Runtime. While that approach would put all technologies on the table, it'd certainly be weirder to future maintainers than an app inside an NSF (at least for now).

So I decided in the past few weeks to pick the project back up and move it into an NSF via the XPages Jakarta EE Support project. I can't say for sure whether I'll actually complete the project, but it'll regardless be a good exercise and has proven to be an excellent way to find needed features to implement.

I figure it'll also be useful to keep something of a travelogue here as I go, making posts periodically about what I've implemented recently.

The UI Toolkit

The original form of this project used MVC and JSP for the UI layer. Now that I was working in an NSF, I could readily use XPages, but for now I've decided to stick with the MVC approach. While it will make me have to solve some problems I wouldn't necessarily have to solve otherwise (like file uploads), it remains an extremely-pleasant way to write applications. I am also not constrained to this: since the vast majority of the logic is in Java beans and controller classes, switching the UI front-end would not be onerous. Also, I could theoretically mix JSP, JSF, XPages, and static HTML together in the app if I end up so inclined.

In the original app (as in this blog), I made use of WebJars to bring in JavaScript dependencies, namely Hotwire Turbo to speed up in-site navigation and use Turbo Frames. Since the NSF app in Designer doesn't have the Maven dependency mechanism the original app did, I just ended up copying the contents of the JAR into WebContent. That gave me a new itch to scratch, though: I'd love to be able to have META-INF/resources files in classpath JARs picked up by the runtime and made available, lowering the number of design elements present in the NSF.

The Data Backend

The primary benefit of this project so far has been forcing me to flesh out the Jakarta NoSQL driver in the JEE support project. I had kind of known hypothetically what features would be useful, but the best way to do this kind of thing is often to work with the tool until you hit a specific problem, and then solve that. So far, it's forced me to:

  • Implement the view support in my previous post
  • Add attachment support for documents, since we'll need to upload and download project releases
  • Improve handling of rich text and MIME, though this also has more room to grow
  • Switch the returned Streams from the driver to be lazily loaded, meaning that not all documents/entries have to be read if the calling code stops reading the results partway through
  • Add the ability to use custom property types with readers/writers defined in the NSF
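The lazy-Stream change in that list is easy to demonstrate with plain java.util.stream: a short-circuiting consumer only pulls as many elements as it needs from the source. The counter here stands in for per-document reads from the driver:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

class LazyStreamSketch {
    // Counts how many elements a short-circuiting consumer actually pulls
    // from an effectively-infinite lazy source
    static int countReadsForLimit(int limit) {
        AtomicInteger reads = new AtomicInteger();
        Stream.iterate(1, i -> i + 1)      // stand-in for "one document per element"
            .peek(i -> reads.incrementAndGet())
            .limit(limit)                  // the caller stops after `limit` results
            .forEach(i -> { /* consume */ });
        return reads.get();
    }

    public static void main(String[] args) {
        // Only 3 "documents" are read, even though the source is unbounded
        System.out.println("reads: " + countReadsForLimit(3)); // reads: 3
    }
}
```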

Together, these improvements have let me have almost no lotus.domino code in the app. The only parts left are a bean for formatting Notes-style names (which I may want to make a framework service anyway) and a bean for providing access to the various associated databases used by the app. Not too shabby! The app is still tied to Domino by way of using the Domino-specific extensions to JNoSQL, but the programming model is significantly better and the amount of app code was reduced dramatically.

Next Steps

There's a bunch of work to be done. The bulk of it is just implementing things that the current XPages app does: actually uploading projects, all the stuff like discussion lists, and so forth. I'll also want to move the server-side component of the small "IP Tools" suite I use for IP management stuff in here. Currently, that's implemented as Wink-based JAX-RS resources inside an OSGi bundle, but it'll make sense to move it here to keep things consolidated and to make use of the much-better platform capabilities.

As I mentioned above, I can't guarantee that I'll actually finish this project - it's all side work, after all - but it's been useful so far, and it's a further demonstration of how thoroughly pleasant the programming model of the JEE support project is.

Working Domino Views Into Jakarta NoSQL

Jun 12, 2022, 3:33 PM

A few versions ago, I added Jakarta NoSQL support to the XPages Jakarta EE Support project. For that, I used DQL and QueryResultsProcessor exclusively, since it's a near-exact match for the way JNoSQL normally does things and QRP brought the setup into the realm of "good enough for the normal case".

However, as I've been working on a project that puts this to use, the limitations have started to hold me back.

The Limitations

The first trouble I ran into was the need to list, for example, the most recent 20 of an entity. This is something that QRP took some steps to handle, but it still has to build the pseudo-view anew the first time and then any time documents change. This gets prohibitively expensive quickly. In theory, QRP has enough flexibility to use existing views for sorting, but it doesn't appear to do so yet. Additionally, its "max entries" and "max documents" values are purely execution limits and not something to use to give a subset report: they throw an exception when that many entries have been processed, not just stop execution. For some of this, one can deal with it when manually writing the DQL query, but the driver doesn't have a path to do so.

The second trouble I ran into was the need to get a list composed of multiple kinds of documents. This one is a limitation of the default idiom that JNoSQL uses, where you do queries on named types of documents - and, in the Domino driver, that "type" corresponds to Form field values.

The Uncomfortable Solution

Thus, hat in hand, I returned to the design element I had hoped to skim past: views. Views are an important tool, but they are way, way overused in Domino, and I've been trying over time to intentionally limit my use of them to break the habit. Still, they're obviously the correct tool for both of these jobs.

So I made myself an issue to track this and set about tinkering with some ways to make use of them in a way that would do what I need, be flexible for future needs, and yet not break the core conceit of JNoSQL too much. My goal is to make almost no calls to an explicit Domino API, and so doing this will be a major step in that direction.

Jakarta NoSQL's Extensibility

Fortunately for me, Jakarta NoSQL is explicitly intended to be extensible per driver, since NoSQL databases diverge more wildly in the basics than SQL databases tend to. I made use of this in the Darwino driver to provide support for stored cursors, full-text search, and JSQL, though all of those had the bent of still returning full documents and not "view entries" in the Domino sense.

Still, the idea is very similar. Jakarta NoSQL encourages a driver author to write custom annotations for repository methods to provide hints to the driver to customize behavior. This generally happens at the "mapping" layer of the framework, which is largely CDI-based and gives you a lot of room to intercept and customize requests from the app-developer level.

Implementation

To start out with, I added two annotations you can add to your repository methods: @ViewEntries and @ViewDocuments. For example:

@RepositoryProvider("blogRepository")
public interface BlogEntryRepository extends DominoRepository<BlogEntry, String> {
    public static final String VIEW_BLOGS = "vw_Content_Blogs"; //$NON-NLS-1$
    
    @ViewDocuments(value=VIEW_BLOGS, maxLevel=0)
    Stream<BlogEntry> findRecent(Pagination pagination);
    
    @ViewEntries(value=VIEW_BLOGS, maxLevel=0)
    Stream<BlogEntry> findAll();
}

The distinction here is one of the ways I slightly break the main JNoSQL idioms. JNoSQL was born from the types of databases where it's just as easy to retrieve the entire document as it is to retrieve part - this is absolutely the case in JSON-based systems like Couchbase (setting aside attachments). However, Domino doesn't quite work that way: it can be significantly faster to fetch only a portion of a document than the data from all items, namely when some of those items are rich text or MIME.

The @ViewEntries annotation causes the driver to consider only the item values found in the entries of the view it's referencing. In a lot of cases, this is all you'll need. When you set a column in Designer to be directly an item value from the documents, the column by default gets the same name, and so a mapped entity pulled from this column can have the same fields filled in as from a document. This does have the weird characteristic where objects pulled from one method may have different instance values than the "same" objects from another method, but the tradeoff is worth it.

@ViewDocuments, fortunately, doesn't have this oddity. With that annotation, documents are processed in the same way as with a normal query; they just are retrieved according to the selection and order from the backing view.

Using these capabilities allowed me to slightly break the JNoSQL idiom in the other way I needed: reading unrelated document types in one go. For this, I cheated a bit and made a "document" type with a form name that doesn't correspond to anything, and then made the mapped items based on the view name. So I created this entity class:

@Entity("ProjectActivity")
public class ProjectActivity {
    @Column("$10")
    private String projectName;
    @Column("Entry_Date")
    private OffsetDateTime date;
    @Column("$12")
    private String createdBy;
    @Column("Form")
    private String form;
    @Column("subject")
    private String subject;

    /* snip */
}

As you might expect, no form has a field named $10, but that is the name of the view column, and so the mapping layer happily populates these objects from the view when configured like so:

@RepositoryProvider("projectsRepository")
public interface ProjectActivityRepository extends DominoRepository<ProjectActivity, String> {
    @ViewEntries("AllbyDate")
    Stream<ProjectActivity> findByProjectName(@ViewCategory String projectName);
}

These are a little weird in that you wouldn't want to save such entities lest you break your data, but, as long as you keep that in mind, it's not a bad way to solve the problem.

Future Changes

Since this implementation was based on fulfilling just my immediate needs and isn't the result of careful consideration, it's likely to be something that I'll revisit as I go. For example, that last example shows the third custom annotation I introduced: @ViewCategory. I wanted to restrict entries to a category that is specified programmatically as part of the query, and so annotating the method parameter was a great way to do that. However, there are all sorts of things one might want to do dynamically when querying a view: setting the max level programmatically, specifying expand/collapse behavior, and so forth. I don't know yet whether I'll want to handle those by having a growing number of parameter annotations like that or if it would make more sense to consolidate them into a single ViewQueryOptions parameter or something.

I also haven't done anything special with category or total rows. While they should just show up in the list like any other entry, there's currently nothing special signifying them, and I don't have a way to get to the note ID either (just the UNID). I'll probably want to create special pseudo-items like @total or @category to indicate their status.

There'll also no doubt be a massive wave of work to do when I turn this on something that's not just a little side project. While I've made great strides in my oft-mentioned large client project to get it to be more platform-independent, it's unsurprisingly still riven with Domino API references top to bottom. While I don't plan on moving it anywhere else, writing so much code using explicit database-specific API calls is just bad practice in general, and getting this driver to a point where it can serve that project's needs would be a major sign of its maturity.

Per-NSF-Scoped JWT Authorization With JavaSapi

Jun 4, 2022, 10:35 AM

Tags: domino dsapi java
  1. Poking Around With JavaSapi
  2. Per-NSF-Scoped JWT Authorization With JavaSapi

In the spirit of not leaving well enough alone, I decided the other day to tinker a bit more with JavaSapi, the DSAPI peer tucked away undocumented in Domino. While I still maintain that this is too far from supported for even me to put into production, I think it's valuable to demonstrate the sort of thing that this capability - if made official - would make easy to implement.

JWT

I've talked about JWT a bit before, and it was in a similar context: I wanted to be able to access a third-party API that used JWT to handle authorization, so I wrote a basic library that could work with LS2J. While JWT isn't inherently tied to authorization like this, it's certainly where it's found a tremendous amount of purchase.

JWT has a couple neat characteristics, and the ones that come in handy most frequently are a) that you can enumerate specific "claims" in the token to restrict what the token allows the user to do and b) if you use a symmetric signature key, you can generate legal tokens on the client side without the server having to generate them. "b" there is optional, but makes JWT a handy way to do a quick shared secret between servers to allow for trusted authentication.

It's a larger topic than that, for sure, but that's the quick and dirty of it.
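That symmetric-signature property can be sketched with the JDK's own HMAC support. A real JWT is base64url-encoded header.payload.signature, but the core idea boils down to this (the claim payload below is made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class HmacSketch {
    // Sign arbitrary content with a shared secret, HMAC-SHA256 style
    static String sign(String content, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] sig = mac.doFinal(content.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Verification is just re-signing and comparing (in constant time)
    static boolean verify(String content, String signature, String secret) {
        return MessageDigest.isEqual(
            sign(content, secret).getBytes(StandardCharsets.UTF_8),
            signature.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String claims = "{\"iss\":\"client\",\"user\":\"CN=Some User/O=SomeOrg\"}";
        String sig = sign(claims, "per-nsf-secret");
        System.out.println(verify(claims, sig, "per-nsf-secret"));    // true
        System.out.println(verify(claims, sig, "some-other-secret")); // false
    }
}
```

Because verification is just re-signing with the same secret, any party holding the secret can both mint and check tokens - which is the behavior the per-NSF jwt.txt scheme below relies on.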

Mixing It With An NSF

Normally on Domino, you're either authenticated for the whole server or you're not. That's usually fine - if you want to have a restricted account, you can specifically grant it access to only a few NSFs. However, it's good to be able to go more fine-grained, restricting even powerful accounts to only do certain things in some contexts.

So I had the notion to take the JWT capability and mix it with JavaSapi to allow you to do just that. The idea is this:

  1. You make a file resource (hidden from the web) named "jwt.txt" that contains your per-NSF secret.
  2. A remote client makes a request with an Authorization header in the form of Bearer Some.JWT.Here
  3. The JavaSapi interceptor sees this, checks the target NSF, loads the secret, verifies it against the token, and authorizes the user if it's legal

As it turns out, this wasn't that difficult in practice at all.

The main core of the code is:

public int authenticate(IJavaSapiHttpContextAdapter context) {
    IJavaSapiHttpRequestAdapter req = context.getRequest();

    // In the form of "/foo.nsf/bar"
    String uri = req.getRequestURI();
    String secret = getJwtSecret(uri);
    if(StringUtil.isNotEmpty(secret)) {
        try {
            String auth = req.getHeader("Authorization"); //$NON-NLS-1$
            if(StringUtil.isNotEmpty(auth) && auth.startsWith("Bearer ")) { //$NON-NLS-1$
                String token = auth.substring("Bearer ".length()); //$NON-NLS-1$
                Optional<String> user = decodeAuthenticationToken(token, secret);
                if(user.isPresent()) {
                    req.setAuthenticatedUserName(user.get(), "JWT"); //$NON-NLS-1$
                    return HTEXTENSION_REQUEST_AUTHENTICATED;
                }
            }
        } catch(Throwable t) {
            t.printStackTrace();
        }
    }

    return HTEXTENSION_EVENT_DECLINED;
}

To read the JWT secret, I used IBM's NAPI:

private String getJwtSecret(String uri) {
    int nsfIndex = uri.toLowerCase().indexOf(".nsf"); //$NON-NLS-1$
    if(nsfIndex > -1) {
        String nsfPath = uri.substring(1, nsfIndex+4);
        
        try {
            NotesSession session = new NotesSession();
            try {
                if(session.databaseExists(nsfPath)) {
                    // TODO cache lookups and check mod time
                    NotesDatabase database = session.getDatabase(nsfPath);
                    database.open();
                    NotesNote note = FileAccess.getFileByPath(database, SECRET_NAME);
                    if(note != null) {
                        return FileAccess.readFileContentAsString(note);
                    }
                }
            } finally {
                session.recycle();
            }
        } catch(Exception e) {
            e.printStackTrace();
        }
    }
    return null;
}

And then, for the actual JWT handling, I use the auth0 java-jwt library:

public static Optional<String> decodeAuthenticationToken(final String token, final String secret) {
    if(token == null || token.isEmpty()) {
        return Optional.empty();
    }
    
    try {
        Algorithm algorithm = Algorithm.HMAC256(secret);
        JWTVerifier verifier = JWT.require(algorithm)
                .withIssuer(ISSUER)
                .build();
        DecodedJWT jwt = verifier.verify(token);
        Claim claim = jwt.getClaim(CLAIM_USER);
        if(claim != null) {
            return Optional.of(claim.asString());
        } else {
            return Optional.empty();
        }
    } catch (IllegalArgumentException | UnsupportedEncodingException e) {
        throw new RuntimeException(e);
    }
}

And, with that in place, it works:

JWT authentication in action

That text is coming from a LotusScript agent - as I mentioned in my original JavaSapi post, this authentication is trusted the same way DSAPI authentication is, and so all elements, classic or XPages, will treat the name as canon.

Because the token is based on the secret specifically from the NSF, using the same token against a different NSF (with no JWT secret or a different one) won't authenticate the user:

JWT ignored by a different endpoint

If we want to be fancy, we can call this scoped access.

This is the sort of thing that makes me want JavaSapi to be officially supported. Custom authentication and request filtering are much, much harder on Domino than on many other app servers, and JavaSapi dramatically reduces the friction.