Introduction to Claude Code

Claude Code is a software development agent which runs on your local machine. In my experience, people often have a bad first experience with Claude Code, because it is too easy to assume that you don't need any knowledge to be productive. Beginners tend to write very short, "one shot" prompts, and then get surprised when they don't get a good result. As an example, I have a million-plus line project, where I want to upgrade from Mockito 3 to Mockito 5. A really *bad* way to use Claude Code would be to:

  • Do no prep
  • Write an insanely short, one shot prompt, like "upgrade this project from mockito 3 to 5"

This is what beginners tend to do. But actually, I think there is quite a learning curve to being productive with Claude Code. This post is an attempt to give newcomers some more structured guidance. The topics I'll cover will be:

  1. Using the help and understanding the different Claude Code commands
  2. The importance of CLAUDE.md and other docs
  3. Using MCP servers to extend Claude, specifically:
    1. Querying git, Jira, Confluence etc
    2. Using Chrome
    3. Querying a database
    4. Getting API docs with context7

Once I've covered those topics, I will go through some examples of good and bad prompts, and what I consider a standard workflow. (No point doing this until you understand the basics.)

Installation and starting Claude Code

Claude Code installs using NodeJS. Install instructions here:

Claude Code setup

Once you have it installed, you start Claude Code from the base of the project you want to work on by typing "claude". This starts Claude in normal mode, where it can perform read-only actions, but will ask you for permission to do anything more. Claude supports four modes, which you can cycle through using SHIFT + TAB. They are:

• Normal
• Accept Edits
• Plan Mode – zero write permissions
• Bypass Permissions or "dangerous" mode

I prefer to run in dangerous mode, so I have set up an alias in my Mac .zshrc (note the quotes, which the alias needs because of the space in the command):

alias claude-power="claude --dangerously-skip-permissions"

To me, one of the most important things with any new tool is understanding how to access the help. With Claude, you do this by typing /help. This gives you a set of three help pages which you can tab through. I'm not going to go through every command, but will highlight the ones I think are important. Basic commands:

• @ – allows you to add any file to Claude's context – this is very commonly used if you have written a plan or other notes in a previous session, and want to load them back in
• ! – bash mode – allows you to run a terminal command without dropping out of the Claude prompt
• # – add a memory, i.e. append an instruction to CLAUDE.md

Most commands in Claude are slash commands. Ones to highlight:

/context - Show current context usage
/compact - Compact your context. You'll use this after you have done a planning session and want to compact your context to just include the necessary info.
/clear - Clear context.
/export - Export context to a file.
/add-dir - Add another directory to Claude - useful if you have a task across multiple repos.

Note that you can also use Claude to help you use Claude! But you have to know the right way to do it! Claude will NOT search the internet or its own docs unless you tell it to. Suppose you asked Claude to write some code and it messed up. Simply typing "how could I have written that better" will only draw on the underlying LLM's training data, which could be several months old, so it won't even know about recent Claude features. Whereas typing "search the claude docs and the internet and tell me how I could have avoided you using a mock inside an integration test" is a far more precise question that will get Claude to access its most current docs.

CLAUDE.md and other docs

When people first start using Claude, they often give it a prompt and don't understand why it gets something wrong. Perhaps the most common reason is that they have not provided enough background information. You and your fellow developers may have worked on your codebase for many years; Claude is coming to it new. You can and should help it out by providing docs. The first step is to write a CLAUDE.md file. You can actually generate this using Claude and the /init command, but you absolutely should review it, and continue to improve it over time.

For a large project, your CLAUDE.md could become pretty big. This actually creates the opposite problem. Because CLAUDE.md is loaded automatically every time you run Claude, you could be confusing it with too much irrelevant information. So for a large project, you should break down your docs into multiple markdown files. There are different ways to do this. Some teams like to have a CLAUDE.md in the root of different modules. Whilst I understand the argument that the docs are closer to the code they pertain to, for my team, I prefer to have a docs folder with the files named as per the topic they cover. This makes it easier to see what areas we already have covered. So, for example, we have five markdown docs for tests – one for each kind of test we have in our repo. Our top level CLAUDE.md explains the five different kinds of test, and points to the docs for each one, so Claude can dynamically load the info if it needs it.
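As an illustration, the relevant section of a top level CLAUDE.md might look something like this (the file names here are hypothetical):

## Testing

We have five kinds of test. Read the relevant doc before writing or changing tests:

- Unit tests: docs/unit-tests.md
- Integration tests: docs/integration-tests.md
- Contract tests: docs/contract-tests.md
- Functional tests: docs/functional-tests.md
- Performance tests: docs/performance-tests.md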

Extending Claude with MCP servers

The MCP server protocol is an open standard for agent communication. It allows you to connect an agent to numerous different tools. You can configure MCP servers in the claude files for a project, or in your global claude config. Once added, you can use the /mcp command to administer them. MCP servers can use a lot of context, so you should disable them all by default, and then just enable the one(s) you want to use for a particular task. The Claude docs have an intro to MCP here:

https://code.claude.com/docs/en/mcp
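To give a flavour, a project-level .mcp.json checked into the repo root might register a server like this – a minimal sketch, so check the docs above for the current file format and package names:

{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}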

Querying git, Jira, Confluence etc

We use the Atlassian suite, so the Atlassian MCP allows us to connect to git, Jira and our Confluence docs:

https://support.atlassian.com/atlassian-rovo-mcp-server/docs/getting-started-with-the-atlassian-remote-mcp-server/

If you are using GitHub or GitLab, you can configure their MCP servers for similar connectivity. Claude won't use an MCP server unless you tell it to. For most tasks I work on, I probably want Claude to search our wiki before it writes a plan. So I'll explicitly say "use the atlassian mcp server to search our confluence wiki" as part of my prompt.

Using Chrome with Chrome DevTools

https://developer.chrome.com/blog/chrome-devtools-mcp

Sample use cases for Chrome integration could include: starting your app and performing a quick test once some work is done, or helping to write Selenium or Playwright tests for the app by inspecting your browser DOM.

Querying a database with DBHub

https://github.com/bytebase/dbhub

Add this with:

claude mcp add --transport stdio db -- npx -y @bytebase/dbhub --dsn "sqlserver://username:password@host.com:1433/your_db_server"

You might wonder why you would bother connecting Claude to a database. You already know how to run queries, and you will have a SQL client, so what is the benefit? Well, remember how earlier I said you should be adding plenty of docs to your repo – you can add as much info as you want about the DB structure and useful queries. Then a new developer doesn't have to spend time figuring out how to construct a query; they can ask Claude, and Claude can use your docs to understand the structure and assist with writing the query.

Getting up to date docs with Context7

https://github.com/upstash/context7

Context7 is especially useful for tasks like upgrading from one version of a library to another, where you need to understand all of the changes.

Workflow

Now that you understand all of the basic functionality of Claude, we can revisit workflow. The worst workflow is a terse, one shot prompt. Earlier I gave my own example "upgrade from mockito 3 to 5". Given all of the above, a better way to start this task might be a prompt like this:

I want to upgrade this project from mockito 3 to 5. please search the internet, and use the atlassian mcp server to search our confluence wiki, and use the context7 mcp server to get the latest mockito docs, to help me write a plan to do this. the plan should clearly list all changes between mockito version 3 and 5. in particular, it should highlight changes that will cause compilation failures, such as classes that have been removed, methods removed or classes moved to different packages. separately it should highlight any runtime behaviour changes. for any runtime behaviour change, please search our code and tell me if we will be affected and if so, what changes to our code will be required. I want to make the minimal amount of changes possible, so if functionality is deprecated, but not removed yet, please highlight in the plan, but the plan should state it will not be updated.

Writing the plan is unlikely to be a one step process. Claude will come back with an initial plan which you can review. You can tell it to remove things, or highlight things it has missed that need to be assessed and added. Then once you are happy with the plan, you can write it to a markdown doc, use /clear to clear your context, disable unnecessary MCP servers to free up context, and then use @ to reload the plan into the context. You now have as much free context as possible to actually execute the plan.

Wrap up

I hope this has given you a good overview of the basics of Claude Code setup and workflow, and that you will enjoy learning Claude and gradually refining your own workflow. There are several topics I haven't covered yet, such as skills, sub agents and plugins. Time permitting, I will cover these in a future post!

Posted in AI, Claude Code

Gradle properties

Using properties in Gradle can be confusing for a newcomer, because there are different sorts of properties, and some complexities in how they apply to a Gradle build. Firstly, we can use three different types of property:
  1. Gradle properties – specified with the -P flag.
  2. Java system properties – specified with the -D flag.
  3. Environment variables.
Gradle -P properties? What are these? Let's take a look.

Gradle -P properties

You can use Gradle properties on the command line to conditionally change a build. These properties are specified with a -P flag. Do NOT confuse them with Java system properties, which are set with the -D flag! Gradle properties are the correct choice for making parts of your build conditional, e.g. to enable / disable tests like this:
test {
    // disable tests by default
    onlyIf { project.hasProperty('functionalTests') && project.property('functionalTests') == 'true' }
}
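With this configuration, the tests are skipped unless you opt in on the command line:

./gradlew test -PfunctionalTests=true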

Setting properties for tests

Neither -P nor -D properties will be passed to tests by default, as the tests run in a forked JVM which does not get any of the properties of the main JVM.

To pass properties to the forked JVM used for tests, you must either “forward” them (for properties you are expecting to be dynamic, which are specified on the command line), or just set them directly in the test configuration for either JUnit or TestNG.

To forward the value of a system property set on the command line, use the systemProperty method in your test configuration block, reading the value from the system property supplied to the main Gradle JVM, like this:

test {
    // forward properties to the forked JVM
    systemProperty "docker.username", System.getProperty("docker.username")
    systemProperty "docker.password", System.getProperty("docker.password")
}
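You can forward a Gradle -P property in the same way: read it from the project and pass it to the forked JVM as a system property. A minimal sketch, reusing the hypothetical functionalTests property from above:

test {
    // only forward the -P property if it was actually supplied
    if (project.hasProperty('functionalTests')) {
        systemProperty 'functionalTests', project.property('functionalTests')
    }
}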
Posted in Gradle

Improving Java build speed with Develocity

We have recently started using the Develocity tool for our builds. I really love it. It is a build acceleration tool made by the company that creates Gradle. It was previously called Gradle Enterprise, but has been renamed to make clear that it works for both Maven and Gradle. It offers these features:

  1. Metrics for every build, showing what tasks were performed and how long each one took.
  2. Remote caching.
  3. Test distribution.
  4. Predictive test selection, e.g. only running a reduced test set on feature branches.
  5. Test failure analytics, e.g. a dashboard and automatic retries of flaky tests.

It works for BOTH CI and local builds. So if code has previously built on your CI server and the results from each task have been stored in the remote cache, local dev builds will not have to repeat all those tasks! However, work may be required for remote caching to be available – task results can only be cached if the task has correctly defined inputs. But using Develocity makes it a lot easier to check your tasks and understand if they need changes to make them cacheable.
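To give a flavour of the setup, enabling the remote cache with the Develocity Gradle plugin looks roughly like this in settings.gradle – a sketch only, with a placeholder plugin version and server URL, so check the Develocity docs for your environment:

plugins {
    id 'com.gradle.develocity' version '3.17'
}

develocity {
    server = 'https://develocity.mycompany.com'
}

buildCache {
    remote(develocity.buildCache) {
        enabled = true
        // typically only CI builds are allowed to push to the remote cache
        push = System.getenv('CI') != null
    }
}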

The front page of Develocity shows your list of builds. Click on a build to see a summary, then a list of tasks, which you can order by longest task, so you can start optimising. The performance tabs show you cache usage, so you can figure out if tasks aren't being retrieved from cache correctly.

The build list also contains a really nice example of a build running quickly because most tasks were retrieved from cache: the first build took 28 minutes, while the second build, which only had to rebuild a couple of modules, took 2 minutes!

For more info on Develocity, see:

https://gradle.com/develocity

Or if you would like to read more about how to fix a Gradle task that cannot be cached, see the post I wrote when we were implementing Develocity. For tasks to be cacheable, they first have to be incremental – have correctly defined inputs and outputs:

Gradle incremental tasks and builds

Posted in Gradle, Java

CXF Restful web server example

I’ve created an example of how to use Apache CXF and Spring together to create a restful web service:
https://github.com/hedleyproctor/cxf-restful-server-example

This example shows the key steps in creating a restful web server:

  • Create a rest service interface class, which you annotate with @WebService.
  • Your rest service interface class contains a method signature for each operation. Each one is annotated to say what its path is and the data format for request and response.
  • You write an implementation class that contains the code to be executed when each endpoint is hit.
  • In Spring XML configuration, you define the server endpoint, configuring it with your rest service interface(s) and any necessary data conversion classes.

So in my example, the rest interface looks like this:

 
package org.example;

import com.ice.fraud.Claim;

import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.jws.WebService;

@Path("/hello")
@WebService
public interface HelloWorldRestService
{
    @GET
    @Path("/greet")
    @Produces(MediaType.TEXT_PLAIN)
    public Response greet();

    @POST
    @Path("/sayhello")
    @Produces(MediaType.APPLICATION_JSON)
    public Response sayHello(String input);

    @POST
    @Path("/submit")
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response submit(Claim claim);
}

Then each method is implemented in the implementation class, to return the appropriate data and an http response code:

 
package org.example;

import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

public class HelloWorldRestServiceImpl implements HelloWorldRestService
{
    public Response greet() {
        return Response.status(Status.OK).
                entity("Hi There!!").
                build();
    }

    // sayHello and submit are implemented in the same way, each returning a Response
}

The server definition is in a Spring XML configuration file called cxf-beans.xml. You give CXF a list of your service beans, and the data providers you need to use. In this example, the data format is JSON, so the data provider is the JacksonJsonProvider class.

 
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:cxf="http://cxf.apache.org/jaxrs"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://cxf.apache.org/jaxrs
       http://cxf.apache.org/schemas/jaxrs.xsd">

    <import resource="classpath:META-INF/cxf/cxf.xml" />
    <import resource="classpath:META-INF/cxf/cxf-servlet.xml" />

    <bean id="helloWorldRestService" class="org.example.HelloWorldRestServiceImpl" />

    <!-- By default, when CXF starts a server it starts an instance of the Jetty java web server. -->
    <!-- The endpoint for a CXF operation is composed of three sections: -->
    <!-- 1. base URL - defined here -->
    <!-- 2. service path - defined by the @Path annotation at the top of your service class -->
    <!-- 3. operation path - defined by the @Path annotation on each method -->
    <cxf:server id="helloServer" address="http://localhost:8080/cxf-rest">
        <cxf:serviceBeans>
            <ref bean="helloWorldRestService" />
        </cxf:serviceBeans>
        <cxf:providers>
            <bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" />
        </cxf:providers>
    </cxf:server>

</beans>

The repo also contains an integration test written in JUnit 5 which shows how to test the services using an Apache http client.
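For example, a test of the greet endpoint might look like this – a minimal sketch using the Apache HttpClient 4 API, assuming the server is running on the address configured in cxf-beans.xml (the class and method names here are illustrative, not from the repo):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class HelloWorldRestServiceIT {

    @Test
    public void greetReturnsGreeting() throws Exception {
        // call the greet endpoint and check both the status code and the body
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(
                     new HttpGet("http://localhost:8080/cxf-rest/hello/greet"))) {
            assertEquals(200, response.getStatusLine().getStatusCode());
            assertEquals("Hi There!!", EntityUtils.toString(response.getEntity()));
        }
    }
}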

For other Java integration topics, see:
Error handling in Apache Camel
Type conversion exceptions in Apache Camel

Posted in CXF, Java

IntelliJ Plugin Development Cookbook

This post is intended to help people writing IntelliJ plugins, with a focus on plugins for Java projects. Although the IntelliJ docs are pretty good, this post collects together lots of small “how to” instructions that I’ve found useful when writing a custom plugin.

Introduction

IntelliJ Plugin Development docs:
https://www.jetbrains.org/intellij/sdk/docs/welcome.html
https://www.jetbrains.org/intellij/sdk/docs/basics/getting_started.html

It is really important to understand the different ways you can interact with files:

  1. As VirtualFiles, which, confusingly, are the files on the filesystem.
  2. As structured, parsed files, called PSI files, e.g. a Java file is a PSI file.
  3. As files open in the Editor.
  4. As XML files, which are a specific kind of PSI file.
IntelliJ file docs:
https://www.jetbrains.org/intellij/sdk/docs/basics/architectural_overview/virtual_file.html
https://www.jetbrains.org/intellij/sdk/docs/basics/architectural_overview/documents.html
https://www.jetbrains.org/intellij/sdk/docs/reference_guide/editors.html

Structured files like Java or XML are called PSI:
https://www.jetbrains.org/intellij/sdk/docs/basics/architectural_overview/psi.html
https://www.jetbrains.org/intellij/sdk/docs/basics/psi_cookbook.html

How to add an action to a menu

In your plugin.xml:
 
<actions>
    <action id="com.ice.refactor.RemoveFacadeUsageAction" class="com.ice.refactor.RemoveFacadeUsageAction" text="Remove Facade Usage"
            description="Remove facade usage">
        <add-to-group group-id="RefactoringMenu4" anchor="last"/>
    </action>
</actions>
However, by far the easiest way to do this is to use the inspection / quick fix support: if you have an action class that is not yet added to the plugin config, right click on the class name and IntelliJ will open a dialog box to help you add the config. Really cool!

How to add settings to your plugin

See this useful post: Adding plugin settings

Essentially:

  • Create a Configuration class
  • Create a Form class that it will bind to
  • Use the Visual Editor to design the form
  • Use PropertiesComponent.getInstance() to save simple properties like booleans and strings

Find the PSI element under the caret

final Editor editor = event.getData(CommonDataKeys.EDITOR);
int offset = editor.getCaretModel().getOffset();
// combine the flags saying what kinds of element we will accept
int findElementFlags = TargetElementUtil.REFERENCED_ELEMENT_ACCEPTED | TargetElementUtil.ELEMENT_NAME_ACCEPTED | TargetElementUtil.LOOKUP_ITEM_ACCEPTED;
TargetElementUtil targetElementUtil = new TargetElementUtil();
PsiElement psiElement = targetElementUtil.findTargetElement(editor, findElementFlags, offset);

Getting the text under the caret

final Editor editor = event.getData(CommonDataKeys.EDITOR);
int offset = editor.getCaretModel().getOffset();
Document document = editor.getDocument();
// TextRange.from takes a start offset and a length, so this grabs the 40 characters around the caret
String textUnderCaret = document.getText(TextRange.from(offset - 10, 40));

Getting the VirtualFile from the Document in the editor

 
VirtualFile virtualFile = FileDocumentManager.getInstance().getFile(document);

See: Virtual file

Find classes implementing an interface

 
PsiClass interfaceClass = (PsiClass)psiElement;
PsiElement psiImplClass = DefinitionsScopedSearch.search(interfaceClass).findFirst();
PsiClass facadeImplementationClass = (PsiClass)psiImplClass;

Find usages of a method

Collection<PsiReference> usages = ReferencesSearch.search(myMethod).findAll();

Find children of a PSI element

PsiTreeUtil.findChildOfType

Add an annotation to a Java class

 
JavaPsiFacade javaPsiFacade = JavaPsiFacade.getInstance(event.getProject());
PsiElementFactory elementFactory = javaPsiFacade.getElementFactory();
PsiAnnotation inputAnnotation = elementFactory.createAnnotationFromText(annotationText, psiClass);
// Note that we pass in the desired annotation, but another java object is actually created and added to the target.
// This is what we must pass back to our caller.
PsiAnnotation actualAnnotation = (PsiAnnotation)targetLocation.addAfter(inputAnnotation, targetLocation);

Show a message / error dialog

Messages.showMessageDialog(project, "Element under caret is not a Java class name", "Refactoring Plugin", Messages.getErrorIcon());

Permit message dialogs in tests

TestDialogManager.setTestDialog(TestDialog.OK);

Write messages to the event log

Notifications.Bus.notify(new Notification("your-plugin-group", "YourActionName", message, NotificationType.INFORMATION));

Load an XML file

Simply load as a PsiFile and then cast to XmlFile.
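For example, a minimal sketch, assuming you already have the Project and the VirtualFile:

// look up the PSI file for the virtual file, then treat it as structured XML
PsiFile psiFile = PsiManager.getInstance(project).findFile(virtualFile);
if (psiFile instanceof XmlFile) {
    XmlFile xmlFile = (XmlFile) psiFile;
    // work with the structured content, e.g. xmlFile.getRootTag()
}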
Posted in IntelliJ, Java

Gradle dependencies tutorial

Gradle has very powerful dependency management features. In this tutorial I will walk through creating a multi module Java project, and explain:
  • How the api and implementation dependencies work
  • How to create a custom dependency resolution strategy to:
    1. Hard fail if an unwanted dependency is found
    2. Fix a dependency version
    3. Globally exclude a dependency

I will use IntelliJ for this tutorial. Start by creating a new Gradle project. I'm using Groovy as the Gradle DSL language, and the Gradle wrapper.

The Gradle wrapper means that the project includes a small jar that will bootstrap the build process. You don't need a version of Gradle installed; rather, the build will download the correct version. In my project, IntelliJ has generated the wrapper using version 7.6 of Gradle. I want to use a more recent version, so open the file gradle/wrapper/gradle-wrapper.properties and change the distributionUrl to a later version. I'm using version 8.6.
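The updated line looks like this:

distributionUrl=https\://services.gradle.org/distributions/gradle-8.6-bin.zip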

api and implementation dependency configurations

Next create two sub projects. I’ll call these “data-access” and “service”. The idea is to simulate a multi module project, representing a layered application, where one module performs data access, using Hibernate, and the other module is the service layer, which has no direct knowledge of the Hibernate data access layer.

If you have added the sub-projects using IntelliJ, they should automatically be added to the settings.gradle file at the root of the project, but you can open this file and check.

In the data-access project, add the java-library plugin to the build.gradle file:

plugins {
    id 'java-library'
}
Now in the dependencies section, let's add Spring and Hibernate:
dependencies {
    api 'org.springframework:spring-core:6.0.11'
    api 'org.springframework:spring-context:6.0.11'
    implementation 'org.hibernate.orm:hibernate-core:6.5.2.Final'
}
We've added Spring as an api dependency, and Hibernate as an implementation dependency, the difference being:
  • api – the dependency is also put on the compile classpath of consuming modules
  • implementation – the dependency will be packaged into the final application, but is not on the compile classpath of consuming modules
I think this is a great feature of Gradle, so much more powerful than Maven. It allows you to prevent later modules from being polluted by dependencies added for earlier modules. Let's test out the access to these dependencies. In your data-access module, you can add a class that uses both libraries:
import org.hibernate.SessionFactory;
import org.springframework.context.ApplicationContext;

public class CustomerDAO {

    public void getCustomer(Long id) {
        SessionFactory sessionFactory = null;
        ApplicationContext applicationContext = null;
    }
}
Now build your project and confirm it works. (In the right nav, from your Gradle tab, you should be able to see Tasks -> build -> build.)

Now let’s add a class in the service project, and confirm that we can see Spring, but not Hibernate. In the service project build.gradle, add a dependency on the data-access project:

dependencies {
    implementation(project(':data-access'))
}
Now let’s try and add a class to the service project which uses both Spring and Hibernate:
import org.hibernate.SessionFactory;
import org.springframework.context.ApplicationContext;

public class CustomerService {

    public void processCustomer(Long id) {
        ApplicationContext applicationContext = null;
        SessionFactory sessionFactory = null;
    }

}
When you try and build this, you should get an error:
package org.hibernate does not exist
import org.hibernate.SessionFactory;
                    ^
Success! This proves that even though the data-access module is using Hibernate, and the service module depends on this module, the service module cannot use Hibernate code itself. The dependency has not bled into the service module. This is a great way to avoid accidental usage of dependencies from other modules. Now that we have proved this, delete the references to Hibernate from the CustomerService class and confirm the project can build again.

Throwing an error if a dependency is found

For the second part of this tutorial, I want to explain how you can customise dependency resolution. Again, Gradle has far more powerful mechanisms for doing this than Maven does. Firstly, let’s start by understanding what dependencies the project uses. On the command line, you can run:

./gradlew dependencies
This will show a number of different configurations, but most are blank, with only a couple of test configurations having dependencies. What is going on? The answer is that this command has only shown you dependencies for the top level project – not any of the sub projects. To see the dependencies for the data-access project, type:
./gradlew :data-access:dependencies
You should now see a much longer list, with the Spring and Hibernate dependencies included. So for Spring, as well as spring-core and spring-context, the list will show all the transitive dependencies, such as spring-jcl. Suppose we didn’t want spring-jcl, how could we detect it was being used? The answer is to write a custom dependency resolution strategy. In the top level build.gradle file, add the following:
allprojects { Project project ->
    configurations.all {
        println "Configuration: ${name}"
        resolutionStrategy.eachDependency { DependencyResolveDetails details ->
            println "Group: ${details.requested.group} Artifact: ${details.requested.name}"
            if (details.requested.group == 'org.springframework' && details.requested.name == 'spring-jcl') {
                throw new RuntimeException("Don't want spring-jcl")
            }
        }
    }
}
Now try and build your project again. You should get a runtime exception.

Fixing a dependency version

What if you don't want to hard fail, but rather force the version to one specified by you? We can do that by overriding the version in the custom resolution strategy, so the above code becomes:

allprojects { Project project ->
    configurations.all {
        println "Configuration: ${name}"
        resolutionStrategy.eachDependency { DependencyResolveDetails details ->
            println "Group: ${details.requested.group} Artifact: ${details.requested.name}"
            if (details.requested.group == 'org.springframework' && details.requested.name == 'spring-jcl') {
                details.useVersion '6.0.5'
                details.because 'we need v6.0.5'
            }
        }
    }
}
If you rerun the dependencies command for the data-access module, the output should show that the version of spring-jcl has been fixed:
+--- org.springframework:spring-core:6.0.11
|    \--- org.springframework:spring-jcl:6.0.11 -> 6.0.5

Excluding a dependency

What if you simply want to exclude a dependency entirely? In this case, things are simpler. Just use the exclude command in your configurations.all closure:
allprojects { Project project ->
    configurations.all {
        println "Configuration: ${name}"
        exclude group: 'org.springframework', module: 'spring-jcl'
    }
}
You can then rerun the dependencies command and confirm spring-jcl no longer appears in the list.
For more info on Gradle dependencies, see:
https://docs.gradle.org/current/userguide/declaring_dependencies.html
https://docs.gradle.org/current/userguide/dependency_locking.html
https://docs.gradle.org/current/userguide/resolution_strategy_tuning.html

Some of my other posts on Gradle:
Dependencies and configurations in Gradle
Gradle incremental tasks and builds
Gradle Release Plugin
Code coverage with Gradle and Jacoco

Posted in Gradle, Java

Gradle – working with files

When working with files in Gradle, the key classes are:

FileCollection
FileTree – which extends FileCollection

Getting a FileCollection

You can get a file collection by using the files() method, which is always available from the Project object.

FileCollection myFiles = files("someDirectory")

Getting a filtered list of files

If you want a filtered list, it is probably easier to use a FileTree:

FileTree myFiles = fileTree("someDirectory").matching {
	include "*.xsd"
}

Extracting files from a jar / dependency / zip

Sometimes you might need to extract files from a dependency in order to process or consume them. Suppose you have some XSDs in a jar file and you want to extract them. First, define a custom configuration:

configurations {
    myXSDs
}

Then in your dependencies section, assign the jar file to the configuration.
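For example, with hypothetical coordinates for the jar containing the XSDs:

dependencies {
    myXSDs 'com.example:schemas:1.0'
}

Now use the zipTree method in a Copy task to unzip the archive: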

task unzipXSDs(type: Copy) {
    from zipTree(configurations.myXSDs.singleFile).matching {
        include '**/*.xsd'
        include '**/*.xjb'
    }
    into "$buildDir/myModel"
}

Printing files in a single zip / jar

For debugging, you might want to print out the contents of a zip or jar file. You can do this by adding a custom task that uses the zipTree forEach method, like this:

task printJar {
    doLast {
        println "Printing jar files"
        zipTree('/path/to/jar/my.jar').forEach(f -> println f)
    }
}

For the Gradle docs on files, see:

https://docs.gradle.org/current/userguide/working_with_files.html

The FileCollection and FileTree classes both have good JavaDocs:

FileCollection

FileTree

Some of my other posts on Gradle:

Dependencies and configurations in Gradle

Gradle incremental tasks and builds

Gradle Release Plugin

Code coverage with Gradle and Jacoco

Posted in Gradle, Java

Debugging Gradle

If you are new to any tool or technology, knowing how to debug when things go wrong is a really important skill. This post gives some beginner tips on how to debug Gradle builds. Note: in the commands below, I'm assuming you are invoking Gradle via the Gradle wrapper, so all commands start with "gradlew". If you aren't using the wrapper, this would just be "gradle".

Logging

By default, Gradle logs at the "lifecycle" level, which only shows key information. Use the -i flag to get info level logs, or -d for debug.

Knowing what tasks are available

If you are a beginner, sometimes you don't even know what tasks are available to you in the current build. Simply run "gradlew tasks" and it will list the available tasks; add --all to also see tasks that aren't assigned to a group.

Running a task in a single module

gradlew :module:sub-module:task

Seeing dependencies

gradlew dependencies

This is for the top level module. For a sub module, run the dependencies task in that sub module:

gradlew :module:sub-module:dependencies

If you want to add a task to your build that will print the dependencies for all modules, you can do this with an allprojects closure in your top level build.gradle Groovy file:

allprojects {
    task printAllDependencies(type: DependencyReportTask) {}
}

Debugging your own Gradle build scripts

In IntelliJ, you can right click on a Gradle task in the right nav and select the “Debug” option.
Alternatively, you can start Gradle in remote debug by adding:
-Dorg.gradle.debug=true
to the startup properties.

Debugging gradle core

This can be done provided you have the full distribution in use, not just the binary. So in the gradle/wrapper/gradle-wrapper.properties file, set the distro:

distributionUrl=https\://services.gradle.org/distributions/gradle-7.4.2-all.zip

Debugging a third party plugin

There is currently a bug in IntelliJ whereby it will not find the source for a third party plugin. You can work around this by temporarily adding the plugin as a regular dependency to your app / module.


For Gradle docs on debugging, see:

https://docs.gradle.org/current/userguide/logging.html

https://docs.gradle.org/current/userguide/troubleshooting.html

https://docs.gradle.org/current/userguide/viewing_debugging_dependencies.html

Some of my other posts on Gradle:

Dependencies and configurations in Gradle

Gradle incremental tasks and builds

Gradle Release Plugin

Code coverage with Gradle and Jacoco

Posted in Gradle, Java

Error handling in Apache Camel

Scenarios

When coding an integration with Apache Camel, we need to be able to deal with many different kinds of error:
  • A bug / error in our own code, before we have communicated with the remote service.
  • Getting a connection to the remote service, but it returning an error response / code.
  • Failing to connect to the remote service.
  • An error inside our error handling code, e.g. an exception thrown while we are inside a try catch block.
  • A message is retried from the DLQ, but a later message has already been sent.
  • The power goes down after a message has been picked up.

Coding options available

We have multiple options for handling errors:
  1. Try catch block – for catching errors where we can do something useful, e.g. update a status to failed.
  2. On exception – can be used with a retry policy. This will save the current message state and what step failed. It will then block the consumer thread and retry from the failed step when the configured redelivery time is up. This should not be used with long redelivery time periods as the thread is blocked. See https://camel.apache.org/manual/exception-clause.html
  3. Error handler – similar to on exception, can be used with a retry policy that will save the message state and the failing step, and retry. However, with onException you can specify different policies for different exception types. See https://camel.apache.org/manual/error-handler.html
  4. JMS retries. These are configured in ActiveMQ rather than Camel. In this case, the message is retried from the start of the route. This is useful if you want to retry after a long redelivery period, like 10 minutes, as the AMQ consumer thread is not blocked. Also, unlike on exception and an error handler, each time the broker retries the message, it increments a header. This means we can write logic to detect when a message has been retried a certain number of times, and then invoke error handling. (See below.)
  5. A filter that detects when retries have exceeded a threshold and invokes error handling logic. If the message has been retried multiple times, this suggests that it will never succeed, e.g. the input data is invalid. We don't want it to constantly fail back to the DLQ, so we can detect the number of retries and then invoke whatever error handling logic is appropriate.
  6. JDBC save points. Within a single transaction you can record save points, and perform a partial rollback to one of these if an error occurs.

The approach you take to errors depends on whether you think the integration module needs to do anything when an error occurs. If there is nothing useful that the module can do, you can permit the message to go straight to the DLQ. If you need to implement some error handling in the module, you can wrap the integration code in a doTry block, with one or more doCatch blocks for the error conditions.

On exception

Can be used with a short redelivery period to make the route retry. This is useful in the case of a temporary network problem. You place the config inside your camel context, but outside of the routes.
NOTE: This does NOT retry from the start of the route! Camel saves a copy of the message with all headers and properties, and retries the step that failed!
<onException>
    <exception>java.io.IOException</exception>
    <redeliveryPolicy redeliveryDelay="10000" maximumRedeliveries="5"/>
    <handled>
        <constant>false</constant>
    </handled>
    <log loggingLevel="ERROR" message="Failure processing, giving up with exception: ${exception}"/>
</onException>

Using doCatch and doWhen

Camel actually has a more powerful catch mechanism than plain Java, as you can specify not just an exception type, but additional conditions. e.g.
<doTry>
    <to uri="cxfrs:bean:someCxfClient" pattern="InOut"/>
    <doCatch>
        <exception>org.apache.camel.component.cxf.CxfOperationException</exception>
        <onWhen>
            <simple>${exception.statusCode} == 401</simple>
        </onWhen>
        <!-- handle the 401 in here -->
    </doCatch>
</doTry>
Note you MUST use onWhen here. If you use something like filter or choice instead, you have a code bug – you will catch all exceptions of the specified type, but only handle some of them!

Catching an error, making changes and retrying

This is a common pattern. Consider a route which does the following:
  1. Performs a login.
  2. Stores login credentials in cache.
  3. Connects to remote service.
We always need to consider the possibility that the cached credentials may be invalid. In this case, we need to:
  1. Identify the specific error or return code that says the credentials are invalid.
  2. Clear the cache.
  3. Login again.
  4. Retry the remote call.
You might think that you could catch the error, clear the cache, and then rethrow the exception so that your error handler will retry the message. This won’t work! If you have added the new login details to the message, they will be ignored, because the error handler will be retrying with the saved copy of the message that was the input to the failing step! There are two ways to deal with this problem:
  1. Manual approach. Simply catch the error, make the appropriate changes, then call the remote service again. You may need to set up headers for the remote service, so you might find this easier if you move the code for setting up the headers and making the call to its own route.
  2. Group together the steps you need to be rerun into their own route, and mark this as having no error handler. Then the exception gets propagated back up to the calling route. When this gets handled by an error handler, it will retry from the start of the second route. In the caching example, you would group together getting the auth token from the cache and then making the remote call.
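A sketch of the second approach in the Spring XML DSL – the route ids and uris here are illustrative, not from a real integration:

<!-- no-op error handler, so exceptions propagate back to the calling route -->
<errorHandler id="noErrorHandler" type="NoErrorHandler"/>

<route id="callRemoteService" errorHandlerRef="noErrorHandler">
    <from uri="direct:callRemoteService"/>
    <!-- get the auth token from the cache, then make the remote call -->
    <to uri="direct:getAuthTokenFromCache"/>
    <to uri="cxfrs:bean:someCxfClient" pattern="InOut"/>
</route>

When the remote call fails, the exception propagates to the calling route, and its error handler retries from the start of this inner route, re-reading the token from the cache.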

Filter to detect number of retries

When you retry from the AMQ broker, it updates a message header with the retry number. You can use this to detect when a certain number of retries have been attempted, and invoke your error handling code. You should place this filter as close as possible to the top of your route. If not, the code could fail again before hitting the filter, and end up in the DLQ. Sample:
<log id="logRetryCount" loggingLevel="INFO" message="JMS delivery count: ${header.JMSXDeliveryCount}"/>
<filter>
    <simple>$simple{header.JMSXDeliveryCount} > {{jms.max.retries}}</simple>
    <log message="Retry limit has been reached" loggingLevel="INFO"/>
<!-- do your error handling in here, like setting the status to failed, or placing the message on a dedicated failure queue -->
    <stop/>
</filter>
I found in testing that when I tried to set the JMS header, it seemed to be ignored. However, you can set it in your test by using AdviceWith to weave in extra code. As long as you have a step in the route with an id, you can insert extra code before or after it. In the sample above, we have a log statement just before the filter, so in the test code we have:
AdviceWith.adviceWith(camelContext, "myCamelRoute", a -> {
    a.weaveById("logRetryCount").before()
            .process(exchange -> {
                if (maxRetriesExceeded) {
                    exchange.getIn().setHeader("JMSXDeliveryCount", 6);
                }
            });
});
Setting the header to the retry count only makes sense if the route handles the error. But if you need the message in the DLQ, you would re-throw it. To test this behaviour, just configure a parameter for checking the delivery count and set it to 2 for tests which retry exactly once.

Exceptions and error codes

When throwing exceptions, generally each different exception should have a different message, and potentially a unique error code. This makes it far easier to debug real failures, as you can easily find the section of code which threw the exception.

See Also

Other posts on Camel: Type conversion in Camel
Posted in Camel, Java

Gradle incremental tasks and builds

One of the things that makes a build efficient is when it can run incrementally, i.e. if you have already run one build, and you change things and run another, the second build should only have to rerun the tasks whose inputs have changed. Gradle has great support for this. I recently came across an example while migrating a large build from Maven to Gradle. In this build, we have three steps that do the following:
  1. Generate JAXB java classes from XSDs
  2. Precompile these classes plus a small number of other classes
  3. Run a custom annotation processor which will add json annotations to the generated code

The custom annotation task is defined in a separate git repository, and it needed a custom classpath. I don't believe you can change the classpath for normal tasks, but if you use a JavaExec task to run a class in a new JVM, you can configure the classpath as you wish. Hence this is the setup I used. It looked like this:

tasks.register('jacksonAnnotationTask', JavaExec) {

    classpath = sourceSets.main.compileClasspath
    classpath += files("$buildDir/classes/java/generatedSource")
    classpath += configurations.jacksonAnnotationTaskClasspath

    mainClass = 'com.ice.integration.gradle.JacksonAnnotationTask'

    args = ["$buildDir/generated/jaxb/java/claimsApiModelClasses", "com"]
}

These steps all happen before the main compile. When I did a compile, then repeated it, I was disappointed to see that the custom annotation task was rerun. What was going on?

How do you see what is going on with a Gradle build? The easiest thing to do is rerun with the -d debug flag. Once I did this, the problem was obvious – the task was rewriting the generated source files in place – therefore the inputs to the task had changed, therefore the task had to be rerun. Once I understood this, the route to fix it is clear – the task should output the updated files in a new location. I updated the task code to do this, adding a third parameter to specify the output directory. Then I updated the JavaExec config to specify the input and output, like this:

tasks.register('jacksonAnnotationTask', JavaExec) {
    // we must declare inputs and outputs
    // otherwise Gradle will just rerun this task every time
    String outputDirectory = "$buildDir/generated/jaxb/java/claimsApiModelClassesWithJsonAnnotations"
    inputs.files(sourceSets.generatedSource.java)
    outputs.dir(outputDirectory)

    classpath = sourceSets.main.compileClasspath
    classpath += files("$buildDir/classes/java/generatedSource")
    classpath += configurations.jacksonAnnotationTaskClasspath

    mainClass = 'com.ice.integration.gradle.JacksonAnnotationTask'

    args = ["$buildDir/generated/jaxb/java/claimsApiModelClasses", "com", outputDirectory]
}

Once I made this change, rerunning the compile task told me that all tasks were up to date, nothing to be rerun! Fantastic!

For more information on incremental builds, see:
https://docs.gradle.org/current/userguide/incremental_build.html

For other blog posts on Gradle, see:
Dependencies and configurations in Gradle
Gradle release plugin
Using test fixtures in Gradle and Maven
Code coverage with Gradle and Jacoco

Posted in Gradle, Java