The ExecutorService in Java provides a flexible and efficient framework for asynchronous task execution. It abstracts away the complexities of managing threads manually and allows developers to focus on the logic of their tasks.

## Overview

The `ExecutorService` interface is part of the `java.util.concurrent` package and represents an asynchronous task execution service. It extends the `Executor` interface, which defines a single method, `execute(Runnable command)`, for executing tasks.

## Executors

`Executors` is a utility class in Java that provides factory methods for creating and managing different types of `ExecutorService` instances. It simplifies the process of instantiating thread pools and allows developers to easily create and manage executor instances with various configurations. The `Executors` class provides several static factory methods for creating different types of executor services:

**FixedThreadPool**: Creates an `ExecutorService` with a fixed number of threads. Tasks submitted to this executor are executed concurrently by the specified number of threads. If a thread is idle and no tasks are available, it remains alive but dormant until needed.

```java
ExecutorService executor = Executors.newFixedThreadPool(5);
```

**CachedThreadPool**: Creates an `ExecutorService` with an unbounded thread pool that automatically adjusts its size based on the workload. Threads are created as needed and reused for subsequent tasks. If a thread remains idle for a certain period, it may be terminated to reduce resource consumption. In a cached thread pool, submitted tasks are not queued but immediately handed off to a thread for execution; if no thread is available, a new one is created. If a server is so heavily loaded that all of its CPUs are fully utilized, and more tasks arrive, more threads will be created, which will only make matters worse. The idle time of threads defaults to 60 seconds, after which a thread without a task is terminated.
Therefore, on a heavily loaded production server you are much better off using `Executors.newFixedThreadPool`, which gives you a pool with a fixed number of threads, or using the `ThreadPoolExecutor` class directly for maximum control.

```java
ExecutorService executor = Executors.newCachedThreadPool();
```

**SingleThreadExecutor**: Creates an `ExecutorService` with a single worker thread. Tasks are executed sequentially by this thread in the order they are submitted. This executor is useful for tasks that require serialization or have dependencies on each other.

```java
ExecutorService executor = Executors.newSingleThreadExecutor();
```

**ScheduledThreadPool**: Creates an `ExecutorService` that can schedule tasks to run after a specified delay or at regular intervals. It provides methods for scheduling tasks with a fixed delay or at a fixed rate, allowing for the periodic execution of tasks.

**newWorkStealingPool**: Creates a work-stealing thread pool with the target parallelism level. This executor is based on the `ForkJoinPool` and is capable of dynamically adjusting its thread pool size to utilize all available processor cores efficiently.

Overall, the `Executors` class simplifies the creation and management of executor instances.

## ExecutorService

Tasks can be submitted to an `ExecutorService` for execution. These tasks are typically instances of `Runnable` or `Callable`, representing units of work that need to be executed asynchronously. Below are the methods in `ExecutorService`.

1. `execute(Runnable command)`: Executes the given task asynchronously.

```java
ExecutorService executor = Executors.newFixedThreadPool(5);
executor.execute(() -> {
    System.out.println("Task executed asynchronously");
});
```

2. `submit(Callable<T> task)`: Submits a task for execution and returns a `Future` representing the pending result of the task.

```java
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<Integer> future = executor.submit(() -> {
    // Task logic
    return 42;
});
```
3. `shutdown()`: Initiates an orderly shutdown of the `ExecutorService`, allowing previously submitted tasks to execute before terminating.

4. `shutdownNow()`: Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution.

```java
List<Runnable> pendingTasks = executor.shutdownNow();
```

5. `awaitTermination(long timeout, TimeUnit unit)`: Blocks until all tasks have completed execution after a shutdown request, the timeout occurs, or the current thread is interrupted, whichever happens first.

```java
boolean terminated = executor.awaitTermination(10, TimeUnit.SECONDS);
if (terminated) {
    System.out.println("All tasks have completed execution");
} else {
    System.out.println("Timeout occurred before all tasks completed");
}
```

6. `invokeAny(Collection<? extends Callable<T>> tasks)`: Executes the given tasks, returning the result of one that successfully completes. This method is useful when we have multiple tasks to run but only care about the result of whichever one completes first. All other tasks are cancelled.

```java
ExecutorService executor = Executors.newCachedThreadPool();
Set<Callable<String>> callables = new HashSet<>();
callables.add(() -> "Task 1");
callables.add(() -> "Task 2");
String result = executor.invokeAny(callables);
System.out.println("Result: " + result);
```

7. `invokeAll(Collection<? extends Callable<T>> tasks)`: Executes the given tasks, returning a list of `Future` objects representing their pending results.

```java
List<Callable<Integer>> tasks = Arrays.asList(() -> 1, () -> 2, () -> 3);
List<Future<Integer>> futures = executor.invokeAll(tasks);
for (Future<Integer> future : futures) {
    System.out.println("Result: " + future.get());
}
```

## Implementations

The `ExecutorService` interface is typically implemented by various classes provided by the Java concurrency framework, such as `ThreadPoolExecutor`, `ScheduledThreadPoolExecutor`, and `ForkJoinPool`.
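The submission and shutdown methods above can be combined into a short, self-contained example (class and variable names are illustrative):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ExecutorLifecycleDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // submit() returns a Future we can block on for the result
        Future<Integer> square = executor.submit(() -> 6 * 7);
        System.out.println("square = " + square.get());

        // invokeAll() waits for every task and returns their Futures in submission order
        List<Callable<Integer>> tasks = List.of(() -> 1, () -> 2, () -> 3);
        List<Future<Integer>> results = executor.invokeAll(tasks);
        int sum = 0;
        for (Future<Integer> f : results) {
            sum += f.get();
        }
        System.out.println("sum = " + sum);

        // Orderly shutdown: no new tasks are accepted, queued tasks still run
        executor.shutdown();
        System.out.println("terminated = " + executor.awaitTermination(5, TimeUnit.SECONDS));
    }
}
```

Because `shutdown()` lets in-flight tasks finish, `awaitTermination` returns `true` here almost immediately.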
## Considerations

- Configure the thread pool size carefully to avoid underutilization or excessive resource consumption.
- Consider factors such as task submission rate, task priority, resource constraints, and the desired behavior in case of queue overflow.
- Choose the queue type that best meets your application's requirements for scalability, performance, and resource utilization.
- Handle exceptions and task cancellation properly to ensure robustness and reliability.
- Understand the concurrency semantics and potential thread-safety issues in concurrent code.

When creating an `ExecutorService`, we can pass a `ThreadFactory` (and a task queue) to be used while creating the pool. A `ThreadFactory` is an interface used to create new threads. It encapsulates the logic for creating threads, decoupling the thread creation process from the rest of the application logic and making it easier to manage and customize. Passing a custom thread factory is preferred, as it helps in setting a thread name prefix and priority if required.

```java
static final String PREFIX = "app.name.task";

ExecutorService executorService = Executors.newFixedThreadPool(5, r -> {
    Thread t = new Thread(r);
    t.setName(PREFIX + "-" + t.getId()); // Customize thread name if needed
    return t;
});
```

## Task Queues

When tasks are submitted to an `ExecutorService` and none of the threads in the pool is available to process them, they are stored in a queue. Below are the different queue options to choose from.

**Unbounded Queue**: An unbounded queue, such as `LinkedBlockingQueue`, has no fixed capacity and can grow dynamically to accommodate an unlimited number of tasks. It is suitable for scenarios where the task submission rate is unpredictable or where tasks need to be queued indefinitely without the risk of rejection due to queue overflow.
However, keep in mind that unbounded queues can potentially lead to memory exhaustion if tasks are submitted faster than they can be processed.

**Bounded Queue**: A bounded queue, such as `ArrayBlockingQueue` with a specified capacity, has a fixed size limit and can only hold a finite number of tasks. It is suitable for scenarios where resource constraints or backpressure mechanisms need to be enforced to prevent excessive memory usage or system overload. Tasks may be rejected or handled according to a specified rejection policy when the queue reaches its capacity.

**Priority Queue**: A priority queue, such as `PriorityBlockingQueue`, orders tasks based on their priority or a specified comparator. It is suitable for scenarios where tasks have different levels of importance or urgency, and higher-priority tasks need to be processed before lower-priority ones. Priority queues ensure that tasks are executed in the order of their priority, regardless of their submission order.

**Synchronous Queue**: A synchronous queue, such as `SynchronousQueue`, is a special type of queue that enables one-to-one task handoff between producer and consumer threads. It has a capacity of zero and requires both a producer and a consumer to be available simultaneously for a task exchange to occur. Synchronous queues are suitable for scenarios where strict synchronization and coordination between threads are required, such as handoff between thread pools or bounded resource access.

## ScheduledThreadPool

The `ScheduledThreadPoolExecutor` inherits thread pool management capabilities from `ThreadPoolExecutor` and provides functionality for scheduling tasks to run after a given delay or periodically at defined intervals. Here's a detailed explanation:

- **Runnable and Callable tasks**: You define the tasks you want to schedule using these interfaces, just as with a regular `ExecutorService`.
- **ScheduledFuture**: This interface represents the result of a scheduled task submission.
It allows checking the task's completion status, canceling the task before execution, and (for Callable tasks) retrieving the result upon completion.

### Scheduling Capabilities

- `schedule(Runnable task, long delay, TimeUnit unit)`: Schedules a Runnable task to be executed after a specified delay in the given time unit (e.g., seconds, milliseconds).
- `scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit)`: Schedules a fixed-rate execution of a Runnable task. The task is first executed after the initialDelay, and subsequent executions occur with a constant period between them.
- `scheduleWithFixedDelay(Runnable command, long initialDelay, long delay, TimeUnit unit)`: Schedules a fixed-delay execution of a Runnable task. Similar to scheduleAtFixedRate, but the delay is measured between the completion of the previous execution and the start of the next.

### Key Considerations

- **Thread pool management**: `ScheduledThreadPoolExecutor` maintains a fixed-size thread pool by default. You can configure the pool size during object creation.
- **Delayed execution**: Scheduled tasks are not guaranteed to execute precisely at the specified time. The actual execution time might differ slightly due to factors like thread availability and workload.
- **Missed executions**: With fixed-rate scheduling, if the task execution time exceeds the period, subsequent executions might be skipped to maintain the fixed rate.
- **Cancellation**: You can cancel a scheduled task using the `cancel` method of the returned `ScheduledFuture` object. However, cancellation success depends on the task's state (not yet started, running, etc.).
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledThreadPoolExample {
    public static void main(String[] args) throws InterruptedException {
        // Create a ScheduledThreadPoolExecutor with 2 threads
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // Schedule a task with a 2-second delay
        Runnable task1 = () -> System.out.println("Executing task 1 after a delay");
        scheduler.schedule(task1, 2, TimeUnit.SECONDS);

        // Schedule a task to run every 5 seconds at a fixed rate
        Runnable task2 = () -> System.out.println("Executing task 2 at fixed rate");
        scheduler.scheduleAtFixedRate(task2, 1, 5, TimeUnit.SECONDS);

        // Schedule a task to run every 3 seconds with a fixed delay
        Runnable task3 = () -> System.out.println("Executing task 3 with fixed delay");
        scheduler.scheduleWithFixedDelay(task3, 0, 3, TimeUnit.SECONDS);

        // Wait for some time to allow tasks to be executed
        Thread.sleep(15000);

        // Shut down the scheduler
        scheduler.shutdown();
    }
}
```

## Shut Down ExecutorService Gracefully

To shut down an `ExecutorService` efficiently, you can follow these steps:

1. Call the `shutdown()` method to initiate the shutdown process. This method allows previously submitted tasks to execute before terminating but prevents the submission of new tasks.
2. Call the `shutdownNow()` method if you want to force the `ExecutorService` to terminate immediately. This method attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution but were never started.
3. Await termination by calling the `awaitTermination()` method. This method blocks until all tasks have completed execution after a shutdown request, the timeout occurs, or the current thread is interrupted, whichever happens first.
Here's an example:

```java
ExecutorService executor = Executors.newFixedThreadPool(10);

// Execute tasks using the executor

// Shut down the executor
executor.shutdown();
try {
    // Wait for all tasks to complete, or time out after a certain period
    if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
        // If the timeout occurs, force shutdown
        executor.shutdownNow();
        // Optionally, wait for the tasks to be forcefully terminated
        if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
            // Log a message indicating that some tasks failed to terminate
        }
    }
} catch (InterruptedException ex) {
    // Log the interruption and force shutdown
    executor.shutdownNow();
    // Preserve the interrupt status
    Thread.currentThread().interrupt();
}
```

In summary, `ExecutorService` is a versatile framework that helps developers write efficient, scalable, and maintainable concurrent code.
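To tie the earlier considerations together, here is a sketch of constructing a `ThreadPoolExecutor` directly with a bounded queue and a rejection policy, as recommended for maximum control (pool sizes and capacities are arbitrary, chosen just for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 2,                                       // core and max pool size
                0L, TimeUnit.MILLISECONDS,                  // keep-alive for excess threads
                new ArrayBlockingQueue<>(10),               // bounded task queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // on overflow, run in the caller

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 20; i++) {
            // With CallerRunsPolicy, tasks that overflow the queue execute
            // on the submitting thread instead of being rejected
            executor.execute(completed::incrementAndGet);
        }

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed = " + completed.get());
    }
}
```

`CallerRunsPolicy` provides natural backpressure: when the queue is full, the submitting thread slows down because it has to run the task itself, so all 20 tasks still complete.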
The Java ORM world is very stable: few libraries exist, and none of them has introduced any breaking change over the last decade. Meanwhile, application architecture has evolved with trends such as Hexagonal Architecture, CQRS, Domain-Driven Design, and Domain Purity. Stalactite tries to be more suitable for these new paradigms by allowing you to persist any kind of class without the need to annotate it or use external XML files: its mapping is made of method references. As a benefit, you get a better view of the entity graph, since the mapping is made through a fluent API that chains your entity relations instead of spreading annotations all over your entities. This is very helpful for seeing the complexity of your entity graph, which impacts both its load time and its memory footprint. Moreover, since Stalactite only fetches data eagerly, we can say that what you see is what you get. Here is a very small example:

```java
MappingEase.entityBuilder(Country.class, Long.class)
    .mapKey(Country::getId, IdentifierPolicy.afterInsert())
    .mapOneToOne(Country::getCapital, MappingEase.entityBuilder(City.class, Long.class)
        .mapKey(City::getId, IdentifierPolicy.afterInsert())
        .map(City::getName))
```

## First Steps

The release 2.0.0 has been out for some weeks and is available as a Maven dependency; below is an example with HSQLDB. For now, Stalactite is compatible with the following databases (mainly in their latest versions): HSQLDB, H2, PostgreSQL, MySQL, and MariaDB.

```xml
<dependency>
    <groupId>org.codefilarete.stalactite</groupId>
    <artifactId>orm-hsqldb-adapter</artifactId>
    <version>2.0.0</version>
</dependency>
```

If you're interested in a less database-vendor-dedicated module, you can use the orm-all-adapter module. Just be aware that it will bring extra modules and extra JDBC drivers, making your artifact heavier.
After adding Stalactite as a dependency, the next step is to create a JDBC `DataSource` and pass it to an `org.codefilarete.stalactite.engine.PersistenceContext`:

```java
org.hsqldb.jdbc.JDBCDataSource dataSource = new org.hsqldb.jdbc.JDBCDataSource();
dataSource.setUrl("jdbc:hsqldb:mem:test");
dataSource.setUser("sa");
dataSource.setPassword("");

PersistenceContext persistenceContext = new PersistenceContext(dataSource, new HSQLDBDialect());
```

Then comes the interesting part: the mapping. Supposing you have a `Country` entity, you can quickly set up its mapping through the fluent API, starting with the `org.codefilarete.stalactite.mapping.MappingEase` class, as such:

```java
EntityPersister<Country, Long> countryPersister = MappingEase.entityBuilder(Country.class, Long.class)
    .mapKey(Country::getId, IdentifierPolicy.afterInsert())
    .map(Country::getName)
    .build(persistenceContext);
```

The `afterInsert()` identifier policy means that the `country.id` column is an auto-increment one. Two other policies exist: `beforeInsert()` for identifiers given by a database sequence (for example), and `alreadyAssigned()` for entities that have a natural identifier given by business rules. Any non-declared property is considered transient and is not managed by Stalactite.

The schema can be generated with the `org.codefilarete.stalactite.sql.ddl.DDLDeployer` class as such (it will generate it into the `PersistenceContext` dataSource):

```java
DDLDeployer ddlDeployer = new DDLDeployer(persistenceContext);
ddlDeployer.deployDDL();
```

Finally, you can persist your entities thanks to the `EntityPersister` obtained previously; please find the example below. You might notice that you won't find JPA methods in a Stalactite persister. The reason is that Stalactite is far different from JPA and doesn't aim at being compatible with it: no annotations, no attach/detach mechanism, no first-level cache, no lazy loading, and more.
Hence, the methods go quite straight to their goal:

```java
Country myCountry = new Country();
myCountry.setName("myCountry");
countryPersister.insert(myCountry);

myCountry.setName("myCountry with a different name");
countryPersister.update(myCountry);

Country loadedCountry = countryPersister.select(myCountry.getId());
countryPersister.delete(loadedCountry);
```

## Spring Integration

The above was a raw usage of Stalactite; meanwhile, you may be interested in its integration with Spring to benefit from the magic of its `@Repository`. Stalactite provides it, just be aware that it's still a work-in-progress feature. The approach to activate it is the same as for JPA: enable Stalactite repositories with the `@EnableStalactiteRepositories` annotation on your Spring application. Then declare the `PersistenceContext` and `EntityPersister` as `@Bean`s:

```java
@Bean
public PersistenceContext persistenceContext(DataSource dataSource) {
    return new PersistenceContext(dataSource);
}

@Bean
public EntityPersister<Country, Long> countryPersister(PersistenceContext persistenceContext) {
    return MappingEase.entityBuilder(Country.class, Long.class)
        .mapKey(Country::getId, IdentifierPolicy.afterInsert())
        .map(Country::getName)
        .build(persistenceContext);
}
```

Then you can declare your repository as such, to be injected into your services:

```java
@Repository
public interface CountryStalactiteRepository extends StalactiteRepository<Country, Long> {
}
```

As mentioned earlier, since the paradigm of Stalactite is not the same as JPA's (no annotations, no attach/detach mechanism, etc.), you won't find the same methods as in JPA repositories:

- `save`: Saves the given entity, either inserting it or updating it according to its persistence state
- `saveAll`: Same as the previous one, with a bulk API
- `findById`: Tries to find an entity by its id in the database
- `findAllById`: Same as the previous one, with a bulk API
- `delete`: Deletes the given entity from the database
- `deleteAll`: Same as the previous one,
with a bulk API

## Conclusion

In this article, we introduced the Stalactite ORM. More information about the configuration and the mapping, as well as the full documentation, is available on the website. The project is open source under the MIT license and shared through GitHub. Thanks for reading; any feedback is appreciated!
Integrating assets from diverse platforms and ecosystems presents a significant challenge in enterprise application development, where projects often span multiple technologies and languages. Seamlessly incorporating web-based assets such as JavaScript, CSS, and other resources is a common yet complex requirement in Java web applications. The diversity of development ecosystems, each with its own tools, package managers, and distribution methods, complicates including these assets in a unified development workflow. This fragmentation can lead to inefficiencies, increased development time, and potential for errors as developers navigate the intricacies of integrating disparate systems. Recognizing this challenge, the open-source project Npm2Mvn offers a solution to streamline the inclusion of NPM packages into Java workspaces, thereby bridging the gap between the JavaScript and Java ecosystems.

## Understanding NPM and Maven

Before diving into the intricacies of Npm2Mvn, it's essential to understand the platforms it connects: NPM and Maven.

NPM (Node Package Manager) is the default package manager for Node.js, primarily used for managing dependencies of various JavaScript projects. It hosts thousands of packages provided by developers worldwide, facilitating the sharing and distribution of code. NPM simplifies adding, updating, and managing libraries and tools in your projects, making it an indispensable tool for JavaScript developers.

Maven, on the other hand, is a powerful build automation tool used primarily for Java projects. It goes beyond simple build tasks by managing project dependencies, documentation, SCM (Source Code Management), and releases. Maven utilizes a Project Object Model (POM) file to manage a project's build configuration, dependencies, and other elements, ensuring developers can easily manage and build their Java applications.
## The Genesis of Npm2Mvn

Npm2Mvn emerges as a solution to a familiar challenge developers face: incorporating the vast array of JavaScript libraries and frameworks available on NPM into Java projects. While Java and JavaScript operate in markedly different environments, the demand for utilizing web assets (like CSS, JavaScript files, and fonts) within Java applications has grown exponentially. It is particularly relevant for projects that require rich client interfaces or the server-side rendering of front-end components. Many JavaScript projects are distributed exclusively through NPM, so if, like me, you have found yourself copying and pasting assets from an NPM archive into your Java web application workspace, then Npm2Mvn is just the solution you need.

## Key Features of Npm2Mvn

Designed to automate the transformation of NPM packages into Maven-compatible jar files, Npm2Mvn makes NPM packages readily consumable by Java developers. This process involves several key steps:

- **Standard Maven repository presentation**: Utilizing another open-source project, uHTTPD, Npm2Mvn presents itself as a standard Maven repository.
- **Automatic package conversion**: When a request for a Maven artifact in the group `npm` is received, Npm2Mvn fetches the package metadata and tarball from NPM. It then enriches the package with the additional metadata required for Maven, such as POM files and MANIFEST.MF.
- **Inclusion of additional metadata**: Besides standard Maven metadata, Npm2Mvn adds specific metadata for Graal native images, enhancing compatibility and performance for projects leveraging GraalVM.
- **Seamless integration into the local Maven cache**: The final jar file, enriched with the necessary metadata, is placed in the local Maven cache just like any other artifact, ensuring that using NPM packages in Java projects is as straightforward as adding a Maven dependency.
## Benefits for Java Developers

Npm2Mvn offers several compelling benefits for Java developers:

- **Access to a vast repository of JavaScript libraries**: By bridging NPM and Maven, Java developers can easily incorporate thousands of JavaScript libraries and frameworks into their projects. This access significantly expands the resources for enhancing Java applications, especially for UI/UX design, without leaving the familiar Maven ecosystem.
- **Simplified dependency management**: Managing dependencies across different ecosystems can be cumbersome. Npm2Mvn streamlines this process, allowing developers to handle NPM packages with the Maven commands they are accustomed to.
- **Enhanced productivity**: By automating the conversion of NPM packages to Maven artifacts, Npm2Mvn saves developers considerable time and effort. This efficiency boost enables developers to focus more on building their applications than wrestling with package management intricacies.
- **Real-world applications**: Projects like FontAwesome, Xterm, and Bootstrap, staples of front-end development, can seamlessly integrate into Java applications.

## How To Use

Using Npm2Mvn is straightforward. Jadaptive, the project's developers, host a repository here. This repository is open and free to use. You can also download a copy of the server to host in a private build environment. To use this service, add the repository entry to your POM file:

```xml
<repositories>
    <repository>
        <id>npm2mvn</id>
        <url>https://npm2mvn.jadaptive.com</url>
    </repository>
</repositories>
```

Now, declare your NPM packages. For example, here is the jQuery NPM package:

```xml
<dependency>
    <groupId>npm</groupId>
    <artifactId>jquery</artifactId>
    <version>3.7.1</version>
</dependency>
```

That's all we need to include and version-manage NPM packages on the classpath.
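Once the dependency resolves, the package's files are ordinary classpath resources, so you can sanity-check their presence with a few lines of plain Java. The resource path below follows the layout described in the next section and is only resolvable when the npm2mvn-built jQuery jar is actually on the classpath, which it is not in this stand-alone sketch:

```java
import java.io.InputStream;

public class NpmResourceCheck {
    public static void main(String[] args) {
        // Hypothetical path; resolves only when the npm2mvn-built jar is on the classpath
        String path = "/npm2mvn/npm/jquery/3.7.1/dist/jquery.min.js";
        InputStream in = NpmResourceCheck.class.getResourceAsStream(path);
        System.out.println(in != null
                ? "jquery.min.js found on the classpath"
                : "jquery.min.js not on the classpath");
    }
}
```

A check like this is a quick way to confirm the jar was pulled in correctly before wiring resource serving into a web framework.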
## Consuming the NPM Resources in Your Java Application

The resources of the NPM package are placed in the jar under a fixed prefix, allowing multiple versions of multiple NPM packages to be available to the JVM via the classpath or module path. For example, if the NPM package bootstrap@v5.3.1 contains a resource with the path `css/bootstrap.css`, then the Npm2Mvn package will make that resource available at the resource path `/npm2mvn/npm/bootstrap/5.3.1/css/bootstrap.css`.

Now that you know the path of the resources in your classpath, you can prepare to consume them in your Java web application by implementing a Servlet or another mechanism to serve the resources from the classpath. How you do this depends on your web application platform and any framework you use. In Spring Boot, we would add a resource handler as demonstrated below:

```java
@Configuration
@EnableWebMvc
public class MvcConfig implements WebMvcConfigurer {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry
            .addResourceHandler("/npm2mvn/**")
            .addResourceLocations("classpath:/npm2mvn/");
    }
}
```

With this configuration in a Spring Boot application, we can now reference NPM assets directly in the HTML files we use in the application:

```html
<script type="text/javascript" src="/npm2mvn/npm/jquery/3.7.1/dist/jquery.min.js"></script>
```

## But What About NPM Scopes?

NPM version 2 supports scopes, which, according to the NPM website:

> ... allows you to create a package with the same name as a package created by another user or organization without conflict.

In the examples above, we are not using scopes. If the package you require uses a scope, you must modify your pom.xml dependency and the resource path. Taking the FontAwesome project as an example, to include the @fortawesome/fontawesome-free module in our Maven build, we modify the groupId to include the scope as demonstrated below.
```xml
<dependency>
    <groupId>npm.fortawesome</groupId>
    <artifactId>fontawesome-free</artifactId>
    <version>6.5.1</version>
</dependency>
```

Similarly, in the resource path, we change the second path segment from `npm` to the same groupId we used above:

```html
<link rel="stylesheet" href="/npm2mvn/npm.fortawesome/fontawesome-free/6.5.1/css/all.css"/>
```

You can download a full working Spring Boot example that integrates the Xterm NPM module and add-ons from GitHub.

## Dependency Generator

The website at the hosted version of Npm2Mvn provides a useful utility that developers can use to get the correct syntax for the dependencies needed to build the artifacts. Here we have entered the scope, package, and version to get the correct dependency entry for the Maven build. If the project does not have a scope, simply leave the first field blank.

## Conclusion

Npm2Mvn bridges the JavaScript and Java worlds, enhancing developers' capabilities and project possibilities. By simplifying the integration of NPM packages into Java workspaces, Npm2Mvn promotes a more interconnected and efficient development environment. It empowers developers to leverage the best of both ecosystems in their applications.
In modern web applications, integrating with external services is a common requirement. However, when interacting with these services, it's crucial to handle scenarios where responses might be delayed or fail to arrive. Spring Boot, with its extensive ecosystem, offers robust solutions to address such challenges. In this article, we'll explore how to implement timeouts using three popular approaches: RestClient, RestTemplate, and WebClient, all essential components in Spring Boot.

## 1. Timeout With RestTemplate

First, let's demonstrate a request without a timeout using RestTemplate, a synchronous HTTP client.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class RestTemplateExample {

    public static void main(String[] args) {
        var restTemplate = new RestTemplate();
        var url = "https://api.example.com/data";

        ResponseEntity<String> response = restTemplate.getForEntity(url, String.class);
        System.out.println(response.getBody());
    }
}
```

In this snippet, we're performing a GET request to `https://api.example.com/data`. However, we haven't set any timeout, which means the request might hang indefinitely in case of network issues or server unavailability. To set a timeout, we need to configure RestTemplate with an appropriate `ClientHttpRequestFactory`, such as `HttpComponentsClientHttpRequestFactory`.
```java
import org.springframework.http.ResponseEntity;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class RestTemplateTimeoutExample {

    public static void main(String[] args) {
        var url = "https://api.example.com/data";
        var timeout = 5000; // Timeout in milliseconds

        var clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory();
        clientHttpRequestFactory.setConnectTimeout(timeout);
        clientHttpRequestFactory.setConnectionRequestTimeout(timeout);

        var restTemplate = new RestTemplate(clientHttpRequestFactory);
        ResponseEntity<String> response = restTemplate.getForEntity(url, String.class);
        System.out.println(response.getBody());
    }
}
```

## 2. Timeout With WebClient

WebClient is a non-blocking, reactive HTTP client introduced in Spring WebFlux. Let's see how we can use it with a timeout:

```java
import org.springframework.web.reactive.function.client.WebClient;

import java.time.Duration;

public class WebClientTimeoutExample {

    public static void main(String[] args) {
        var client = WebClient.builder()
                .baseUrl("https://api.example.com")
                .build();

        client.get()
                .uri("/data")
                .retrieve()
                .bodyToMono(String.class)
                .timeout(Duration.ofMillis(5000))
                .subscribe(System.out::println);
    }
}
```

Here, we're using WebClient to make a GET request to the `/data` endpoint. The `timeout` operator specifies the maximum duration the request will wait for a response.

## 3. Timeout With RestClient

RestClient is a synchronous HTTP client that has offered a modern, fluent API since Spring Boot 3.2. New Spring Boot applications should replace RestTemplate code with the RestClient API.
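As an aside before the RestClient example, the same two timeout concepts (connect timeout on the client, overall response timeout per request) also exist in the JDK's built-in `java.net.http.HttpClient`, which can serve as a dependency-free reference point when choosing values. The URL reuses the article's example endpoint, and no request is actually sent here:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class JdkHttpClientTimeouts {

    public static void main(String[] args) {
        // The connect timeout is configured once on the client ...
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        // ... while the overall response timeout is set per request
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/data"))
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();

        System.out.println("connectTimeout = " + client.connectTimeout().orElseThrow());
        System.out.println("requestTimeout = " + request.timeout().orElseThrow());
    }
}
```

If the response timeout elapses, `send` throws an `HttpTimeoutException`, mirroring the behavior you configure in the Spring clients.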
Now, let's configure a RestClient with a timeout using `HttpComponentsClientHttpRequestFactory`: Java

import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestClient;

public class RestClientTimeoutExample {
    public static void main(String[] args) {
        var factory = new HttpComponentsClientHttpRequestFactory();
        factory.setConnectTimeout(5000);
        factory.setReadTimeout(5000);

        var restClient = RestClient
                .builder()
                .requestFactory(factory)
                .build();

        var response = restClient
                .get()
                .uri("https://api.example.com/data")
                .retrieve()
                .toEntity(String.class);
        System.out.println(response.getBody());
    }
}

In this code, we configure the timeouts on an HttpComponentsClientHttpRequestFactory and pass it to RestClient.builder(). By setting timeouts appropriately, we ensure that our application remains responsive even in scenarios where external services are slow or unresponsive. This proactive approach enhances the overall reliability and resilience of our Spring Boot applications. Conclusion In summary, handling timeouts is important for web apps to stay responsive and robust during interactions with external services. We explored three popular Spring Boot approaches for implementing timeouts effectively: RestTemplate, WebClient, and RestClient. By setting appropriate timeouts, developers can ensure applications gracefully handle delayed or failed responses and enhance overall reliability and user experience under varying network conditions and service availability.
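Once a timeout elapses, these Spring clients fail with an exception (RestTemplate and RestClient typically throw a ResourceAccessException; WebClient's Mono errors with a TimeoutException). For comparison, and as a dependency-free way to experiment with the same idea, the two timeout knobs (connect and per-request) also exist on the JDK's built-in HttpClient. The sketch below is illustrative only; the URL and the 5-second values are made-up examples, not values from this article:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class JdkHttpTimeoutExample {

    // Connect timeout: how long to wait for the TCP connection to be established.
    static HttpClient buildClient(Duration connectTimeout) {
        return HttpClient.newBuilder()
                .connectTimeout(connectTimeout)
                .build();
    }

    // Per-request timeout: how long to wait for the response before
    // the send(...) call fails with an HttpTimeoutException.
    static HttpRequest buildRequest(String url, Duration requestTimeout) {
        return HttpRequest.newBuilder(URI.create(url))
                .timeout(requestTimeout)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        var client = buildClient(Duration.ofSeconds(5));
        var request = buildRequest("https://api.example.com/data", Duration.ofSeconds(5));
        System.out.println("connect timeout: " + client.connectTimeout().orElseThrow());
        System.out.println("request timeout: " + request.timeout().orElseThrow());
    }
}
```

Separating the connect timeout (client-wide) from the request timeout (per call) mirrors the connect/read split configured on HttpComponentsClientHttpRequestFactory above.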
Do you want to learn how to create Tweets from a Java application using the X (Twitter) API v2? This blog provides a step-by-step guide showing you how to do so. Enjoy! Introduction X (Twitter) provides an API that allows you to interact with your account from an application. Currently, two versions exist. In this blog, you will use the most recent X API v2. Although a lot of information can be found on how to set up your environment and how to interact with the API, it took me quite some time to get it working from within a Java application. In this blog, you will learn how to set up your account and how to create tweets from a Java application. The sources for this blog can be found on GitHub. Prerequisites Prerequisites for this blog are: Basic knowledge of Java (Java 21 is used); An X account; A website you own (not mandatory, but better for security reasons). Set up Developer Account The first thing to do is to set up a developer account. Navigate to the sign-up page. Beware that multiple types of accounts exist: free, basic, pro, and enterprise. Scroll all the way down to the bottom of the page and choose to create a free account by clicking the button Sign up Free Account. You will need to describe your use case using at least 250 characters. After signing up, you end up in the developer portal. Create Project and App With the Free tier, you can create one Project and one App. Create the Project and the App. Authentication As you are going to create Tweets for a user, you will need to set up authentication using the OAuth 2.0 Authorization Code Flow with PKCE. However, it is important that you have configured your App correctly. Navigate to your App in the developer portal and click the Edit button in the User authentication settings. Different sections are available here where you are required to add information and to choose options. App Permissions These permissions enable OAuth 1.0a Authentication.
It is confusing that you need to check one of these bullets, because OAuth 1.0a Authentication will not be used in your use case. However, because you will create a tweet, select Read and Write, just to be sure. Type of App The type of App enables OAuth 2.0 Authentication; this is the one you will use. You will invoke the API from an application, so choose Web App, Automated App, or Bot. App Info In the App info section, you need to provide a Callback URI and a Website URL. The Callback URI is important, as you will see in the next paragraphs. Fill in the URL of your website. You can use any URL, but the Callback URI will be used to provide you with an access token, so it is better to use the URL of a website you own. Click the Save button to save your changes. A Client ID and Client Secret are generated; save them somewhere safe. Create Tweet Everything is set up in the developer portal. Now it is time to create the Java application in order to be able to create a Tweet. Twitter API Client Library for Java In order to create the tweet, you will make use of the Twitter API Client Library for Java. Beware that, at the time of writing, the library is still in beta. The library also only supports the X API v2 endpoints, but those are exactly the endpoints you will be using, so that is not a problem. Add the dependency to the pom file: XML

<dependency>
    <groupId>com.twitter</groupId>
    <artifactId>twitter-api-java-sdk</artifactId>
    <version>2.0.3</version>
</dependency>

Authorization In order to be able to create tweets on behalf of your account, you need to authorize the App. The source code below is based on the example provided in the SDK. You need the Client ID and Client Secret you saved before. If you lost them, you can generate a new secret in the developer portal. Navigate to your App, and click the Keys and Tokens tab. Scroll down to retrieve the Client ID and generate a new Client Secret.
The main method executes the following steps: Set the correct credentials as environment variables: TWITTER_OAUTH2_CLIENT_ID: the OAuth 2.0 Client ID TWITTER_OAUTH2_CLIENT_SECRET: the OAuth 2.0 Client Secret TWITTER_OAUTH2_ACCESS_TOKEN: leave it blank TWITTER_OAUTH2_REFRESH_TOKEN: leave it blank Authorize the App and retrieve an access and refresh token. Set the newly received access and refresh token in the credentials object. Call the X API in order to create the tweet. Java

public static void main(String[] args) {
    TwitterCredentialsOAuth2 credentials = new TwitterCredentialsOAuth2(
            System.getenv("TWITTER_OAUTH2_CLIENT_ID"),
            System.getenv("TWITTER_OAUTH2_CLIENT_SECRET"),
            System.getenv("TWITTER_OAUTH2_ACCESS_TOKEN"),
            System.getenv("TWITTER_OAUTH2_REFRESH_TOKEN"));

    OAuth2AccessToken accessToken = getAccessToken(credentials);
    if (accessToken == null) {
        return;
    }

    // Setting the access & refresh tokens into TwitterCredentialsOAuth2
    credentials.setTwitterOauth2AccessToken(accessToken.getAccessToken());
    credentials.setTwitterOauth2RefreshToken(accessToken.getRefreshToken());

    callApi(credentials);
}

The getAccessToken method executes the following steps: Creates a Twitter service object: Set the Callback URI to the one you specified in the developer portal. Set the scopes (what is allowed) you want to authorize. By using offline.access, you will receive a refresh token, which allows you to retrieve a new access token via the refresh token flow without prompting the user. This means that you can continuously create tweets without the need for user interaction. An authorization URL is provided to you where you will authorize the App for the requested scopes. You are redirected to the Callback URI, and the authorization code will be visible in the URL. The getAccessToken method waits until you copy the authorization code and hit enter. The access token and refresh token are printed to the console and returned from the method.
Java

private static OAuth2AccessToken getAccessToken(TwitterCredentialsOAuth2 credentials) {
    TwitterOAuth20Service service = new TwitterOAuth20Service(
            credentials.getTwitterOauth2ClientId(),
            credentials.getTwitterOauth2ClientSecret(),
            "<Fill in your Callback URI as configured in your X App in the developer portal>",
            "offline.access tweet.read users.read tweet.write");

    OAuth2AccessToken accessToken = null;
    try {
        final Scanner in = new Scanner(System.in, "UTF-8");
        System.out.println("Fetching the Authorization URL...");

        final String secretState = "state";
        PKCE pkce = new PKCE();
        pkce.setCodeChallenge("challenge");
        pkce.setCodeChallengeMethod(PKCECodeChallengeMethod.PLAIN);
        pkce.setCodeVerifier("challenge");
        String authorizationUrl = service.getAuthorizationUrl(pkce, secretState);

        System.out.println("Go to the Authorization URL and authorize your App:\n"
                + authorizationUrl + "\nAfter that paste the authorization code here\n>>");
        final String code = in.nextLine();
        System.out.println("\nTrading the Authorization Code for an Access Token...");
        accessToken = service.getAccessToken(pkce, code);

        System.out.println("Access token: " + accessToken.getAccessToken());
        System.out.println("Refresh token: " + accessToken.getRefreshToken());
    } catch (Exception e) {
        System.err.println("Error while getting the access token:\n " + e);
        e.printStackTrace();
    }

    return accessToken;
}

Now that you have authorized the App, you are able to see that you have done so in your X settings. Navigate to Settings and Privacy in your X account. Navigate to Security and account access. Navigate to Apps and sessions. Navigate to Connected apps. Here you will find the App you authorized and which authorizations it has. The callApi method executes the following steps: Create a TwitterApi instance with the provided credentials. Create a TweetCreateRequest. Create the Tweet.
Java

private static void callApi(TwitterCredentialsOAuth2 credentials) {
    TwitterApi apiInstance = new TwitterApi(credentials);

    TweetCreateRequest tweetCreateRequest = new TweetCreateRequest();
    tweetCreateRequest.setText("Hello World!");

    try {
        TweetCreateResponse result = apiInstance.tweets().createTweet(tweetCreateRequest)
                .execute();
        System.out.println(result);
    } catch (ApiException e) {
        System.err.println("Exception when calling TweetsApi#createTweet");
        System.err.println("Status code: " + e.getCode());
        System.err.println("Reason: " + e.getResponseBody());
        System.err.println("Response headers: " + e.getResponseHeaders());
        e.printStackTrace();
    }
}

Add an sdk.properties file to the root of the repository; otherwise, an Exception will be thrown (the Exception is not blocking, but it spoils the output). If everything went well, you have now created your first Tweet! Obtain New Access Token You only need to execute the source code above once. The retrieved access token, however, will only stay valid for two hours. After that time (or earlier), you need to retrieve a new access token using the refresh token. The source code below is based on the example provided in the SDK. The main method executes the following steps: Set the credentials, including the access and refresh token you obtained in the previous sections. Add a callback to the TwitterApi instance in order to retrieve a new access and refresh token. Request to refresh the token; the callback class MaintainToken will set the new tokens, and again a Tweet can be created.
Java

public static void main(String[] args) {
    TwitterApi apiInstance = new TwitterApi(new TwitterCredentialsOAuth2(
            System.getenv("TWITTER_OAUTH2_CLIENT_ID"),
            System.getenv("TWITTER_OAUTH2_CLIENT_SECRET"),
            System.getenv("TWITTER_OAUTH2_ACCESS_TOKEN"),
            System.getenv("TWITTER_OAUTH2_REFRESH_TOKEN")));

    apiInstance.addCallback(new MaintainToken());

    try {
        apiInstance.refreshToken();
    } catch (Exception e) {
        System.err.println("Error while trying to refresh existing token: " + e);
        e.printStackTrace();
        return;
    }

    callApi(apiInstance);
}

The MaintainToken callback only handles the new tokens; here, it prints them so you can store them. Java

class MaintainToken implements ApiClientCallback {
    @Override
    public void onAfterRefreshToken(OAuth2AccessToken accessToken) {
        System.out.println("access: " + accessToken.getAccessToken());
        System.out.println("refresh: " + accessToken.getRefreshToken());
    }
}

Conclusion In this blog, you learned how to configure an App in the developer portal. You learned how to authorize your App from a Java application and how to create a Tweet.
Java 21 just got simpler! Want to write cleaner, more readable code? Dive into pattern matching, a powerful new feature that lets you easily deconstruct and analyze data structures. This article will explore pattern matching with many examples, showing how it streamlines everyday data handling and keeps your code concise. Examples of Pattern Matching Pattern matching shines in two key areas. First, pattern matching for switch statements replaces the days of long chains of if statements, letting you elegantly match the selector expression against various data types, including primitives, objects, and even null. Second, what if you need to check an object's type and extract specific data? Pattern matching for instanceof expressions simplifies this process: it allows you to confirm that an object matches a pattern and, if so, conveniently extract the desired data. Let’s take a look at more examples of pattern matching in Java code. Pattern Matching With Switch Statements Java

public static String getAnimalSound(Animal animal) {
    return switch (animal) {
        case Dog dog -> "woof";
        case Cat cat -> "meow";
        case Bird bird -> "chirp";
        case null -> "No animal found!";
        default -> "Unknown animal sound";
    };
}

Matches selector expressions with types other than integers and strings
Uses type patterns (case Dog dog) to check and cast types simultaneously
Handles null directly within the switch block (case null)
Employs arrow syntax (->) for concise body expressions

Pattern Matching With instanceof Java

if (object instanceof String str) {
    System.out.println("The string is: " + str);
} else if (object instanceof Integer num) {
    System.out.println("The number is: " + num);
} else {
    System.out.println("Unknown object type");
}

Combines type checking and casting in a single expression
Introduces a pattern variable (str, num) to capture the object's value
Avoids explicit casting (String str = (String) object)
Pattern Matching With Primitive Types Java

int number = 10;
switch (number) {
    case 10:
        System.out.println("The number is 10.");
        break;
    case 20:
        System.out.println("The number is 20.");
        break;
    case 30:
        System.out.println("The number is 30.");
        break;
    default:
        System.out.println("The number is something else.");
}

Pattern matching with primitive types doesn't introduce entirely new functionality but rather simplifies existing practices when working with primitives in switch statements. Pattern Matching With Reference Types Java

String name = "Daniel Oh";
switch (name) {
    case "Daniel Oh":
        System.out.println("Hey, Daniel!");
        break;
    case "Jennie Oh":
        System.out.println("Hola, Jennie!");
        break;
    default:
        System.out.println("What’s up!");
}

Pattern matching with reference types makes code easier to understand and maintain due to its clear and concise syntax. By combining type checking and extraction in one step, pattern matching reduces the risk of errors associated with explicit casting. More expressive switch statements: switch statements become more versatile and can handle a wider range of data types and scenarios. Pattern Matching With null Java

Object obj = null;
switch (obj) {
    case null:
        System.out.println("The object is null.");
        break;
    default:
        System.out.println("The object is not null.");
}

Before Java 21, switch statements would throw a NullPointerException if the selector expression was null. Pattern matching allows a dedicated case null clause to handle this scenario gracefully. By explicitly checking for null within the switch statement, you avoid potential runtime errors and ensure your code is more robust. Having a dedicated case null clause makes the code's intention clearer compared to needing an external null check before the switch. Java's implementation is designed not to break existing code. If a switch statement doesn't have a case null clause, it will still throw a NullPointerException as before, even if a default case exists.
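The type, reference, and null cases shown above can all be combined in one pattern-matching switch. A minimal sketch, assuming Java 21 (the describe method and its messages are made up for illustration):

```java
public class ShapeDescriber {

    // One switch handles type patterns, a dedicated null case, and a default.
    static String describe(Object obj) {
        return switch (obj) {
            case Integer i -> "int: " + i;                       // type pattern with binding
            case String s  -> "string of length " + s.length();  // pattern variable in use
            case null      -> "nothing here";                    // no NPE, handled in place
            default        -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(42));
        System.out.println(describe("hello"));
        System.out.println(describe(null));
    }
}
```

Without the case null clause, passing null to this switch would throw a NullPointerException, exactly as described above.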
Pattern Matching With Multiple Patterns Java

List<String> names = new ArrayList<>();
names.add("Daniel Oh");
names.add("Jennie Oh");

for (String name : names) {
    switch (name) {
        case "Daniel Oh", "Jennie Oh":
            System.out.println("Hola, " + name + "!");
            break;
        default:
            System.out.println("What’s up!");
    }
}

Unlike traditional switch statements, pattern matching considers the order of cases. The first case with a matching pattern is executed. Avoid unreachable code by ensuring subtypes don't appear before their supertypes in the pattern-matching cases. Conclusion Pattern matching is a powerful new feature in Java 21 that can make your code more concise and readable. It is especially useful for working with complex data structures, with key benefits:

Improved readability: Pattern matching makes code more readable by combining type checking, data extraction, and control flow into a single statement. This eliminates the need for verbose if-else chains and explicit casting.
Conciseness: Code becomes more concise by leveraging pattern matching's ability to handle multiple checks and extractions in a single expression. This reduces boilerplate code and improves maintainability.
Enhanced type safety: Pattern matching enforces type safety by explicitly checking and potentially casting the data type within the switch statement or instanceof expression. This reduces the risk of runtime errors caused by unexpected object types.
Null handling: Pattern matching allows for the explicit handling of null cases directly within the switch statement. This eliminates the need for separate null checks before the switch, improving code flow and reducing the chance of null pointer exceptions.
Flexibility: Pattern matching goes beyond basic types. It can handle complex data structures using record patterns (finalized in Java 21, building on records, which debuted as a preview in Java 14). This allows for more expressive matching logic for intricate data objects.
Modern look and feel: Pattern matching aligns with modern functional programming paradigms, making Java code more expressive and aligned with other languages that utilize this feature. Overall, pattern matching in Java 21 streamlines data handling, improves code clarity and maintainability, and enhances type safety for a more robust and developer-friendly coding experience.
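As a closing illustration of the record patterns mentioned above, here is a small sketch; the Point and Line records are hypothetical examples, not types from this article:

```java
public class RecordPatternExample {

    // Hypothetical records for illustration.
    record Point(int x, int y) {}
    record Line(Point start, Point end) {}

    // Record patterns deconstruct components, even nested ones, in one step.
    static String describe(Object obj) {
        return switch (obj) {
            case Point(int x, int y) ->
                "point at (" + x + ", " + y + ")";
            case Line(Point(var x1, var y1), Point(var x2, var y2)) ->
                "line from (" + x1 + ", " + y1 + ") to (" + x2 + ", " + y2 + ")";
            default -> "unknown shape";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Point(1, 2)));
        System.out.println(describe(new Line(new Point(0, 0), new Point(3, 4))));
    }
}
```

Note how the Line case reaches two levels deep in a single pattern, with no getters or casts in sight.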
In the world of high-performance computing, utilizing SIMD (Single Instruction, Multiple Data) instructions can significantly boost the performance of certain types of computations. SIMD enables processors to perform the same operation on multiple data points simultaneously, making it ideal for tasks like numerical computations, image processing, and multimedia operations. As of Java 17, developers have access to the Vector API, a feature that allows them to harness the power of SIMD directly within their Java applications. In this article, we'll explore what the Vector API is, how it works, and provide examples demonstrating its usage. Understanding SIMD and Its Importance Before delving into the Vector API, it's crucial to understand the concept of SIMD and why it's important for performance optimization. Traditional CPUs execute instructions serially, meaning each instruction operates on a single data element at a time. However, many modern CPUs include SIMD instruction sets, such as SSE (Streaming SIMD Extensions) and AVX (Advanced Vector Extensions), which enable parallel processing of multiple data elements within a single instruction. This parallelism is particularly beneficial for tasks involving repetitive operations on large arrays or datasets. By leveraging SIMD instructions, developers can achieve significant performance gains by exploiting the inherent parallelism of the underlying hardware. Introducing the Vector API The Vector API, introduced in Java 16 as an incubator module (jdk.incubator.vector) and still incubating as of Java 17, provides a set of classes and methods for performing SIMD operations directly within Java code. The API abstracts the low-level details of SIMD instructions and allows developers to write portable and efficient vectorized code without resorting to platform-specific assembly language or external libraries. The core components of the Vector API include vector types, operations, and factories.
Vector types represent SIMD vectors of different sizes and data types, such as integers, floating-point numbers, and boolean values. Operations include arithmetic, logical, and comparison operations that can be performed on vector elements. Factories are used to create vector instances and perform conversions between vector and scalar types. Getting Started With Vector API To utilize the Vector API from Java 17, your environment must be equipped with JDK version 17, and because the API is still an incubator module, you must enable it at compile and run time with --add-modules jdk.incubator.vector. The API resides within the jdk.incubator.vector package, which provides classes and methods for vector operations. A simple example of adding two arrays using the Vector API demonstrates its ease of use and efficiency over traditional loop-based methods. Example 1: Adding Two Arrays Element-Wise To demonstrate the usage of the Vector API, let's consider a simple example of adding two arrays element-wise using SIMD instructions. We'll start by creating two arrays of floating-point numbers and then use the Vector API to add them together in parallel.
Java

import java.util.Arrays;
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorExample {
    public static void main(String[] args) {
        int length = 8; // Number of elements in the arrays
        float[] array1 = new float[length];
        float[] array2 = new float[length];
        float[] result = new float[length];

        // Initialize arrays with random values
        for (int i = 0; i < length; i++) {
            array1[i] = (float) Math.random();
            array2[i] = (float) Math.random();
        }

        // Perform addition using the Vector API
        VectorSpecies<Float> species = FloatVector.SPECIES_PREFERRED;
        int i = 0;
        int bound = species.loopBound(length);
        for (; i < bound; i += species.length()) {
            FloatVector a = FloatVector.fromArray(species, array1, i);
            FloatVector b = FloatVector.fromArray(species, array2, i);
            FloatVector sum = a.add(b);
            sum.intoArray(result, i);
        }

        // Scalar tail loop for any remaining elements
        for (; i < length; i++) {
            result[i] = array1[i] + array2[i];
        }

        // Print the result
        System.out.println("Result: " + Arrays.toString(result));
    }
}

In this example, we create two arrays - array1 and array2 - containing random floating-point numbers. We then use the FloatVector class to perform the SIMD addition of corresponding elements from the two arrays. SPECIES_PREFERRED selects the widest vector shape the hardware supports, and loopBound returns the largest multiple of the vector length that fits in the array, so the scalar tail loop cleans up any leftover elements. Example 2: Dot Product Calculation Another common operation that benefits from SIMD parallelism is the dot product calculation of two vectors. Let's demonstrate how to compute the dot product of two float arrays using the Vector API.
Java

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class DotProductExample {
    public static void main(String[] args) {
        int length = 8; // Number of elements in the arrays
        float[] array1 = new float[length];
        float[] array2 = new float[length];

        // Initialize arrays with random values
        for (int i = 0; i < length; i++) {
            array1[i] = (float) Math.random();
            array2[i] = (float) Math.random();
        }

        // Perform dot product using the Vector API
        VectorSpecies<Float> species = FloatVector.SPECIES_PREFERRED;
        FloatVector sum = FloatVector.zero(species);
        int i = 0;
        int bound = species.loopBound(length);
        for (; i < bound; i += species.length()) {
            FloatVector a = FloatVector.fromArray(species, array1, i);
            FloatVector b = FloatVector.fromArray(species, array2, i);
            sum = sum.add(a.mul(b));
        }

        // Reduce the vector lanes to a single value, then handle the tail
        float dotProduct = sum.reduceLanes(VectorOperators.ADD);
        for (; i < length; i++) {
            dotProduct += array1[i] * array2[i];
        }

        System.out.println("Dot Product: " + dotProduct);
    }
}

In this example, we compute the dot product of two arrays, array1 and array2, using SIMD parallelism. We use the FloatVector class to perform SIMD multiplication of corresponding elements and then accumulate the results using vector reduction. Example 3: Additional Operations Beyond basic arithmetic, the Vector API supports a broad spectrum of operations, including logical, bitwise, and conversion operations. For instance, the following example demonstrates vector multiplication and conditional masking (doubling each value, with zeros where the original was <= 4), showcasing the API's versatility for complex data processing tasks.
Java

import java.util.Arrays;
import jdk.incubator.vector.IntVector;
import jdk.incubator.vector.VectorMask;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class AdvancedVectorExample {
    public static void example(int[] vals) {
        VectorSpecies<Integer> species = IntVector.SPECIES_256;

        // Initialize vector from integer array
        IntVector vector = IntVector.fromArray(species, vals, 0);

        // Perform multiplication
        IntVector doubled = vector.mul(2);

        // Apply conditional mask
        VectorMask<Integer> mask = vector.compare(VectorOperators.GT, 4);

        // Output the result (blend substitutes 0 where the supplied mask is set,
        // so invert the mask to keep the doubled values where the original was > 4)
        System.out.println(Arrays.toString(doubled.blend(0, mask.not()).toArray()));
    }
}

Here, we start by defining a VectorSpecies with the type IntVector.SPECIES_256, which indicates that we are working with 256-bit integer vectors. This species choice means that, depending on the hardware, the vector can hold multiple integers within those 256 bits, allowing parallel operations on them. We then initialize our IntVector from an array of integers, vals, using this species. This step converts our scalar integer array into a vectorized form that can be processed in parallel. Afterward, we multiply every element in our vector by 2. The mul method performs this operation in parallel on all elements held within the IntVector, effectively doubling each value. This is a significant advantage over traditional loop-based approaches, where each multiplication would be processed sequentially. Next, we create a VectorMask by comparing each element in the original vector to the value 4 using the compare method with the VectorOperators.GT (greater than) operator. This operation produces a mask where each position in the vector that holds a value greater than 4 is set to true, and all other positions are set to false. We then use the blend method to apply our mask to the doubled vector. This method takes two arguments: the value to blend in (0 in this case) and a mask, and it substitutes that value wherever the supplied mask is set. By passing the inverted mask, for each position in the vector where our original mask is true, the value from doubled is retained.
Where the mask is false, the value is replaced with 0. This effectively zeros out any element in the doubled vector that originated from a value in vals that was 4 or less. Insights and Considerations When integrating the Vector API into applications, consider the following: Data alignment: For optimal performance, ensure data structures are aligned with vector sizes. Misalignment can lead to performance degradation due to additional processing steps. Loop vectorization: Manually vectorizing loops can lead to significant performance gains, especially in nested loops or complex algorithms. However, it requires careful consideration of loop boundaries and vector sizes. Hardware compatibility: While the Vector API is designed to be hardware-agnostic, performance gains can vary based on the underlying hardware's SIMD capabilities. Testing and benchmarking on target hardware are essential for understanding potential performance improvements. By incorporating these advanced examples and considerations, developers can better leverage the Vector API in Java to write more efficient, performant, and scalable applications. Whether for scientific computing, machine learning, or any compute-intensive task, the Vector API offers a powerful toolset for harnessing the full capabilities of modern hardware. Conclusion The Vector API in Java provides developers with a powerful tool for harnessing the performance benefits of SIMD instructions in their Java applications. By abstracting the complexities of SIMD programming, the Vector API enables developers to write efficient and portable code that takes advantage of the parallelism offered by modern CPU architectures. While the examples provided in this article demonstrate the basic usage of the Vector API, developers can explore more advanced features and optimizations to further improve the performance of their applications. 
Whether it's numerical computations, image processing, or multimedia operations, the Vector API empowers Java developers to unlock the full potential of SIMD parallelism without sacrificing portability or ease of development. Experimenting with different data types, vector lengths, and operations can help developers maximize the performance benefits of SIMD in their Java applications.
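The loop-vectorization and tail-handling considerations above can be sketched without the incubator module at all: the shape of a vectorized loop is a chunked main loop up to a "loop bound," plus a scalar tail. This plain-Java sketch mirrors that structure (the inner lane loop stands in for what would be a single SIMD instruction; the laneCount parameter is illustrative):

```java
public class LoopBoundSketch {

    // Mimics VectorSpecies.loopBound: the largest multiple of laneCount <= length.
    static int loopBound(int length, int laneCount) {
        return length - (length % laneCount);
    }

    // Adds two arrays in chunks of laneCount, then finishes with a scalar tail,
    // mirroring the structure of the vectorized loops shown earlier.
    static float[] add(float[] a, float[] b, int laneCount) {
        float[] out = new float[a.length];
        int bound = loopBound(a.length, laneCount);
        int i = 0;
        for (; i < bound; i += laneCount) {
            for (int lane = 0; lane < laneCount; lane++) {
                out[i + lane] = a[i + lane] + b[i + lane]; // would be one SIMD add
            }
        }
        for (; i < a.length; i++) {                        // scalar tail for leftovers
            out[i] = a[i] + b[i];
        }
        return out;
    }

    public static void main(String[] args) {
        float[] a = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        float[] b = {10, 9, 8, 7, 6, 5, 4, 3, 2, 1};
        System.out.println(java.util.Arrays.toString(add(a, b, 4)));
    }
}
```

Keeping this two-loop shape in mind makes the real Vector API examples easier to read: only the inner lane loop changes, collapsing into a single fromArray/add/intoArray sequence.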
After JUnit 5 was released, a lot of developers just added this awesome new library to their projects, because unlike previous versions, it is not necessary to migrate from JUnit 4 to 5: you just need to include the new library in your project, and with the JUnit 5 engine you can write your new tests using JUnit 5, while the older ones, written with JUnit 4 or 3, keep running without problems. But what can happen in a big project, a project that was built 10 years ago, with two versions of JUnit running in parallel? New developers start to work on the project, some of them with JUnit experience, others without. New tests are created using JUnit 5, new tests are created using JUnit 4, and at some point a developer who doesn't know the difference, when adding a new scenario to a test class that already uses JUnit 5, includes a JUnit 4 annotation. The test class becomes a mix, some @Test from JUnit 4 and some @Test from JUnit 5, and each day it is more difficult to remove the JUnit 4 library. So, how do you solve this problem? First of all, you need to show your team what is from JUnit 5 and what is from JUnit 4, so that new tests are created using JUnit 5 instead of JUnit 4. After that, it is necessary to follow the Boy Scout rule: whenever developers touch a JUnit 4 test, they must migrate it to JUnit 5. Let’s see the main changes released in JUnit 5. It all starts with the name: in JUnit 5, you don’t see packages called org.junit5, but rather org.junit.jupiter. To sum up, everything you see with “Jupiter” means that it is from JUnit 5. They chose this name because Jupiter starts with “JU” and is the fifth planet from the sun. Another change concerns @Test: this annotation was moved to a new package, org.junit.jupiter.api, and attributes like “expected” or “timeout” are not used anymore; use extensions instead. For example, for timeout, there is now a dedicated annotation: @Timeout(value = 100, unit = TimeUnit.MILLISECONDS).
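JUnit 5's replacement for the "expected" attribute is Assertions.assertThrows. To show what that call does without pulling JUnit into this snippet, here is a dependency-free stand-in with the same shape; the helper below is illustrative, not the real JUnit API:

```java
public class AssertThrowsSketch {

    // A minimal stand-in for JUnit 5's Assertions.assertThrows, to show what
    // replaced JUnit 4's @Test(expected = ...): run the code, demand the exception.
    static <T extends Throwable> T assertThrows(Class<T> expected, Runnable code) {
        try {
            code.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);   // returned so the message can be asserted too
            }
            throw new AssertionError("Expected " + expected.getName() + " but got " + t);
        }
        throw new AssertionError("Expected " + expected.getName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        // JUnit 4: @Test(expected = IllegalArgumentException.class)
        // JUnit 5: assertThrows(IllegalArgumentException.class, () -> ...)
        IllegalArgumentException e = assertThrows(IllegalArgumentException.class,
                () -> { throw new IllegalArgumentException("Name is empty"); });
        System.out.println("caught: " + e.getMessage());
    }
}
```

Unlike the old attribute, assertThrows pinpoints exactly which statement must throw and hands back the exception for further assertions, which is why the migration is worth it.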
Another change is that neither test methods nor classes need to be public. Now, instead of using @Before and @After in your test configuration, you have to use @BeforeEach and @AfterEach, and you also have @BeforeAll and @AfterAll. To ignore tests, you now have to use @Disabled instead of @Ignore. Great news released in JUnit 5 is the annotation @ParameterizedTest; with it, it is possible to run one test multiple times with different arguments. For example, if you want to test a method that creates some object and you want to validate that the fields are filled correctly, you just do the following: Java

@ParameterizedTest
@MethodSource("getInvalidSources")
void shouldCheckInvalidFields(String name, String job, String expectedMessage) {
    Throwable exception = catchThrowable(() -> new Client(name, job));
    assertThat(exception).isInstanceOf(IllegalArgumentException.class)
            .hasMessageContaining(expectedMessage);
}

static Stream<Arguments> getInvalidSources() {
    return Stream.of(
            Arguments.arguments("Jean Donato", "", "Job is empty"),
            Arguments.arguments("", "Dev", "Name is empty"));
}

There are so many nice features in JUnit 5 that I recommend you check out the JUnit 5 User Guide to see what is useful for your project. Now that all developers know what was changed in JUnit 5, you can start the process of removing JUnit 4 from your project. So, if you are still using JUnit 4 in 2024, and your project is a big project, you will probably have some dependencies using JUnit 4. I recommend you analyze your libraries to check if some of them are using JUnit 4. In the image below, I’m using the Dependency Analyzer from IntelliJ. As you can see, jersey-test is using JUnit 4; that is, even if I remove JUnit 4 from my project, JUnit 4 will still be available because of Jersey. The easier way would be to bump Jersey to 2.35, because JUnit 5 was introduced in jersey-test 2.35, but I can’t update the jersey-test framework because other libraries in my project would break.
So, in this case, what can I do? I can exclude JUnit 4 from Jersey with a Maven dependency exclusion (like the image below). That way, JUnit 4 will no longer be used; our JUnit 5 will be used instead. However, when you run tests that use Jersey, they will fail to load, because JerseyTest has lifecycle methods, setUp and tearDown, annotated with the JUnit 4 annotations @Before and @After. To solve this, you can create a "configuration class" that extends JerseyTest and implements setUp and tearDown with @BeforeEach and @AfterEach, calling super.setUp() and super.tearDown():

```java
public class JerseyConfigToJUnit5 extends JerseyTest {

    @BeforeEach
    public void setUp() throws Exception {
        super.setUp();
    }

    @AfterEach
    public void tearDown() throws Exception {
        super.tearDown();
    }
}
```

Once you have checked your libraries and none of them still depends on JUnit 4, you can finally migrate all your tests to JUnit 5. For this process, there is a good tool that saves you a lot of work: OpenRewrite, an automated refactoring ecosystem for source code that migrates all the old packages, the old annotations, and everything else to the new versions.

That's it, folks. Now you and your teammates can enjoy JUnit 5 and relax, knowing that new tests will be created with JUnit 5 and the project will not become a Frankenstein. Remember to keep your project up to date, because the longer you neglect your libraries, the harder they become to update. Always use specifications and frameworks that follow them, and keep your code well designed, as this lets you change and evolve it with ease.
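For reference, the Jersey exclusion described above might look like this in the pom.xml. The coordinates and version below are illustrative; use the ones your project actually declares:

```xml
<dependency>
    <groupId>org.glassfish.jersey.test-framework</groupId>
    <artifactId>jersey-test-framework-core</artifactId>
    <version>2.34</version>
    <scope>test</scope>
    <exclusions>
        <!-- Keep Jersey, but drop its transitive JUnit 4 dependency -->
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```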
Thread dump analysis is a traditional approach to analyzing performance bottlenecks in Java-based applications. In the modern era, APM tools provide metrics and screens to drill down into performance issues, even at the code level. But for some performance issues or occasions, thread dump analysis still stands as the best way to identify the bottleneck.

When To Use a Thread Dump

To analyze any performance issue, it is good to take a series of thread dumps with a 1-2 second gap between them. Taking 10-15 thread dumps at 1-2 second intervals helps identify threads that are stuck or executing the same code across the dumps. Thread dumps can be taken in the following scenarios:

1. The application is hung and not responding
2. The application takes a long time to respond
3. High CPU usage on the server where the application is running
4. An increase in active threads or in the total number of threads

Thread dumps are also sometimes generated automatically by application servers. For example, the WebSphere application server generates a thread dump on an OutOfMemoryError, which helps analyze the state of each thread at that moment.

For scenarios #1 and #2, focus on threads in the blocked, parked/waiting, and runnable states. For scenario #3, focus on threads in the runnable state; threads stuck in infinite loops can cause high CPU usage, and looking at runnable threads helps find them. For scenario #4, focus on threads in the runnable and parked/waiting states. In all scenarios, ignore idle pool threads in a parked or timed-waiting state that are simply waiting for tasks/requests to execute.

Analysis Tool Usage

Using a tool to analyze thread dumps will give many statistics about threads and their states. However, sometimes it may not reveal the real bottleneck in the system.
It is always better to go through the thread dumps manually, with an editor like Notepad++, and do the analysis yourself. Tools like the IBM Thread Dump Analyzer can be used when there are many thread dumps to analyze; seeing the dumps in an organized view speeds up the analysis process. Though it won't give the sophisticated statistics of online analysis tools, it helps to visualize a thread dump better, provides a view of threads blocked by other threads, and also helps to compare thread dumps.

While analyzing thread dumps, it is important to know which application server the dump was taken from, as that helps focus the analysis on the right threads. For example, if a thread dump was taken on the WebSphere application server, the "Web Container" thread pool should be the first place to start the analysis, as that is the entry point where WebSphere starts serving incoming requests.

Thread Dump Types

Generally, two kinds of threads appear in a thread dump. One category is related to the application and executes the application code. The other category performs operations such as reading/writing from the network, heartbeat checks, and various JVM internals such as GC. Depending on the problem, the focus should be given to one of these two categories. Most of the time the application code is the culprit for a performance bottleneck, so more focus should be given to the application threads.

Thread Pools

Thread dumps show the various thread pools available in the application. In the WebSphere application server, threads named "Web Container: <id>" belong to the WebSphere web container thread pool. The count of such threads should match the defined thread pool size; if it goes beyond that, it indicates a thread leak in the pool.
Each thread pool in the thread dump needs to be verified against its expected size. ForkJoinPool is another pool, used by Java's CompletableFuture to run tasks asynchronously. If too many asynchronous tasks land in this pool, its size needs to be increased, or a separate, bigger pool needs to be created; otherwise, the ForkJoinPool becomes a bottleneck for asynchronous task execution.

If the application creates a thread pool using the Java Executor framework, its threads get the default name "pool-<id1>-thread-<id2>", where id1 is the thread pool number and id2 is the thread's number within that pool. If developers keep creating new thread pools through the Executor framework without shutting them down, a new pool is created each time and the number of threads keeps growing. This may not cause a problem while the threads are idle, but it can end in an OutOfMemoryError once the JVM reaches the maximum number of threads it can create. While analyzing any thread dump, it is always good to look at the different thread pools and ensure all of them are within their defined/expected limits.

Application Methods

Focusing on the application methods in the stack traces of the thread dump helps analyze problems in the application code. If there are synchronized methods or blocks in the application, application threads will wait to acquire the lock on an object before entering that code. This is expensive, as only one thread at a time is allowed to execute the code while the other threads wait. This situation shows up in the thread dump as threads waiting to acquire the lock of an object. If the synchronization is not needed, the code can be modified to avoid it.
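The default pool naming described above is easy to verify; a small sketch (the class name is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolNaming {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // The default thread factory names worker threads "pool-<id1>-thread-<id2>"
        String workerName = pool.submit(() -> Thread.currentThread().getName()).get();
        System.out.println(workerName); // e.g. "pool-1-thread-1"

        pool.shutdown();
    }
}
```

Spotting many distinct pool numbers (pool-1, pool-2, ...) in a dump when the application is expected to have only one pool is a quick hint that pools are being created repeatedly without being shut down.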
Conclusion

Thread dumps contain various details about the JVM: its arguments, memory and GC-related information, the hardware it runs on, and so on. It is always worth going through these details, as they can help the analysis.
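Besides server-generated dumps and external tools, a thread dump can also be captured programmatically from inside the JVM, which is handy for automating the "series of dumps" approach described earlier. A minimal sketch using the standard ThreadMXBean API (the class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpSnapshot {
    public static void main(String[] args) {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();

        // true, true -> include locked monitors and ownable synchronizers
        for (ThreadInfo info : threadMXBean.dumpAllThreads(true, true)) {
            System.out.printf("\"%s\" id=%d state=%s%n",
                    info.getThreadName(), info.getThreadId(), info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
            System.out.println();
        }
    }
}
```

Calling this from a scheduled task every 1-2 seconds, 10-15 times, yields the series of dumps recommended above.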
I blogged about Java stream debugging in the past, but I skipped an important method that's worthy of a post of its own: peek. This blog post delves into the practicalities of using peek() to debug Java streams, complete with code samples and common pitfalls.

Understanding Java Streams

Java streams represent a significant shift in how Java developers work with collections and data processing, introducing a functional approach to handling sequences of elements. Streams facilitate declarative processing of collections, enabling operations such as filter, map, reduce, and more in a fluent style. This not only makes the code more readable but also more concise compared to traditional iterative approaches.

A Simple Stream Example

To illustrate, consider the task of filtering a list of names to only include those that start with the letter "J" and then transforming each name into uppercase. Using the traditional approach, this might involve a loop and some "if" statements. However, with streams, this can be accomplished in a few lines:

```java
List<String> names = Arrays.asList("John", "Jacob", "Edward", "Emily");

// Convert the list to a stream and chain the operations
List<String> filteredNames = names.stream()
        // Keep names that start with "J"
        .filter(name -> name.startsWith("J"))
        // Convert each name to uppercase
        .map(String::toUpperCase)
        // Collect results into a new list
        .collect(Collectors.toList());

System.out.println(filteredNames);
```

Output:

```
[JOHN, JACOB]
```

This example demonstrates the power of Java streams: by chaining operations together, we can achieve complex data transformations and filtering with minimal, readable code. It showcases the declarative nature of streams, where we describe what we want to achieve rather than detailing the steps to get there.

What Is the peek() Method?

At its core, peek() is a method provided by the Stream interface, allowing developers a glance into the elements of a stream without disrupting the flow of its operations.
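As a quick preview of that "glance", tagging each peek() log line with a label shows exactly which pipeline stage an element reached. A small self-contained sketch (the class name is illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PeekLabels {
    public static void main(String[] args) {
        List<String> result = Stream.of("apple", "banana", "cherry")
                // Label each log line so the pipeline stage is obvious
                .peek(s -> System.out.println("before filter: " + s))
                .filter(s -> s.startsWith("a"))
                .peek(s -> System.out.println("after filter: " + s))
                .collect(Collectors.toList());

        System.out.println(result); // prints [apple]
    }
}
```

Because streams process one element at a time through the whole pipeline, the log shows "before filter" for every element but "after filter" only for those that survive.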
The signature of peek() is as follows:

```java
Stream<T> peek(Consumer<? super T> action)
```

It accepts a Consumer functional interface, which means it performs an action on each element of the stream without altering them. The most common use case for peek() is logging the elements of a stream to understand the state of the data at various points in the stream pipeline. To understand peek, let's look at a sample similar to the previous one:

```java
List<String> collected = Stream.of("apple", "banana", "cherry")
        .filter(s -> s.startsWith("a"))
        .collect(Collectors.toList());

System.out.println(collected);
```

This code filters a list of strings, keeping only the ones that start with "a". While it's straightforward, what happens during the filter operation is not visible.

Debugging With peek()

Now, let's incorporate peek() to gain visibility into the stream:

```java
List<String> collected = Stream.of("apple", "banana", "cherry")
        .peek(System.out::println) // Logs all elements
        .filter(s -> s.startsWith("a"))
        .peek(System.out::println) // Logs filtered elements
        .collect(Collectors.toList());

System.out.println(collected);
```

By adding peek() both before and after the filter operation, we can see which elements are processed and how the filter impacts the stream. This visibility is invaluable for debugging, especially when the logic within the stream operations becomes complex. We can't step over stream operations with the debugger, but peek() provides a glance into code that is normally obscured from us.

Uncovering Common Bugs With peek()

Filtering Issues

Consider a scenario where a filter condition is not working as expected:

```java
List<String> collected = Stream.of("apple", "banana", "cherry", "Avocado")
        .filter(s -> s.startsWith("a"))
        .collect(Collectors.toList());

System.out.println(collected);
```

The expected output might be ["apple"], but let's say we also wanted "Avocado" and missed it due to a misunderstanding of the startsWith method's behavior.
Since "Avocado" is spelled with an uppercase "A", the expression "Avocado".startsWith("a") returns false. Using peek(), we can observe the elements that reach and pass the filter:

```java
List<String> debugged = Stream.of("apple", "banana", "cherry", "Avocado")
        .peek(System.out::println)
        .filter(s -> s.startsWith("a"))
        .peek(System.out::println)
        .collect(Collectors.toList());

System.out.println(debugged);
```

Large Data Sets

In scenarios involving large datasets, directly printing every element in the stream to the console for debugging can quickly become impractical. It can clutter the console and make it hard to spot the relevant information. Instead, we can use peek() in a more sophisticated way to selectively collect and analyze data without causing side effects that could alter the behavior of the stream.

Consider a scenario where we're processing a large dataset of transactions and want to debug issues related to transactions exceeding a certain threshold:

```java
class Transaction {
    private String id;
    private double amount;
    // Constructor, getters, and setters omitted for brevity
}

List<Transaction> transactions = // Imagine a large list of transactions

// A placeholder for debugging information
List<Transaction> highValueTransactions = new ArrayList<>();

List<Transaction> processedTransactions = transactions.stream()
        // Filter transactions above a threshold
        .filter(t -> t.getAmount() > 5000)
        .peek(t -> {
            if (t.getAmount() > 10000) {
                // Collect only high-value transactions for debugging
                highValueTransactions.add(t);
            }
        })
        .collect(Collectors.toList());

// Now we can analyze high-value transactions separately, without overloading the console
System.out.println("High-value transactions count: " + highValueTransactions.size());
```

In this approach, peek() is used to inspect elements within the stream conditionally. High-value transactions that meet a specific criterion (e.g., amount > 10,000) are collected into a separate list for further analysis.
This technique allows for targeted debugging without printing every element to the console, thereby avoiding performance degradation and clutter.

Addressing Side Effects

Streams shouldn't have side effects. In fact, such side effects would break the stream debugger in IntelliJ, which I have discussed in the past. It's crucial to note that while collecting data for debugging within peek() avoids cluttering the console, it does introduce a side effect into the stream operation, which goes against the recommended use of streams. Streams are designed to be side-effect-free to ensure predictability and reliability, especially in parallel operations. Therefore, while the above example demonstrates a practical use of peek() for debugging, such techniques should be used judiciously. Ideally, this debugging strategy should be temporary and removed once the debugging session is complete, to maintain the integrity of the stream's functional paradigm.

Limitations and Pitfalls

While peek() is undeniably a useful tool for debugging Java streams, it comes with its own set of limitations and pitfalls that developers should be aware of. Understanding these can help avoid common traps and ensure that peek() is used effectively and appropriately.

Potential for Misuse in Production Code

One of the primary risks associated with peek() is its potential for misuse in production code. Because peek() is intended for debugging purposes, using it to alter state or perform operations that affect the outcome of the stream can lead to unpredictable behavior. This is especially true in parallel stream operations, where the order of element processing is not guaranteed. Misusing peek() in such contexts can introduce hard-to-find bugs and undermine the declarative nature of stream processing.

Performance Overhead

Another consideration is the performance impact of using peek(). While it might seem innocuous, peek() can introduce significant overhead, particularly in large or complex streams.
This is because every action within peek() is executed for each element in the stream, potentially slowing down the entire pipeline. When used excessively or with complex operations, peek() can degrade performance, making it crucial to use this method judiciously and to remove any peek() calls from production code once debugging is complete.

Side Effects and Functional Purity

As highlighted in the enhanced debugging example, peek() can be used to collect data for debugging purposes, but this introduces side effects into what should ideally be a side-effect-free operation. The functional programming paradigm, which streams are part of, emphasizes purity and immutability: operations should not alter state outside their scope. By using peek() to modify external state (even for debugging), you're temporarily stepping away from these principles. While this can be acceptable for short-term debugging, it's important to ensure that such uses of peek() do not find their way into production code, as they can compromise the predictability and reliability of your application.

The Right Tool for the Job

Finally, it's essential to recognize that peek() is not always the right tool for every debugging scenario. In some cases, other techniques, such as logging within the operations themselves, using breakpoints and inspecting variables in an IDE, or writing unit tests to assert the behavior of stream operations, might be more appropriate and effective. Developers should consider peek() one tool in a broader debugging toolkit, employing it when it makes sense and opting for other strategies when they offer a clearer or more efficient path to identifying and resolving issues.

Navigating the Pitfalls

To navigate these pitfalls effectively:

- Reserve peek() strictly for temporary debugging purposes. If you have a linter as part of your CI tools, it might make sense to add a rule that blocks code from invoking peek().
- Always remove peek() calls from your code before committing it to your codebase, especially for production deployments.
- Be mindful of the performance implications and the potential introduction of side effects.
- Consider alternative debugging techniques that might be better suited to your specific needs or the particular issue you're investigating.

By understanding and respecting these limitations and pitfalls, developers can leverage peek() to enhance their debugging practices without falling into common traps or inadvertently introducing problems into their codebases.

Final Thoughts

The peek() method offers a simple yet effective way to gain insight into Java stream operations, making it a valuable tool for debugging complex stream pipelines. By understanding how to use peek() effectively, developers can avoid common pitfalls and ensure their stream operations perform as intended. As with any powerful tool, the key is to use it wisely and in moderation. The true value of peek() lies in debugging massive data sets; such data is very hard to analyze even with dedicated tools. With peek(), we can dig into the data set and understand the source of an issue programmatically.