NevarokML: Reinforcement Learning

NevarokML is a powerful machine learning plugin for Unreal Engine that allows developers to implement reinforcement learning (RL) capabilities within their projects. By creating a custom environment and agent, developers can train RL models to optimize agent behavior and accomplish specific tasks. This documentation page will guide you through the process of setting up and performing reinforcement learning using NevarokML in Unreal Engine.


Prerequisites

Before you begin, make sure you have the NevarokML plugin installed in your Unreal Engine project. Follow the installation instructions to integrate it into your project.

Creating the RL Environment

Creating a Blueprint Derived from ANevarokMLEnv

To create the RL environment, you need to create a Blueprint class that derives from ANevarokMLEnv. This Blueprint will define the interactions between the agent and the environment.

  1. In the Content Browser, right-click and create a new Blueprint Class.
  2. Search for "NevarokML" and select "ANevarokMLEnv" as the parent class.
  3. Name the Blueprint class, e.g., "BP_BasicEnv," and click "Create."

[Figure: Creation of a Blueprint class derived from ANevarokMLEnv]

Implementing OnInit, OnStep, and OnReset Events

In the BP_BasicEnv Blueprint, you need to implement the OnInit, OnStep, and OnReset events. These events define how the environment initializes, updates during each time step, and resets when an episode is complete.

  1. Open the BP_BasicEnv Blueprint in the Blueprint Editor.
  2. In the Event Graph, add event nodes for OnInit, OnStep, and OnReset.
  3. Implement each event according to your RL environment's specific requirements.
  4. Use the NevarokML API to update observations, perform actions, add rewards, and check for episode completion (see the sketch after this list).
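
For orientation, here is a minimal C++ mirror of the same structure. Only ANevarokMLEnv and the event names OnInit, OnStep, and OnReset come from this page; the header path and the exact signatures are assumptions about the plugin's API, so treat this as an illustrative sketch rather than the actual NevarokML interface.

    // Hypothetical sketch: mirrors the Blueprint events in C++. The header
    // name and method signatures are assumed, not taken from NevarokML.
    #include "NevarokMLEnv.h"
    #include "BasicEnv.generated.h"

    UCLASS()
    class ABasicEnv : public ANevarokMLEnv
    {
        GENERATED_BODY()

    public:
        // Runs once when the environment is created: cache references, spawn
        // the agent, and set up anything that outlives a single episode.
        void OnInit()
        {
        }

        // Runs every time step: write the current observation, apply the
        // agent's latest action, add a reward, and flag episode completion
        // through the corresponding NevarokML API calls.
        void OnStep()
        {
        }

        // Runs when an episode completes: restore the agent and the world to
        // a valid starting state before the next episode begins.
        void OnReset()
        {
        }
    };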

[Figure: ANevarokMLEnv Event Graph nodes: OnInit, OnStep, and OnReset]

[Figure: ANevarokMLEnv Event Graph example]

Creating the RL Trainer

Creating a Blueprint Derived from ANevarokMLTrainer

To create the RL trainer, you need to create a Blueprint class that derives from ANevarokMLTrainer. This Blueprint will define the learning process and control the RL environments.

  1. In the Content Browser, right-click and create a new Blueprint Class.
  2. Search for "NevarokML" and select "ANevarokMLTrainer" as the parent class.
  3. Name the Blueprint class, e.g., "BP_BasicTrainer," and click "Create."

[Figure: Creation of a Blueprint class derived from ANevarokMLTrainer]

Implementing OnConstruct and OnStart Events

In the BP_BasicTrainer Blueprint, you need to implement the OnConstruct and OnStart events. The OnConstruct event sets up the agent's observation and action spaces, while the OnStart event starts the learning process.

  1. Open the BP_BasicTrainer Blueprint in the Blueprint Editor.
  2. In the Event Graph, add event nodes for OnConstruct and OnStart.
  3. In the OnConstruct event, use the NevarokML API to set up the agent's observation and action spaces according to your needs.
  4. In the OnStart event, use the NevarokML API to create the preferred RL algorithm (PPO, DQN, etc.) and set its parameters.
  5. Pass the algorithm to the Learn function to initiate the learning process (see the sketch after this list).
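
As a rough C++ analogue of the two events, the sketch below shows the intended flow: declare the spaces in OnConstruct, then build an algorithm and hand it to Learn in OnStart. Only ANevarokMLTrainer, OnConstruct, OnStart, Learn, and the PPO/DQN names appear on this page; every other identifier below is a hypothetical placeholder, not the real NevarokML API.

    // Hypothetical sketch: the flow comes from this page, the signatures do
    // not. All commented-out calls are placeholders.
    #include "NevarokMLTrainer.h"
    #include "BasicTrainer.generated.h"

    UCLASS()
    class ABasicTrainer : public ANevarokMLTrainer
    {
        GENERATED_BODY()

    public:
        void OnConstruct()
        {
            // Describe what the agent observes and what actions it can take.
            // Placeholder calls; the real space-setup API may differ:
            // SetObservationSpace(/* e.g. a box of N floats */);
            // SetActionSpace(/* e.g. discrete or continuous actions */);
        }

        void OnStart()
        {
            // Create the preferred algorithm (PPO, DQN, ...), configure it,
            // then hand it to Learn to begin training. Placeholder calls:
            // UObject* Algorithm = CreatePPO(/* hyperparameters */);
            // Learn(Algorithm);
        }
    };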

[Figure: ANevarokMLTrainer OnConstruct example]

[Figure: ANevarokMLTrainer OnStart example]

Training the RL Agent

Choosing an RL Algorithm

Select the RL algorithm that best fits your environment and requirements. NevarokML supports a range of RL algorithms (such as the PPO and DQN mentioned above), each of which can be configured to suit different tasks.

Configuring the Algorithm

Use the NevarokML API to configure the chosen RL algorithm within the BP_BasicTrainer Blueprint. Set its hyperparameters (learning rate, discount factor, batch size, and so on) and any other configuration the algorithm requires.
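
For a sense of what such a configuration involves, typical PPO-style hyperparameters include the learning rate, the discount factor gamma, the rollout length, and the minibatch size. The commented sketch below uses common PPO default values; the CreatePPO call and its argument names are hypothetical placeholders, not the actual NevarokML API.

    // Hypothetical configuration sketch with common PPO default values.
    // "CreatePPO" and the argument names are placeholders.
    // UObject* Algorithm = CreatePPO(
    //     /* LearningRate */ 0.0003f,  // step size for policy updates
    //     /* Gamma        */ 0.99f,    // discount factor for future rewards
    //     /* NSteps       */ 2048,     // environment steps per update
    //     /* BatchSize    */ 64);      // minibatch size per gradient step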

Running the Learning Process

Placing the Actors in the Level

In the Level, place the BP_BasicTrainer Actor and the BP_BasicEnv Actor.

  1. Drag the BP_BasicTrainer Actor from the Content Browser into the Level.
  2. Similarly, place the BP_BasicEnv Actor in the Level.
  3. Add a reference to the placed BP_BasicEnv Actor to the BP_BasicTrainer (see the sketch after this list).
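
In C++ terms, the hookup in step 3 could correspond to an exposed property on the trainer that the placed environment Actor is assigned to. This is a hypothetical sketch; the actual property name, and whether the trainer accepts one environment or several, depends on NevarokML's real API.

    // Hypothetical: an editable environment reference on the trainer,
    // fillable from the placed Actor's Details panel in the Level.
    UPROPERTY(EditAnywhere, Category = "NevarokML")
    ANevarokMLEnv* Env = nullptr;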

[Figure: Referencing BP_BasicEnv in BP_BasicTrainer]

Starting the Learning Process

After placing the Actors in the Level, press the Play button to start the learning process. The RL agent will interact with the environment, collect experiences, and update its policy during training.

[Figure: Learning Process example]

[Figure: TensorBoard Logging]
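
If your trainer is set up to write TensorBoard logs, as the screenshot suggests, you can typically monitor training progress by pointing TensorBoard at the log directory (for example, running tensorboard --logdir <your log directory> from a terminal) and opening the reported local URL in a browser. Where the logs are written depends on how the trainer is configured.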

Importing the Trained Model

Upon completion of the training process, you can import the trained RL model using the regular Unreal Engine import flow. This lets you run the trained agent in your Unreal Engine project for inference and testing.

Conclusion

Congratulations! You have successfully set up and implemented reinforcement learning in Unreal Engine using NevarokML. By creating custom environments and agents and configuring RL algorithms, you can train intelligent agents for various tasks and scenarios.

For more in-depth information on NevarokML's features and capabilities, refer to the official NevarokML documentation and additional resources.