
[FEAT] Add Dung Beetle Optimizer (DBO) to swarm_based#213

Open
ErenKayacilar wants to merge 4 commits into thieu1995:master from ErenKayacilar:feature/dung-beetle-optimizer

Conversation


@ErenKayacilar ErenKayacilar commented Dec 9, 2025

This PR adds a new swarm-based optimizer: Dung Beetle Optimizer (DBO).

🔧 What’s Included

  • Implemented OriginalDBO in mealpy/swarm_based/DBO.py
  • Exposed the algorithm in mealpy/__init__.py
  • Follows the structure and coding style of existing swarm-based optimizers (e.g., COA, BFO, DMOA)
  • Behavior based on the original paper:

    Xue, J., & Shen, B. (2022). Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. The Journal of Supercomputing, 79, 7305–7336.

  • Successfully tested on the Sphere benchmark function to verify convergence and correct integration with MEALPY

Closes #


📑 Description

Adds an implementation of the Dung Beetle Optimizer inspired by the original paper and integrates it into the MEALPY swarm-based module. Ensures compatibility with the current optimizer interface, population handling structure, and evaluation pipeline.


✅ Checks

  • My pull request adheres to the code style of this project
  • My code requires changes to the documentation
  • I have updated the documentation as required
  • All local tests run successfully (basic Sphere function test)

ℹ Additional Information

No breaking changes.
The implementation is self-contained and does not modify other algorithms.

Owner

@thieu1995 thieu1995 left a comment


There are many more problems I can see in the code. I will update the code for better performance.

Generate an empty agent (without target).

DBO does not require any additional attributes per agent,
therefore only the solution vector is stored.
Owner


You should delete this function if you don't have any custom property in the agent.
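To illustrate the point with simplified stand-ins (these are not mealpy's real classes, just a toy sketch of the pattern): when an optimizer needs no custom per-agent attributes, the inherited agent factory already does everything required, so the override can simply be deleted.

```python
import numpy as np

# Illustrative stand-ins (NOT mealpy's real classes): when an optimizer needs
# no custom per-agent attributes, the inherited agent factory is enough and
# the subclass override can simply be deleted.
class Agent:
    def __init__(self, solution):
        self.solution = solution

class BaseOptimizer:
    def generate_empty_agent(self, solution=None):
        # The parent already handles the default case:
        # only the solution vector is stored.
        if solution is None:
            solution = np.random.uniform(-1.0, 1.0, 3)
        return Agent(solution)

class DBO(BaseOptimizer):
    pass  # no generate_empty_agent override needed
```

`DBO().generate_empty_agent()` works unchanged through inheritance, which is exactly why the redundant override adds nothing.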

epoch (int): The current iteration.
"""
# Make sure previous positions array is aligned with the population
if self._prev_positions is None or len(self._prev_positions) != self.pop_size:
Owner


No need for this condition. It will always be the same size.

[agent.solution.copy() for agent in self.pop]
)

pop_array = np.array([agent.solution.copy() for agent in self.pop])
Owner


No need for the copy() operator here, because you don't change pop_array; you just read it.
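A quick check backs this up (illustrative snippet, not mealpy code): `np.array(...)` over a list of 1-D solution vectors already copies the data into a fresh 2-D array, so the extra per-solution `.copy()` buys nothing.

```python
import numpy as np

# np.array(...) over a list of equal-length 1-D arrays stacks them into a
# NEW 2-D array with its own data, so a per-element .copy() is redundant.
solutions = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
pop_array = np.array(solutions)   # no .copy() needed
pop_array[0, 0] = 99.0            # mutate the stacked array...
print(solutions[0][0])            # ...the source vectors are untouched -> 1.0
```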

)

pop_array = np.array([agent.solution.copy() for agent in self.pop])
prev_array = self._prev_positions.copy()
Owner


Same for this array. In fact, you can just use one variable, defined as self.prev_pos. No need for copy() and the extra prev_array (wasted memory).

pop_array = np.array([agent.solution.copy() for agent in self.pop])
prev_array = self._prev_positions.copy()

# Global best / worst positions (bestX and worstX in the paper)
Owner


Delete all of this block. You can use this function instead: https://github.com/thieu1995/mealpy/blob/master/mealpy/optimizer.py#L511

Collaborator

@anh9895 anh9895 left a comment


Please fix all the comments.

self.pop + pop_new, self.pop_size, self.problem.minmax
)

# Update previous positions x(t−1) for the next iteration
Collaborator


This code should appear before the "Merge old and new populations" step.
Otherwise, self._prev_positions and self.pop are always the same.

79, 7305–7336.
"""

def __init__(
Collaborator


Please write the signature inline instead of breaking it across lines like this.

@ErenKayacilar
Author

Hi @thieu1995 and @anh9895,

First of all, I sincerely apologize for the long delay in my response. I had some personal/academic commitments that kept me away from this project for a while.

Thank you so much for the detailed feedback and code reviews. I've carefully reviewed all your comments regarding memory optimization, code structure, and the positioning of the population updates.

I am still very much interested in contributing the Dung Beetle Optimizer to Mealpy. I will start implementing the requested changes and resolve the merge conflicts as soon as possible.

Thanks for your patience

  • Removed the custom generate_empty_agent() override since DBO has no additional agent properties; the parent class method is sufficient.
  • Removed the unused Agent import.
  • Removed the unnecessary _prev_positions size check in evolve(); the population size is always consistent.
  • Removed redundant copy() calls on pop_array and g_best/g_worst, since they are only read, not mutated.
  • Eliminated the extra prev_array variable; self._prev_positions is now used directly to avoid wasted memory.
  • Replaced the manual best/worst agent logic with the built-in self.get_worst_agent() from the Optimizer base class.
  • Moved the _prev_positions update to before the merge step so it correctly captures the current population state, not the already-merged one.
  • Reformatted the __init__ signature to inline style per the code style guidelines.