Conversation
MrIron-no commented Oct 18, 2025
- Add support for CAP LS 302 (incl. CAP NEW / CAP DEL and cap-notify)
- Add support for SASL authentication; the authentication layer is set by a new netconf feature
dcb889a to 6da057e
Introduce SASL authentication integrated with the network configuration (netconf) feature so services can drive auth settings across the mesh. Introduce IRCv3 CAP LS 302 support, including CAP NEW and CAP DEL (cap-notify), alongside the SASL client flow.
6da057e to 72f693c
ircd/s_err.c
Outdated
```c
{ ERR_SASLFAIL, ":%s", "904" },
/* 905 */
{ ERR_SASLTOOLONG, ":SASL message too long", "905" },
/* 909 */
```
Misleading comment; this should be 906. There seem to be quite a few mismatched comments. They won't cause errors, but could cause confusion :)
```diff
-/* 909 */
+/* 906 */
```
ircd/m_sasl.c
Outdated
```c
if (!sasl_mechanism_supported(parv[1]))
  return send_reply(cptr, RPL_SASLMECHS, netconf_str(NETCONF_SASL_MECHANISMS));

cli_sasl(cptr) = ++routing_ticker;
```
Quite unlikely to happen, but if the ticker wraps it goes back to 0, which means "no session"? A safer approach could be:
```diff
-cli_sasl(cptr) = ++routing_ticker;
+if (++routing_ticker == 0)
+  ++routing_ticker;
+cli_sasl(cptr) = routing_ticker;
```
ircd/sasl.c
Outdated
```c
  return NULL;

/* Search through all local clients */
for (i = 0; i < MAXCONNECTIONS; i++) {
```
This is a costly O(n) operation that will always iterate over the configured maximum number of connections. One quick fix makes it a little better, but it is still a scan (see the suggestion below).
```diff
-for (i = 0; i < MAXCONNECTIONS; i++) {
+for (i = 0; i <= HighestFd; i++) {
```

(Note `<=`: HighestFd is itself a live descriptor, so the bound is inclusive.)
Another approach, which eliminates the scan completely, would be to refactor this part to store client pointers in a struct like:

```c
#define SASL_SESSION_MAX 256

struct SaslSession {
  uint64_t cookie;
  struct Client* client;
};

static struct SaslSession sasl_sessions[SASL_SESSION_MAX];
```

Add two functions to add to and remove from that table, and modify find_sasl_client() to do a direct lookup in sasl_sessions using the cookie, which eliminates the scan completely.
ircd/m_cap.c
Outdated
```c
if (capab_list[i].cap == (1u << cap)) {
  cap_index = i;
  cap_name = capab_list[i].name;
  flags = capab_list[i].flags;
```
ircd/m_cap.c
Outdated
```c
  return;

/* Iterate through all local clients */
for (i = 0; i < MAXCONNECTIONS; i++) {
```
Another O(n) scan. Maybe use HighestFd instead.
ircd/m_cap.c
Outdated
```c
}

/* Iterate through all local clients */
for (i = 0; i < MAXCONNECTIONS; i++) {
```
Another O(n) scan. Maybe use HighestFd instead.
ircd/m_sasl.c
Outdated
```c
static void sasl_start_timeout(struct Client* cptr)
{
  struct Timer* timer;
  const char* timeout_str;
```
timeout_str and timeout_seconds appear to be unused.
ircd/s_bsd.c
Outdated
```c
det_confs_butmask(cptr, 0);

/* Clean up SASL timer if it exists */
if (cli_sasl_timer(cptr) && t_active(cli_sasl_timer(cptr))) {
```
I think this is a redundant check; the following should be enough:
```diff
-if (cli_sasl_timer(cptr) && t_active(cli_sasl_timer(cptr))) {
+if (t_active(cli_sasl_timer(cptr))) {
```
```c
send_reply(cptr, ERR_SASLFAIL, "Authentication timed out");

/* Clear SASL session */
cli_sasl(cptr) = 0;
```
Shouldn't this also clean up its own timer? E.g. by adding timer_del(cli_sasl_timer(cptr));
Add integration tests for PR #66 (CAP LS 302, cap-notify, SASL capability). Move pyproject.toml and uv.lock into tests/ so the Python test harness is self-contained — run with `cd tests && uv sync && uv run pytest`. Update all imports from `tests.irc_client` to `irc_client` and fix conftest.py to run docker compose from the repo root.