feat: added uds support for ketama && load-balancing #141

Open · wants to merge 3 commits into main
Conversation

@ZhangHanDong commented Mar 16, 2024

Added UDS (Unix domain socket) support for pingora-ketama and pingora-load-balancing.

> cargo test -p pingora-ketama

test result: ok. 10 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.27s
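
For context, here is a minimal usage sketch of what a ketama ring with a UDS backend could look like after this change. The inet part roughly follows pingora-ketama's documented Bucket/Continuum example; the UDS line is a hypothetical illustration only, and the actual constructor/type in this PR may differ:

```rust
use pingora_ketama::{Bucket, Continuum};

fn main() {
    // Inet buckets, roughly following pingora-ketama's existing example.
    let buckets = vec![
        Bucket::new("127.0.0.1:12345".parse().unwrap(), 1),
        Bucket::new("127.0.0.2:12345".parse().unwrap(), 2),
        // Hypothetical UDS bucket: after this change a bucket could also be
        // keyed by a socket path (the exact constructor may differ in the PR):
        // Bucket::new("/tmp/backend.sock".parse().unwrap(), 1),
    ];

    let ring = Continuum::new(&buckets);
    // Consistent hashing: the same key always maps to the same bucket.
    let node = ring.node(b"some_key");
    println!("selected node: {:?}", node);
}
```

The same idea would presumably apply on the pingora-load-balancing side, where a Backend could carry a UDS path instead of an IP:port.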

@andrewhavck @eaufavor

@ZhangHanDong (Author)

I'm currently running into an issue with local testing and need help resolving it:

All tests that rely on network connections fail because I'm in China. Setting up a global proxy doesn't help, and running proxychains4 cargo test in the terminal to route the tests through a proxy also fails.

The error log is as follows:

---- connectors::http::v1::tests::test_connect stdout ----
thread 'connectors::http::v1::tests::test_connect' panicked at pingora-core/src/connectors/http/v1.rs:101:9:
assertion failed: reused
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- connectors::l4::tests::test_conn_error_addr_not_avail stdout ----
thread 'connectors::l4::tests::test_conn_error_addr_not_avail' panicked at pingora-core/src/connectors/l4.rs:215:32:
called `Result::unwrap_err()` on an `Ok` value: Stream { stream: BufStream { inner: BufReader { reader: BufWriter { writer: Tcp(PollEvented { io: Some(TcpStream { addr: 127.0.0.1:62346, peer: 127.0.0.1:7890, fd: 40 }) }), buffer: 0/1460, written: 0 }, buffer: 0/65536 } }, buffer_write: true, proxy_digest: None, socket_digest: Some(SocketDigest { raw_fd: 40, peer_addr: OnceCell(Some(Inet(127.0.0.1:121))), local_addr: OnceCell(Uninit) }), established_ts: SystemTime { tv_sec: 1710579265, tv_nsec: 736177000 }, tracer: None }

---- connectors::l4::tests::test_conn_error_other stdout ----
thread 'connectors::l4::tests::test_conn_error_other' panicked at pingora-core/src/connectors/l4.rs:224:33:
called `Result::unwrap_err()` on an `Ok` value: Stream { stream: BufStream { inner: BufReader { reader: BufWriter { writer: Tcp(PollEvented { io: Some(TcpStream { addr: 127.0.0.1:62348, peer: 127.0.0.1:7890, fd: 40 }) }), buffer: 0/1460, written: 0 }, buffer: 0/65536 } }, buffer_write: true, proxy_digest: None, socket_digest: Some(SocketDigest { raw_fd: 40, peer_addr: OnceCell(Some(Inet(240.0.0.1:80))), local_addr: OnceCell(Uninit) }), established_ts: SystemTime { tv_sec: 1710579265, tv_nsec: 738097000 }, tracer: None }

---- connectors::l4::tests::test_conn_error_refused stdout ----
thread 'connectors::l4::tests::test_conn_error_refused' panicked at pingora-core/src/connectors/l4.rs:199:32:
called `Result::unwrap_err()` on an `Ok` value: Stream { stream: BufStream { inner: BufReader { reader: BufWriter { writer: Tcp(PollEvented { io: Some(TcpStream { addr: 127.0.0.1:62349, peer: 127.0.0.1:7890, fd: 40 }) }), buffer: 0/1460, written: 0 }, buffer: 0/65536 } }, buffer_write: true, proxy_digest: None, socket_digest: Some(SocketDigest { raw_fd: 40, peer_addr: OnceCell(Some(Inet(127.0.0.1:79))), local_addr: OnceCell(Uninit) }), established_ts: SystemTime { tv_sec: 1710579265, tv_nsec: 738774000 }, tracer: None }

---- connectors::l4::tests::test_conn_timeout stdout ----
thread 'connectors::l4::tests::test_conn_timeout' panicked at pingora-core/src/connectors/l4.rs:235:32:
called `Result::unwrap_err()` on an `Ok` value: Stream { stream: BufStream { inner: BufReader { reader: BufWriter { writer: Tcp(PollEvented { io: Some(TcpStream { addr: 127.0.0.1:62351, peer: 127.0.0.1:7890, fd: 40 }) }), buffer: 0/1460, written: 0 }, buffer: 0/65536 } }, buffer_write: true, proxy_digest: None, socket_digest: Some(SocketDigest { raw_fd: 40, peer_addr: OnceCell(Some(Inet(192.0.2.1:79))), local_addr: OnceCell(Uninit) }), established_ts: SystemTime { tv_sec: 1710579265, tv_nsec: 739690000 }, tracer: None }

---- connectors::tests::test_conn_timeout stdout ----
thread 'connectors::tests::test_conn_timeout' panicked at pingora-core/src/connectors/mod.rs:403:22:
should throw an error

---- connectors::tests::test_conn_timeout_with_offload stdout ----
thread 'connectors::tests::test_conn_timeout_with_offload' panicked at pingora-core/src/connectors/mod.rs:403:22:
should throw an error

---- connectors::tests::test_connect stdout ----
thread 'connectors::tests::test_connect' panicked at pingora-core/src/connectors/mod.rs:380:9:
assertion failed: reused

---- connectors::tests::test_connector_bind_to stdout ----
thread 'connectors::tests::test_connector_bind_to' panicked at pingora-core/src/connectors/mod.rs:429:28:
called `Result::unwrap_err()` on an `Ok` value: Stream { stream: BufStream { inner: BufReader { reader: BufWriter { writer: Tcp(PollEvented { io: Some(TcpStream { addr: 127.0.0.1:62368, peer: 127.0.0.1:7890, fd: 40 }) }), buffer: 0/1460, written: 0 }, buffer: 0/65536 } }, buffer_write: true, proxy_digest: None, socket_digest: Some(SocketDigest { raw_fd: 40, peer_addr: OnceCell(Some(Inet(240.0.0.1:80))), local_addr: OnceCell(Uninit) }), established_ts: SystemTime { tv_sec: 1710579265, tv_nsec: 903023000 }, tracer: None }

---- connectors::tests::test_do_connect_with_total_timeout stdout ----
thread 'connectors::tests::test_do_connect_with_total_timeout' panicked at pingora-core/src/connectors/mod.rs:441:22:
should throw an error

---- connectors::tests::test_do_connect_without_total_timeout stdout ----
thread 'connectors::tests::test_do_connect_without_total_timeout' panicked at pingora-core/src/connectors/mod.rs:441:22:
should throw an error

---- connectors::tests::test_tls_connect_timeout_supersedes_total stdout ----
thread 'connectors::tests::test_tls_connect_timeout_supersedes_total' panicked at pingora-core/src/connectors/mod.rs:441:22:
should throw an error

---- listeners::l4::test::test_listen_tcp_ipv6_only stdout ----
thread 'listeners::l4::test::test_listen_tcp_ipv6_only' panicked at pingora-core/src/listeners/l4.rs:292:14:
cannot connect to v4 addr: PollEvented { io: Some(TcpStream { addr: 127.0.0.1:62377, peer: 127.0.0.1:7890, fd: 48 }) }

---- listeners::test::test_listen_tls stdout ----
thread 'listeners::test::test_listen_tls' panicked at pingora-core/src/listeners/mod.rs:245:70:
called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(7103), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, "unexpected eof while tunneling") }

---- connectors::http::tests::test_connect_h1 stdout ----
[2024-03-16T08:54:26Z ERROR pingora_core::protocols] Crit: FD mismatch: fd: 49, addr: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [1, 187, 1, 1, 1, 1], __ss_align: 0 } }, peer: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [30, 210, 127, 0, 0, 1], __ss_align: 0 } }
thread 'connectors::http::tests::test_connect_h1' panicked at pingora-core/src/connectors/http/mod.rs:155:9:
assertion failed: reused

---- connectors::tests::test_connect_tls stdout ----
[2024-03-16T08:54:26Z ERROR pingora_core::protocols] Crit: FD mismatch: fd: 44, addr: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [1, 187, 1, 1, 1, 1], __ss_align: 0 } }, peer: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [30, 210, 127, 0, 0, 1], __ss_align: 0 } }
thread 'connectors::tests::test_connect_tls' panicked at pingora-core/src/connectors/mod.rs:394:9:
assertion failed: reused

---- connectors::http::tests::test_connect_h2_fallback_h1_reuse stdout ----
[2024-03-16T08:54:26Z ERROR pingora_core::protocols] Crit: FD mismatch: fd: 42, addr: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [1, 187, 1, 1, 1, 1], __ss_align: 0 } }, peer: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [30, 210, 127, 0, 0, 1], __ss_align: 0 } }
thread 'connectors::http::tests::test_connect_h2_fallback_h1_reuse' panicked at pingora-core/src/connectors/http/mod.rs:188:9:
assertion failed: reused

---- connectors::http::tests::test_connect_prefer_h1 stdout ----
[2024-03-16T08:54:26Z ERROR pingora_core::protocols] Crit: FD mismatch: fd: 41, addr: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [1, 187, 1, 1, 1, 1], __ss_align: 0 } }, peer: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [30, 210, 127, 0, 0, 1], __ss_align: 0 } }
thread 'connectors::http::tests::test_connect_prefer_h1' panicked at pingora-core/src/connectors/http/mod.rs:215:9:
assertion failed: reused

---- connectors::http::v1::tests::test_connect_tls stdout ----
[2024-03-16T08:54:26Z ERROR pingora_core::protocols] Crit: FD mismatch: fd: 45, addr: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [1, 187, 1, 1, 1, 1], __ss_align: 0 } }, peer: SockaddrStorage { ss: sockaddr_storage { ss_len: 16, ss_family: 2, __ss_pad1: [30, 210, 127, 0, 0, 1], __ss_align: 0 } }
thread 'connectors::http::v1::tests::test_connect_tls' panicked at pingora-core/src/connectors/http/v1.rs:122:9:
assertion failed: reused


failures:
    connectors::http::tests::test_connect_h1
    connectors::http::tests::test_connect_h2_fallback_h1_reuse
    connectors::http::tests::test_connect_prefer_h1
    connectors::http::v1::tests::test_connect
    connectors::http::v1::tests::test_connect_tls
    connectors::l4::tests::test_conn_error_addr_not_avail
    connectors::l4::tests::test_conn_error_other
    connectors::l4::tests::test_conn_error_refused
    connectors::l4::tests::test_conn_timeout
    connectors::tests::test_conn_timeout
    connectors::tests::test_conn_timeout_with_offload
    connectors::tests::test_connect
    connectors::tests::test_connect_tls
    connectors::tests::test_connector_bind_to
    connectors::tests::test_do_connect_with_total_timeout
    connectors::tests::test_do_connect_without_total_timeout
    connectors::tests::test_tls_connect_timeout_supersedes_total
    listeners::l4::test::test_listen_tcp_ipv6_only
    listeners::test::test_listen_tls

test result: FAILED. 93 passed; 19 failed; 1 ignored; 0 measured; 0 filtered out; finished in 3.72s

@ZhangHanDong (Author)

I set up a local network proxy and ran some tests (cargo test), but they failed. I then tried running the tests in a GitHub Codespace, and a different set of tests failed. I'm not sure how to proceed with testing. Can the project provide a CI (Continuous Integration) test?

@andrewhavck @eaufavor

@eaufavor added the enhancement (New feature or request) label on Mar 18, 2024
@eaufavor (Member)

> Can the project provide a CI

We have our GitHub Actions setup here.
I think you can also run it in your personal forked repo.

@ZhangHanDong (Author) commented Mar 20, 2024

@eaufavor

Currently, CI runs successfully on the branch of my forked repo.

[Screenshot: CI passing on the forked branch, 2024-03-20]

Additionally, I've noticed that TinyLfu's tests fail intermittently. Specifically, in the test_tiny_lfu() test, the last line, assert_eq!(tiny.incr(2), 2);, sometimes fails because the two sides are not equal.

This could be caused by a concurrency-safety bug in the current implementation of TinyLfu. Of course, this bug is unrelated to my current PR; I might open another PR to investigate or fix it.
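
For illustration only (this is not pingora's TinyLfu code), the kind of lost-update race such a bug could cause looks like the sketch below: an increment implemented as a separate load and store can drop concurrent updates, so the returned count ends up lower than expected.

```rust
use std::sync::atomic::{AtomicU8, Ordering};
use std::sync::Arc;
use std::thread;

// Illustrative only -- NOT pingora's TinyLfu. A counter whose increment is a
// separate load + store (instead of a single fetch_add) can lose updates when
// two threads race, producing a smaller count than the true one.
struct RacyCounter(AtomicU8);

impl RacyCounter {
    fn incr(&self) -> u8 {
        let v = self.0.load(Ordering::Relaxed).saturating_add(1);
        self.0.store(v, Ordering::Relaxed); // a concurrent incr may be overwritten here
        v
    }
}

fn main() {
    let c = Arc::new(RacyCounter(AtomicU8::new(0)));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&c);
            thread::spawn(move || {
                for _ in 0..50 {
                    c.incr();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // The "true" count is 4 * 50 = 200, but lost updates usually leave a
    // smaller value behind -- the same shape of mismatch as the failing
    // assert_eq!(tiny.incr(2), 2).
    println!("observed count: {}", c.0.load(Ordering::Relaxed));
}
```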

@eaufavor self-assigned this on Mar 29, 2024
@gumpt (Contributor) commented Apr 19, 2024

> This could be caused by a concurrency-safety bug in the current implementation of TinyLfu. Of course, this bug is unrelated to my current PR; I might open another PR to investigate or fix it.

We've noticed this and it's unrelated to this PR. It will be fixed on main soon. EDIT: Fixed with commit 01c6965.

Labels: enhancement (New feature or request)
3 participants