Fixing the DEADLOCK in handshake -> channel creation
- this bug was initially called the imbris bug, as it happened on his runners and I couldn't reproduce it locally at first :)
- during a Handshake a separate mpsc::Channel was created for (Cid, Frame) transport,
  however the protocol could already catch non-handshake data and push it into this
  mpsc::Channel.
  Then this channel got dropped and a fresh one was created for the network::Channel.
  These dropped Frames are of course a BUG!
  I tried multiple things to solve this:
   - don't create a new mpsc::Channel, but instead bind it to the Protocol itself and always use that one.
     This would work in theory, but on the bParticipant side we are using 1 mpsc::Channel<(Cid, Frame)>
     to handle ALL the network::Channels.
     If every Protocol had its own, and with that every network::Channel had its own, it would no longer work out.
     Bad idea...
   - using the first method but creating the mpsc::Channel inside the scheduler instead of the protocol doesn't work either, as the
     scheduler doesn't know the remote_pid yet
   - I don't want a hack that tells the protocol to only listen for 2 messages and then stop no matter what
  So I switched over to the simple method now (see the sketch below):
   - do everything like before with 2 mpsc::Channels
   - after the handshake, close the receiver and listen for all remaining (cid, frame) combinations
   - when starting the channel, reapply them to the new sender/listener combination
- added tracing
- switched the Protocol RwLock to a Mutex, as there is only ever 1
- additionally changed the layout and introduced the c2w_frame_s and w2s_cid_frame_s naming schema
- fixed a bug in the scheduler which WOULD cause a DEADLOCK if the handshake failed
- fixed a bug in api_send_send_main: I need to store the stream_p, otherwise it's immediately closed and a stream_a.send() isn't guaranteed
- added an extra test to verify that a sent message is received even if the Stream is already closed
- changed OutGoing to Outgoing
- fixed a bug where `metrics.tick()` was never called
- removed 2 unused nightly features and added `deny_code`
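A minimal sketch of that hand-over idea using futures' mpsc channels (the `Cid`/`Frame` placeholders and names like `handshake_r`/`new_s` are illustrative only, not the crate's actual identifiers):

```rust
use futures::{channel::mpsc, executor::block_on, SinkExt, StreamExt};

type Cid = u64; // placeholder for the real channel id type
struct Frame;   // placeholder for the real Frame type

/// Close the handshake channel and re-apply everything that already slipped
/// in to the channel used after the handshake, so no frame is lost.
async fn hand_over(
    mut handshake_r: mpsc::UnboundedReceiver<(Cid, Frame)>,
    mut new_s: mpsc::UnboundedSender<(Cid, Frame)>,
) {
    // Stop accepting new frames, but keep the already-buffered ones readable.
    handshake_r.close();
    while let Some(cid_frame) = handshake_r.next().await {
        new_s.send(cid_frame).await.unwrap();
    }
}

fn main() {
    let (handshake_s, handshake_r) = mpsc::unbounded::<(Cid, Frame)>();
    let (new_s, mut new_r) = mpsc::unbounded::<(Cid, Frame)>();
    // A frame that arrived during the handshake and would otherwise be dropped.
    handshake_s.unbounded_send((1, Frame)).unwrap();
    block_on(async {
        hand_over(handshake_r, new_s).await;
        assert!(new_r.next().await.is_some()); // the frame survived the hand-over
    });
}
```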
#![deny(unsafe_code)]
#![cfg_attr(test, deny(rust_2018_idioms))]
#![cfg_attr(test, deny(warnings))]
#![feature(try_trait, const_if_match)]

//! Crate to handle high level networking of messages with different
//! requirements and priorities over a number of protocols
//!
//! To start with the `veloren_network` crate you should focus on the 3
//! elementary structs [`Network`], [`Participant`] and [`Stream`].
//!
//! Say you have an application that wants to communicate with other applications
//! over a network or on the same computer. Each application instantiates the
//! struct [`Network`] once with a new [`Pid`]. The Pid is necessary to identify
//! other [`Networks`] over the network protocols (e.g. TCP, UDP).
//!
//! To connect to another application, you must know its [`ProtocolAddr`]. One
//! side will call [`connect`], the other [`connected`]. If successful, both
//! applications will now get a [`Participant`].
//!
//! This [`Participant`] represents the connection between those 2 applications
//! over the respective [`ProtocolAddr`] and with it the chosen network
//! protocol. However, messages can't be sent directly via [`Participants`];
//! instead you must open a [`Stream`] on it. Like above, one side has to call
//! [`open`], the other [`opened`]. [`Streams`] can have different priorities
//! and [`Promises`].
//!
//! You can now use the [`Stream`] to [`send`] and [`recv`] in both directions.
//! You can send all kinds of messages that implement [`serde`].
//! As the receiving side needs to know the format, it is sometimes useful to
//! always send a specific enum and then handle it with a big `match`
//! statement. This crate makes heavy use of `async`, except for [`send`],
//! which always returns directly.
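//!
//! A minimal sketch of that enum-plus-`match` pattern (the `Msg` type and the
//! handler below are purely illustrative, not part of this crate's API, and
//! assume `serde` with the `derive` feature):
//! ```rust
//! use serde::{Deserialize, Serialize};
//!
//! // Anything implementing serde's traits can be sent over a `Stream`.
//! #[derive(Serialize, Deserialize)]
//! enum Msg {
//!     Ping,
//!     Chat(String),
//! }
//!
//! fn handle(msg: Msg) {
//!     match msg {
//!         Msg::Ping => println!("got a ping"),
//!         Msg::Chat(text) => println!("chat: {}", text),
//!     }
//! }
//! ```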
//!
//! For best practices see the `examples` folder of this crate containing useful
//! code snippets; a simple client/server example is given below. Of course, due
//! to the async nature, no strict client/server separation is necessary.
//!
//! # Examples
//! ```rust
//! use async_std::task::sleep;
//! use futures::{executor::block_on, join};
//! use veloren_network::{Network, Pid, ProtocolAddr, PROMISES_CONSISTENCY, PROMISES_ORDERED};
//!
//! // Client
//! async fn client() -> std::result::Result<(), Box<dyn std::error::Error>> {
//!     sleep(std::time::Duration::from_secs(1)).await; // `connect` MUST be after `listen`
//!     let (client_network, f) = Network::new(Pid::new());
//!     std::thread::spawn(f);
//!     let server = client_network
//!         .connect(ProtocolAddr::Tcp("127.0.0.1:12345".parse().unwrap()))
//!         .await?;
//!     let mut stream = server
//!         .open(10, PROMISES_ORDERED | PROMISES_CONSISTENCY)
//!         .await?;
//!     stream.send("Hello World")?;
//!     Ok(())
//! }
//!
//! // Server
//! async fn server() -> std::result::Result<(), Box<dyn std::error::Error>> {
//!     let (server_network, f) = Network::new(Pid::new());
//!     std::thread::spawn(f);
//!     server_network
//!         .listen(ProtocolAddr::Tcp("127.0.0.1:12345".parse().unwrap()))
//!         .await?;
//!     let client = server_network.connected().await?;
//!     let mut stream = client.opened().await?;
//!     let msg: String = stream.recv().await?;
//!     println!("Got message: {}", msg);
//!     assert_eq!(msg, "Hello World");
//!     Ok(())
//! }
//!
//! fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
//!     block_on(async {
//!         let (result_c, result_s) = join!(client(), server(),);
//!         result_c?;
//!         result_s?;
//!         Ok(())
//!     })
//! }
//! ```
//!
//! [`Network`]: crate::api::Network
//! [`Networks`]: crate::api::Network
//! [`connect`]: crate::api::Network::connect
//! [`connected`]: crate::api::Network::connected
//! [`Participant`]: crate::api::Participant
//! [`Participants`]: crate::api::Participant
//! [`open`]: crate::api::Participant::open
//! [`opened`]: crate::api::Participant::opened
//! [`Stream`]: crate::api::Stream
//! [`Streams`]: crate::api::Stream
//! [`send`]: crate::api::Stream::send
//! [`recv`]: crate::api::Stream::recv
//! [`Pid`]: crate::types::Pid
//! [`ProtocolAddr`]: crate::api::ProtocolAddr
//! [`Promises`]: crate::types::Promises

mod api;
mod channel;
mod message;
#[cfg(feature = "metrics")] mod metrics;
mod participant;
mod prios;
mod protocols;
mod scheduler;
#[macro_use]
mod types;

pub use api::{
    Network, NetworkError, Participant, ParticipantError, ProtocolAddr, Stream, StreamError,
};
pub use message::MessageBuffer;
pub use types::{
    Pid, Promises, PROMISES_COMPRESSED, PROMISES_CONSISTENCY, PROMISES_ENCRYPTED,
    PROMISES_GUARANTEED_DELIVERY, PROMISES_NONE, PROMISES_ORDERED,
};