In a world that is rife with silos and walls, it is a testament to the ethos of the IETF that it enables this rare environment where we collaborate freely with our competitors to make a better internet for everyone. Additionally, the protocol would have anti-ossification characteristics to preserve its malleability for the future.
This meant strong encryption of as much of the protocol as possible. This also meant that the network that carried these packets would not be privy to most information in QUIC packet headers. Protocol features would be weighed on a cost-benefit scale: if the complexity of a protocol feature or its implementation outweighed its benefits, it would be weeded out. Maintenance is a massive hidden cost; implementers have little patience for maintaining code that does not yield concomitant benefits.
Simplicity was going to be difficult, but it was a core value. The handshake Right from the beginning, one of the big tasks of the working group was to reconcile the cryptographic handshake with other open standards, most notably TLS 1.3, which was being standardized around the same time. The working group finally arrived at the current design, which can be understood as follows.
TLS is a two-layer protocol consisting of a handshake protocol and a record layer. Traditionally, this layering has been considered to be internal to the TLS protocol. QUIC uses the TLS handshake protocol, carrying its messages inside QUIC's own frames, while QUIC's packet protection takes the place of the record layer. However, since some applications might not use TLS for security, it remains possible to use a different cryptographic handshake with QUIC. The transport draft outlines the features needed of such a protocol, and recent work by researchers has provided an existence proof of this possibility.
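To make the layering concrete, here is a minimal sketch of the kind of interface a QUIC transport might expect from a pluggable cryptographic handshake. The class and method names are illustrative assumptions, not the interface defined in the transport draft.

    # Sketch: what a transport needs from a swappable handshake protocol.
    # All names here are illustrative assumptions.
    from abc import ABC, abstractmethod

    class CryptoHandshake(ABC):
        @abstractmethod
        def handshake_bytes_to_send(self) -> bytes:
            """Opaque handshake data for the transport to carry to the peer."""

        @abstractmethod
        def receive_handshake_bytes(self, data: bytes) -> None:
            """Feed handshake data received from the peer back into the handshake."""

        @abstractmethod
        def exported_keys(self) -> dict[str, bytes]:
            """Packet protection keys for each encryption level negotiated so far."""

        @abstractmethod
        def peer_transport_parameters(self) -> bytes:
            """Authenticated transport parameters carried inside the handshake."""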
But some bits, including the packet number and key phase bits, remained in plain text. We wanted to encrypt those, too, but we seemed to be at a technical dead end. The packet number in QUIC was used both for reliability and as a nonce (a non-repeating value) for encrypting the packet.
Packet numbers were monotonically increasing to enable better loss detection and compression. Trivially decodable packet numbers were a problem because, similar to the connection ID, packet numbers could be used to correlate a connection moving across networks. The initial solution involved a number of strategies to enable a client to make random packet number jumps when it moved across networks.
But these strategies were riddled with complexity, and new weaknesses were repeatedly discovered. Additionally, exposing packet numbers might lead to their ossification by network middleboxes, limiting their evolution. Encrypting the packet number was a clean solution, but doing so required another nonce. This nonce would have to be communicated in the header, increasing header overhead of the packet. The working group realized that encrypted text is cryptographically random and could, therefore, be used as a nonce.
This meant that the packet already carried a nonce that could be used for packet number encryption! This insight, along with the recognition that packet numbers did not need the same strength of protection as the rest of the packet did, allowed the creation of a two-step encryption process: encrypt the packet first using the packet number as a nonce, and then encrypt the packet number using some of the encrypted packet as a nonce and a different key.
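A rough sketch of this two-step process, written against the Python cryptography package: the payload is sealed first with the packet number as the nonce input, and a sample of the resulting ciphertext then masks the packet number itself. The key sizes, the sample offset, and masking only the packet number bytes are simplifying assumptions, not the exact scheme QUIC specifies.

    # Sketch: two-step packet protection (payload first, then the packet number).
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    payload_key = AESGCM(os.urandom(16))   # AEAD key for the packet payload
    header_key = os.urandom(16)            # separate key for header protection
    iv = os.urandom(12)                    # static IV from the key schedule

    def protect(packet_number: int, header: bytes, plaintext: bytes) -> bytes:
        # Step 1: encrypt the payload, using the packet number as the nonce input.
        nonce = (int.from_bytes(iv, "big") ^ packet_number).to_bytes(12, "big")
        ciphertext = payload_key.encrypt(nonce, plaintext, header)

        # Step 2: the ciphertext is cryptographically random, so a sample of it
        # can serve as the "nonce" for hiding the packet number itself.
        sample = ciphertext[:16]
        mask = Cipher(algorithms.AES(header_key), modes.ECB()).encryptor().update(sample)

        pn_field = packet_number.to_bytes(4, "big")
        protected_pn = bytes(b ^ m for b, m in zip(pn_field, mask[1:5]))
        return header + protected_pn + ciphertext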
QUIC now encrypts most of the bits in the header using this strategy. Packet headers The early header format slowly evolved, eventually becoming an unwieldy header with a large number of fields and constraints. The group achieved a breakthrough when it realized that the header fields could be split into two groups.
The packets used for connection establishment needed to communicate several bits of information, but once the connection was established, only some key headers were necessary. As a result, long and short header formats were created. The long header was structured to be expressive and extensible so that connection establishment could happen with ease and could be extended in the future.
The short header was designed to be efficient, since most packets in a connection are expected to carry this header. After connection establishment, QUIC uses short packet headers that can be as small as four bytes. Connection IDs A long-standing problem in transport protocols is that connections are identified by the four tuple of client and server IP address and port number. Despite efforts, previous solutions have all eluded wide deployment. QUIC could solve this problem once and for all.
QUIC's answer was the Connection ID, an identifier for the connection that was to be used in lieu of the standard IP address and port number tuple. The Connection ID was retained through IP address changes at the client so that the connection could continue uninterrupted as a client migrated over to a new network attachment point, for instance when moving from a WiFi network to a cellular network.
After several iterations, this design was eventually replaced by the use of two variable-length Connection IDs , one in each direction chosen by the corresponding endpoint. The group also built mechanisms for both endpoints to change their Connection IDs mid-connection. This allowed a migrating client to move across networks without breaking the connection, and enabled it to change the Connection IDs while doing so to avoid any privacy leakage.
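A minimal sketch of the client-side bookkeeping this implies: keep a pool of Connection IDs issued by the peer and rotate to an unused one when the path changes, so the new path cannot be linked to the old one. The classes and fields are illustrative assumptions.

    # Sketch: rotating to a fresh connection ID on migration.
    from dataclasses import dataclass, field
    from collections import deque

    @dataclass
    class ConnectionIDState:
        current: bytes                                 # CID currently used toward the peer
        unused: deque = field(default_factory=deque)   # CIDs issued by the peer, not yet used

        def on_new_connection_id(self, cid: bytes) -> None:
            self.unused.append(cid)                    # peer supplied another identifier

        def on_path_change(self) -> bytes:
            # When migrating to a new network path, switch to an unused CID so
            # an observer cannot correlate the two paths.
            if self.unused:
                self.current = self.unused.popleft()
            return self.current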
This new design posed several challenges, especially in ensuring routing stability around Connection ID communication and changes, and these were eventually resolved. Connection migration is an exciting new feature in QUIC, and we look forward to seeing it used by applications in practice. QUIC streams provide parallelism that avoids head-of-line blocking. The final size of a stream is the amount of flow control credit it consumes: assuming that every contiguous byte on the stream was sent once, the final size is the number of bytes sent; more generally, it is one higher than the offset of the byte with the largest offset sent on the stream, or zero if no bytes were sent. This guarantees that both endpoints agree on how much flow control credit was consumed by the sender on that stream. The receiver MUST use the final size of the stream to account for all bytes sent on the stream in its connection-level flow controller. Generating these errors is not mandatory, because requiring that an endpoint generate these errors also means that the endpoint needs to maintain the final size state for closed streams, which could mean a significant state commitment.
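As a sketch of this accounting rule (the names and structure are assumptions, not an implementation of the specification):

    # Sketch: charging a stream's final size against connection-level flow control.
    class ConnectionFlowController:
        def __init__(self, max_data: int):
            self.max_data = max_data      # credit advertised to the peer
            self.consumed = 0             # bytes charged across all streams

        def account_final_size(self, final_size: int, previously_charged: int) -> None:
            # The stream may already have been charged as bytes arrived;
            # only the remainder up to the final size is added here.
            self.consumed += final_size - previously_charged
            if self.consumed > self.max_data:
                raise ValueError("FLOW_CONTROL_ERROR: peer exceeded connection credit")

    def final_size(largest_offset_sent: int | None) -> int:
        # One higher than the largest offset sent on the stream, or zero if none.
        return 0 if largest_offset_sent is None else largest_offset_sent + 1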
Controlling Concurrency An endpoint limits the cumulative number of incoming streams a peer can open. Initial limits are set in the transport parameters; separate limits apply to unidirectional and bidirectional streams. Implementations might choose to increase limits as streams are closed, to keep the number of streams available to peers roughly consistent. This signal is considered useful for debugging. The handshake (Section 7) confirms that both endpoints are willing to communicate (Section 8).
0-RTT allows a client to send application data before the handshake completes. However, 0-RTT provides no protection against replay attacks; see Section 9. A server can also send application data to a client before it receives the final cryptographic handshake messages that allow it to confirm the identity and liveness of the client. These capabilities allow an application protocol to offer the option of trading some security guarantees for reduced latency.
The use of connection IDs allows connections to migrate to a new network path; Section 9 describes mitigations for the security and privacy issues associated with migration. Connection ID Each connection possesses a set of connection identifiers, or connection IDs, each of which can identify the connection. Connection IDs are independently selected by endpoints; each endpoint selects the connection IDs that its peer uses.
Each endpoint selects connection IDs using an implementation-specific and perhaps deployment-specific method that will allow packets with that connection ID to be routed back to the endpoint and to be identified by the endpoint upon receipt. Packets with the long header include Source Connection ID and Destination Connection ID fields; these fields are used to set the connection IDs for new connections; see Section 7. The length of the Destination Connection ID field is expected to be known to endpoints. Endpoints using a load balancer that routes based on connection ID could agree with the load balancer on a fixed length for connection IDs or agree on an encoding scheme.
A fixed portion could encode an explicit length, which allows the entire connection ID to vary in length and still be used by the load balancer. A zero-length connection ID can be used when a connection ID is not needed to route to the correct endpoint. However, multiplexing connections on the same local IP address and port while using zero-length connection IDs will cause failures in the presence of peer connection migration, NAT rebinding, and client port reuse. Each connection ID has an associated sequence number; the sequence number of the initial connection ID is 0.
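A toy illustration of such an encoding, assuming a hypothetical layout in which the first byte carries the total length and a short server identifier immediately follows it:

    # Sketch: variable-length connection IDs that a load balancer can still route on.
    import os

    def make_cid(server_id: bytes, random_len: int) -> bytes:
        body = server_id + os.urandom(random_len)
        return bytes([len(body) + 1]) + body          # first byte: total CID length

    def route(cid: bytes, server_id_len: int) -> bytes:
        total_len = cid[0]                            # explicit length in the fixed portion
        assert len(cid) >= total_len                  # enough bytes for the whole CID
        return cid[1:1 + server_id_len]               # server ID immediately follows

    cid = make_cid(b"\x00\x07", random_len=5)         # routes to server 7, 8 bytes total
    assert route(cid, server_id_len=2) == b"\x00\x07"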
The connection ID that a client selects for the first Destination Connection ID field it sends and any connection ID provided by a Retry packet are not assigned sequence numbers. Connection IDs that are issued and not retired are considered active; any active connection ID is valid for use with the current connection at any time, in any packet type.
An endpoint MAY limit the total number of connection IDs issued for each connection to avoid the risk of running out of connection IDs. An endpoint MAY also limit the issuance of connection IDs to reduce the amount of per-path state it maintains, such as path validation status, as its peer might interact with it over as many paths as there are issued connection IDs.
When an endpoint selects a zero-length connection ID, a zero-length Destination Connection ID field is used in all packets sent toward that endpoint over any network path. Consuming and Retiring Connection IDs An endpoint can change the connection ID it uses for a peer to another available one at any time during the connection. An endpoint consumes connection IDs in response to a migrating peer; see Section 9.
Failure to cease using the connection IDs when requested can result in connection failures, as the issuing endpoint might be unable to continue using the connection IDs with the active connection. Matching Packets to Connections Incoming packets are classified on receipt. Packets can either be associated with an existing connection or -- for servers -- potentially create a new connection. Note that more than one connection ID can be associated with a connection; see Section 5.
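A small sketch of the retirement bookkeeping, assuming a hypothetical "retire prior to" threshold delivered alongside newly issued connection IDs; the names are illustrative:

    # Sketch: stop using connection IDs the issuer is withdrawing.
    def on_new_connection_id(active: dict[int, bytes], seq: int, cid: bytes,
                             retire_prior_to: int) -> list[int]:
        active[seq] = cid
        retired = [s for s in list(active) if s < retire_prior_to]
        for s in retired:
            del active[s]        # cease using these; the issuer may drop their routing state
        return retired           # sequence numbers to acknowledge as retired

    active_cids = {0: b"\x11" * 8, 1: b"\x22" * 8}
    to_retire = on_new_connection_id(active_cids, seq=2, cid=b"\x33" * 8, retire_prior_to=2)
    assert to_retire == [0, 1] and list(active_cids) == [2]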
An endpoint can use just destination IP and port or both source and destination addresses for identification, though this makes connections fragile as described in Section 5. A Stateless Reset allows a peer to more quickly identify when a connection becomes unusable.
Packets that are matched to an existing connection are discarded if they are inconsistent with the state of that connection. For example, packets are discarded if they indicate a different protocol version than that of the connection or if the removal of packet protection is unsuccessful once the expected keys are available. Invalid packets that lack strong integrity protection, such as Initial, Retry, or Version Negotiation packets, MAY be discarded. An endpoint that processed the contents of these packets prior to discovering an error MUST either generate a connection error or fully revert any changes made during that processing.
Client Packet Handling Valid packets sent to clients always include a Destination Connection ID that matches a value the client selects. Clients that choose to receive zero-length connection IDs can use the local address and port to identify a connection. Packets that do not match an existing connection -- based on Destination Connection ID or, if this value is zero length, local IP address and port -- are discarded.
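A sketch of this classification step, with illustrative data structures (not an implementation of the specification):

    # Sketch: matching an incoming packet to a connection.
    connections_by_cid: dict[bytes, object] = {}
    connections_by_addr: dict[tuple, object] = {}   # (local_ip, local_port) -> connection

    def match_packet(dcid: bytes, local_addr: tuple):
        if dcid:
            return connections_by_cid.get(dcid)       # normal case: route on the connection ID
        return connections_by_addr.get(local_addr)    # zero-length CID: fall back to addressing

    # Packets that match nothing are discarded by a client, or may create a
    # new connection on a server.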
Due to packet reordering or loss, a client might receive packets for a connection that are encrypted with a key it has not yet computed. The client MAY drop these packets, or it MAY buffer them in anticipation of later packets that allow it to compute the key. Server Packet Handling If a server receives a packet that indicates an unsupported version and if the packet is large enough to initiate a new connection for any supported version, the server SHOULD send a Version Negotiation packet as described in Section 6.
Servers MUST drop smaller packets that specify unsupported versions. In particular, different packet protection keys might be used for different versions. Servers that do not support a particular version are unlikely to be able to decrypt the payload of the packet or properly interpret the result. These packets are processed using the selected connection; otherwise, the server continues as described below. This commits the server to the version that the client selected. Considerations for Simple Load Balancers A server deployment could load-balance among servers using only source and destination IP addresses and ports.
Changes to the client's IP address or port could result in packets being forwarded to the wrong server. Such a server deployment could use one of the following methods for connection continuity when a client's address changes. Note that clients could choose not to use the preferred address. An application protocol can assume that an implementation of QUIC provides an interface that includes the operations described in this section.
An implementation designed for use with a specific application protocol might provide only those operations that are used by that protocol. Version Negotiation Version negotiation allows a server to indicate that it does not support the version the client used. A server sends a Version Negotiation packet in response to each packet that might initiate a new connection; see Section 5. This ensures that the server responds if there is a mutually supported version.
A server might not send a Version Negotiation packet if the datagram it receives is smaller than the minimum size specified in a different version. Sending Version Negotiation Packets If the version selected by the client is not acceptable to the server, the server responds with a Version Negotiation packet, which includes a list of versions that the server will accept. Though either the Initial packet or the Version Negotiation packet that is sent in response could be lost, the client will send new packets until it successfully receives a response or it abandons the connection attempt.
For instance, a server that is able to recognize packets as 0-RTT might choose not to send Version Negotiation packets in response to 0-RTT packets with the expectation that it will eventually receive an Initial packet. Future Standards Track specifications might change how implementations that support multiple versions of QUIC react to Version Negotiation packets received in response to an attempt to establish a connection using this version.
A client MUST discard any Version Negotiation packet if it has received and successfully processed any other packet, including an earlier Version Negotiation packet.
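Putting the server-side and client-side rules together, a rough sketch; the constants are assumptions for illustration only:

    # Sketch: server decision for unsupported versions, and the client discard rule.
    SUPPORTED_VERSIONS = [0x00000001]        # e.g. QUIC version 1
    MIN_INITIAL_DATAGRAM = 1200              # assumed minimum size to initiate a connection

    def handle_datagram(version: int, datagram_len: int):
        if version in SUPPORTED_VERSIONS:
            return "process normally"
        if datagram_len < MIN_INITIAL_DATAGRAM:
            return "drop"                     # too small to initiate a new connection
        return ("send Version Negotiation", SUPPORTED_VERSIONS)

    def client_accepts_version_negotiation(processed_any_packet: bool) -> bool:
        # Once any other packet has been processed, later Version Negotiation
        # packets for this connection are ignored.
        return not processed_any_packet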
The server's Diffie-Hellman value is found in the server config and the client provides one in its first handshake message.
Because the server config must be kept for some time (several days, in order to allow 0-RTT handshakes), which creates a leak risk, the server replies immediately upon receiving the connection with an ephemeral Diffie-Hellman value and the connection is rekeyed. The server needs only the following to process QUIC connections: the static server config value and the Diffie-Hellman private value; it does not need the private key for the certificate. The private key for the certificate need never be placed on the server.
A form of short-lived certificate can be implemented by signing short-lived server configs and installing only those configs on the servers. Handshake messages have a uniform, key-value format. The requirement that the tags be strictly monotonic also removes any ambiguity around duplicate tags. Client Handshake Initially, the client knows nothing about the server. Before a handshake can be attempted, the client will send inchoate client hello messages to elicit a server config and proof of authenticity from the server.
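As a rough illustration of such a tag-value message and the ascending-tag requirement, here is a hypothetical serializer; the exact wire layout is an assumption, not the documented format:

    # Sketch: a tag-value handshake message with strictly ascending tags.
    import struct

    def serialize(message_tag: bytes, pairs: dict[bytes, bytes]) -> bytes:
        tags = sorted(pairs)                        # strictly ascending tag order, no duplicates
        header = message_tag + struct.pack("<H2x", len(tags))
        index, values, end = b"", b"", 0
        for tag in tags:
            end += len(pairs[tag])
            index += tag + struct.pack("<I", end)   # tag plus cumulative end offset
            values += pairs[tag]
        return header + index + values

    chlo = serialize(b"CHLO", {b"SNI\x00": b"example.com", b"VER\x00": b"Q043"})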
To perform a 0-RTT handshake, the client needs to have a server config that has been verified to be authentic. The SHLO message indicates a successful handshake; it can never result from an inchoate CHLO, as an inchoate CHLO doesn't contain enough information to perform a handshake. Server Config The server config contains the serialised preferences for the server and takes the form of a handshake message with tag SCFG.
The first four bytes of the IV are taken from the key derivation and the last eight are the packet sequence number. S20P: Salsa20 with Poly1305. ORBT Orbit: an 8-byte, opaque value that identifies the strike-register. VERS Versions: the list of version tags supported by the server. The underlying QUIC packet protocol has a version negotiation mechanism. A full client hello contains the same tags as an inchoate client hello, with the addition of several others: SCID Server config ID: the ID of the server config that the client is using.
KEXS Key exchange: the tag of the key exchange algorithm to be used. SNO Server nonce (optional): an echoed server nonce, if the server has provided one. This message will contain further, encrypted tag-value pairs that specify client certificates, ChannelIDs, etc. After sending a full client hello, the client is in possession of non-forward-secure keys for the connection, since it can calculate the shared value from the server config and the public value in PUBS.
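As a rough illustration of computing the shared value and deriving key material from it, the following sketch assumes a Curve25519 exchange and an HKDF; the salt, info strings, and output sizes are illustrative assumptions and not the derivation this document specifies:

    # Sketch: shared value and non-forward-secure ("initial") key material.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    client_private = X25519PrivateKey.generate()   # client's value, public half sent in PUBS
    server_private = X25519PrivateKey.generate()   # long-lived value from the server config

    # Both sides compute the same shared value from their own private value
    # and the peer's public value.
    shared = client_private.exchange(server_private.public_key())

    material = HKDF(
        algorithm=hashes.SHA256(),
        length=2 * 16 + 2 * 4,                     # two 16-byte keys, two 4-byte IV prefixes
        salt=b"client nonce || server nonce",      # placeholder for the real nonce inputs
        info=b"key expansion",                     # placeholder label
    ).derive(shared)
    client_write_key, server_write_key = material[:16], material[16:32]
    client_iv_prefix, server_iv_prefix = material[32:36], material[36:40]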
For details of the key derivation, see below. These keys are called the initial keys, as opposed to the forward-secure keys that come later, and the client should encrypt future packets with these keys. It should also configure the packet processing to accept packets encrypted with these keys in a latching fashion: once an encrypted packet has been received, no further unencrypted packets should be accepted.
At this point, the client is free to start sending application data to the server. Retransmission of data occurs at a layer below the handshake layer; however, that layer must still be aware of the change of encryption. New packets must be transmitted using the initial keys but, if the client hello needs to be retransmitted, then it must be retransmitted in the clear. The packet sending layer must be aware of which security level was originally used to send any given packet and be careful not to use a higher security level unless the peer has acknowledged possession of those keys.
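A sketch of the latching rule and the per-packet security-level tracking described here; the level names and classes are illustrative assumptions:

    # Sketch: latching acceptance of encrypted packets, and per-packet level tracking.
    from enum import IntEnum

    class Level(IntEnum):
        CLEARTEXT = 0       # unencrypted, used only for the client hello exchange
        INITIAL = 1         # non-forward-secure keys from the server config
        FORWARD_SECURE = 2  # keys derived after the ephemeral server value arrives

    class Receiver:
        def __init__(self):
            self.min_accepted = Level.CLEARTEXT

        def on_packet(self, level: Level) -> bool:
            # Latching: once an encrypted packet is seen, packets at a lower
            # level (including unencrypted ones) are no longer accepted.
            if level < self.min_accepted:
                return False
            self.min_accepted = max(self.min_accepted, level)
            return True

    class Sender:
        def __init__(self):
            self.original_level = {}        # packet number -> level first used

        def send(self, packet_number: int, level: Level) -> None:
            self.original_level[packet_number] = level

        def retransmit(self, packet_number: int) -> Level:
            # Retransmissions keep their original level: the client hello stays
            # in the clear, and nothing is promoted to a level the peer has not
            # acknowledged possessing.
            return self.original_level[packet_number]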
The server will either accept or reject the handshake.