Post Sockets: An Abstract Programming Interface for the Transport Layer

TAPS Working Group                                             Internet-Draft

   B. Trammell, ETH Zurich (ietf@trammell.ch)
   C. Perkins, University of Glasgow (csp@csperkins.org)
   T. Pauly, Apple Inc. (tpauly@apple.com)
   M. Kuehlewind, ETH Zurich (mirja.kuehlewind@tik.ee.ethz.ch)

Abstract

This document describes Post Sockets, an asynchronous abstract programming
interface for the atomic transmission of messages in an inherently multipath
environment. Post replaces connections with long-lived associations between
endpoints, with the possibility to cache cryptographic state in order to
reduce amortized connection latency. We present this abstract interface as an
illustration of what is possible with present developments in transport
protocols when freed from the strictures of the current sockets API.

The BSD Unix Sockets API's SOCK_STREAM abstraction was a revolution in
simplicity: by bringing network sockets into the UNIX programming model, it
allowed anyone who knew how to write programs that dealt with
sequential-access files to also write network applications. It would not be an
overstatement to say that this simple API is the reason the Internet won the
protocol wars of the 1980s. SOCK_STREAM is tied to the Transmission Control
Protocol (TCP), specified in 1981. TCP has scaled remarkably well over the
past three and a half decades, but its total ubiquity has hidden an
uncomfortable fact: the network is not really a file, and stream abstractions
are too simplistic for many modern application programming models.

In the meantime, the nature of Internet access, and the variety of Internet
transport protocols, is evolving. The challenges that new protocols and access
paradigms present to the sockets API and to programming models based on them
inspire the design elements of a new approach.

Many end-user devices are connected to the Internet via multiple interfaces,
which suggests it is time to promote the paths by which two endpoints are
connected to each other to a first-order object. While implicit multipath
communication is available for these multihomed nodes in the present Internet
architecture with the Multipath TCP extension (MPTCP) , MPTCP was
specifically designed to hide multipath communication from the application for
purposes of compatibility. Since many multihomed nodes are connected to the
Internet through access paths with widely different properties with respect to
bandwidth, latency and cost, adding explicit path control to MPTCP’s API would
be useful in many situations. Applications also need control over cooperation
with path elements via mechanisms such as that proposed by the Path Layer UDP
Substrate (PLUS) effort.

Another trend straining the traditional layering of the transport stack
associated with the SOCK_STREAM interface is the widespread interest in
ubiquitous deployment of encryption to guarantee confidentiality,
authenticity, and integrity, in the face of pervasive surveillance
. Layering the most widely deployed encryption technology,
Transport Layer Security (TLS), strictly atop TCP (i.e., via a TLS library
such as OpenSSL that uses the sockets API) requires the encryption-layer
handshake to happen after the transport-layer handshake, which increases
connection setup latency on the order of one or two round-trip times, an
unacceptable delay for many applications. Integrating cryptographic state
setup and maintenance into the path abstraction naturally complements efforts
in new protocols (e.g. QUIC) to
avoid the latency costs of this strict layering.

To meet these challenges, we present the Post Sockets Application Programming
Interface (API), described in detail in this work. Post is designed to be
language, transport protocol, and architecture independent, allowing
applications to be written to a common abstract interface, easily ported among
different platforms, and used even in environments where transport protocol
selection may be done dynamically, as proposed in the IETF’s Transport Services
working group.

Post replaces the traditional SOCK_STREAM abstraction with a Message
abstraction, which can be seen as a generalization of the Stream Control
Transmission Protocol’s SOCK_SEQPACKET service. Messages are sent
and received on Carriers, which logically group Messages for transmission and
reception. For backward compatibility, these Carriers can also be opened as
Streams, presenting a file-like interface to the network as with SOCK_STREAM.

Post replaces the notions of a socket address and connected
socket with an Association with a remote endpoint via a set of Paths.
Implementation and wire format for transport protocol(s) implementing the Post
API are explicitly out of scope for this work; these abstractions need not map
directly to implementation-level concepts, and indeed with various amounts of
shimming and glue could be implemented with varying success atop any
sufficiently flexible transport protocol.

The key features of Post as compared with the existing sockets API are:

o  Explicit Message orientation, with framing and atomicity guarantees for
   Message transmission.

o  Asynchronous reception, allowing all receiver-side interactions to be
   event-driven.

o  Explicit support for multistreaming and multipath transport protocols and
   network architectures.

o  Long-lived Associations, whose lifetimes may not be bound to underlying
   transport connections. This allows associations to cache state and
   cryptographic key material to enable fast resumption of communication, and
   allows the implementation of the API to take care of connection
   establishment mechanics such as connection racing and peer-to-peer
   rendezvous.

o  Transport protocol stack independence, allowing applications to be written
   in terms of the semantics best for the application's own design, separate
   from the protocol(s) used on the wire to achieve them. This enables
   applications written to a single API to make use of transport protocols in
   terms of the features they provide.

This work is the synthesis of many years of Internet transport protocol
research and development. It is inspired by concepts from the Stream Control
Transmission Protocol (SCTP) , TCP Minion ,
and MinimaLT, among other transport protocol
modernization efforts. We present Post Sockets as an illustration of what is
possible with present developments in transport protocols when freed from the
strictures of the current sockets API. While much of the work for building
parts of the protocols needed to implement Post are already ongoing in other
IETF working groups (e.g. MPTCP, QUIC, TLS), we argue that an abstract
programming interface unifying access to all these efforts is necessary to fully
exploit their potential.

Post is based on a small set of abstractions, centered around a Message Carrier
as the entry point for an application to the networking API.
The relationships among them are shown in the figure and detailed in this
section.

A Message Carrier (or simply Carrier) is a transport protocol stack-independent
interface for sending and receiving messages between an
application and a remote endpoint; it is roughly analogous to a socket in the
present sockets API.Sending a Message over a Carrier is driven by the application, while receipt
is driven by the arrival of the last packet that allows the Message to be
assembled, decrypted, and passed to the application. Receipt is therefore
asynchronous; given the different models for asynchronous I/O and concurrency
supported by different platforms, it may be implemented in any number of ways.
The abstract API provides only a way for the application to register
how it wants to handle incoming messages.

All the Messages sent to a Message Carrier will be received on the
corresponding Message Carrier at the remote endpoint, though not necessarily
reliably or in order, depending on Message properties and the underlying
transport protocol stack.

A Message Carrier that is backed by current transport protocol stack state
(such as a TCP connection) is said to be "active": messages
can be sent and received over it. A Message Carrier can also be "dormant":
there is long-term state associated with it via the underlying Association,
and it may be able to be reactivated, but messages cannot
be sent and received immediately.

If supported by the underlying transport protocol stack, a Message Carrier may
be forked: creating a new Message Carrier associated with a new Message
Carrier at the same remote endpoint. The semantics of the usage of multiple
Message Carriers based on the same Association are application-specific. When a
Message Carrier is forked, its corresponding Message Carrier at the remote
endpoint receives a fork request, which it must accept in order to fully
establish the new carrier. Multiple message carriers between endpoints are
implemented differently by different transport protocol stacks, either using
multiple separate transport-layer connections, or using multiple streams of
multistreaming transport protocols.

To exchange messages with a given remote endpoint, an application may initiate
a Message Carrier given its remote and local
identities; this is the equivalent of an active open. There are several special
cases of Message Carriers, as well, supporting different initiation and
interaction patterns, defined in the subsections below.

A Listener is a special case of Message Carrier which only responds to
requests to create a new Carrier from a remote endpoint, analogous to a server
or listening socket in the present sockets API. Instead of being bound to a
specific remote endpoint, it is bound only to a local identity; however, its
interface for accepting fork requests is identical to that for fully fledged
Message Carriers.A Source is a special case of Message Carrier over which messages can only be
sent, intended for unidirectional applications such as multicast transmitters.
Sources cannot be forked, and need not accept forks.A Sink is a special case of Message Carrier over which messages can only be
received, intended for unidirectional applications such as multicast
receivers. Sinks cannot be forked, and need not accept forks.A Responder is a special case of Message Carrier which may receive messages
from many remote sources, for cases in which an application will only ever
send Messages in reply back to the source from which a Message was received.
This is a common implementation pattern for servers in client-server
applications. A Responder’s receiver gets a Message, as well as a Source to
send replies to. Responders cannot be forked, and need not accept forks.

A Message Carrier may be irreversibly morphed into a Stream, in order to provide
a strictly ordered, reliable service as with SOCK_STREAM. Morphing a Message
Carrier into a Stream should return a “file-like object” as appropriate for the
platform implementing the API. Typically, both ends of a communication using a
stream service will morph their respective Message Carriers independently before
sending any Messages.

Writing a byte to a Stream will cause it to be received by the remote, in
order, or will cause an error condition and termination of the stream if the
byte cannot be delivered. Due to the strong sequential dependence on a stream,
streams must always be reliable and ordered. A Message Carrier may only be
morphed to a Stream if it uses a transport protocol stack that provides
reliable, ordered service, and only before it is used to send a Message.

A Message is an atomic unit of communication between applications. A Message
that cannot be delivered in its entirety within the constraints of the network
connectivity and the requirements of the application is not delivered at all.

Messages can represent both relatively small structures, such as requests in a
request/response protocol such as HTTP; as well as relatively large
structures, such as files of arbitrary size in a filesystem.In the general case, there is no mapping between a Message and packets sent by
the underlying protocol stack on the wire: the transport protocol may freely
segment messages and/or combine messages into packets. However, a message may be
marked as immediate, which will cause it to be sent in a single packet, if it
will fit.

This implies that both the sending and receiving endpoints, whether in the
application layer or the transport layer, must guarantee storage for the full
size of a Message.

Messages are sent over and received from Message Carriers.

On sending, Messages have properties that allow the application to specify its
requirements with respect to reliability, ordering, priority, idempotence, and
immediacy; these are described in detail below. Messages may also have arbitrary
properties which provide additional information to the underlying transport
protocol stack on how they should be handled, in a protocol-specific way. These
stacks may also deliver or set properties on received messages, but in the
general case a received message contains only a sequence of ordered bytes.

A Message may have a "lifetime" – a wallclock duration before which the
Message must be available to the application layer at the remote end. If a
lifetime cannot be met, the Message is discarded as soon as possible. Messages
without lifetimes are sent reliably if supported by the transport protocol
stack. Lifetimes are also used to prioritize Message delivery.

There is no guarantee that a Message will not be delivered after the end of
its lifetime; for example, a Message delivered over a strictly reliable
transport will be delivered regardless of its lifetime. Depending on the
transport protocol stack used to transmit the message, these lifetimes may
also be signaled to path elements by the underlying transport, so that path
elements that realize a lifetime cannot be met can discard frames containing
the Messages instead of forwarding them.

Messages have a "niceness" – a priority among other messages sent over the
same Message Carrier in an unbounded hierarchy most naturally represented as a
non-negative integer. By default, Messages are in niceness class 0, or highest
priority. Niceness class 1 Messages will yield to niceness class 0 Messages
sent over the same Carrier, class 2 to class 1, and so on. Niceness may be
translated to a priority signal for exposure to path elements (e.g. DSCP
codepoint) to allow prioritization along the path as well as at the sender and
receiver. This inversion of normal schemes for expressing priority has a
convenient property: priority increases as both niceness and lifetime
decrease. A Message may have both a niceness and a lifetime – Messages with
higher niceness classes will yield to lower classes if resource constraints
mean only one can meet the lifetime.

A Message may have "antecedents" – other Messages on which it
depends, which must be delivered before it (the “successor”) is delivered.
The sending transport uses deadlines, niceness, and antecedents, along with
information about the properties of the Paths available, to determine when to
send which Message down which Path.

A sending application may mark a Message as "idempotent" to signal to the
underlying transport protocol stack that its application semantics make it
safe to send in situations that may cause it to be received more than once
(i.e., for 0-RTT session resumption as in TCP Fast Open, TLS 1.3, and QUIC).

A sending application may mark a Message as "immediate" to signal to the
underlying transport protocol stack that its application semantics require it to
be placed in a single packet, on its own, instead of waiting to be combined with
other messages or parts thereof (i.e., for media transports and interactive
sessions with small messages).

Senders may also be asynchronously notified of three events on Messages they
have sent: that the Message has been transmitted, that the Message has been
acknowledged by the receiver, or that the Message has expired before
transmission/acknowledgment. Not all transport protocol stacks will support
all of these events.

An Association contains the long-term state necessary to support
communications between a Local and a Remote
endpoint, such as cryptographic session resumption parameters or rendezvous
information; information about the policies constraining the selection of
transport protocols and local interfaces to create Transients
to carry Messages; and information about the paths through the
network available between them.

All Message Carriers are bound to an Association. New Message Carriers will
reuse an Association if they can be carried from the same Local to the same
Remote over the same Paths; this re-use of an Association may imply the
creation of a new Transient.

A Remote represents information required to establish and maintain a
connection with the far end of an Association: name(s), address(es), and
transport protocol parameters that can be used to establish a Transient;
transport protocols to use; information about public keys or certificate
authorities used to identify the remote on connection establishment; and so
on. Each Association is associated with a single Remote, either explicitly by
the application (when created by the initiation of a Message Carrier) or a
Listener (when created by forking a Message Carrier on passive open).

A Remote may be resolved, which results in zero or more Remotes with more
specific information. For example, an application may want to establish a
connection to a website identified by a URL https://www.example.com. This URL
would be wrapped in a Remote and passed to a call to initiate a Message
Carrier. The first pass resolution might parse the URL, decomposing it into a
name, a transport port, and a transport protocol to try connecting with. A
second pass resolution would then look up network-layer addresses associated
with that name through DNS, and store any certificates available from DANE.
Once a Remote has been resolved to the point that a transport protocol stack
can use it to create a Transient, it is considered fully resolved.

A Local represents all the information about the local endpoint necessary to
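This two-pass resolution might look like the following sketch. The Remote
fields, the Resolve signature, and the injected lookup function are all
hypothetical; certificate retrieval via DANE is omitted.

```go
package main

import (
	"fmt"
	"net/url"
)

// Remote sketches resolution state; each Resolve pass returns zero or
// more Remotes carrying more specific information.
type Remote struct {
	URL      string
	Name     string
	Port     string
	Protocol string
	Addrs    []string // non-empty once fully resolved
}

// Resolve performs one resolution pass. The first pass parses the URL
// into a name, port, and transport protocol to try; the second pass
// looks up network-layer addresses for the name (lookup stands in for
// DNS).
func (r Remote) Resolve(lookup func(name string) []string) []Remote {
	if r.Name == "" { // first pass: parse the URL
		u, err := url.Parse(r.URL)
		if err != nil {
			return nil
		}
		port := u.Port()
		if port == "" && u.Scheme == "https" {
			port = "443"
		}
		return []Remote{{Name: u.Hostname(), Port: port, Protocol: "tcp"}}
	}
	var out []Remote // second pass: one Remote per address
	for _, a := range lookup(r.Name) {
		rr := r
		rr.Addrs = []string{a}
		out = append(out, rr)
	}
	return out
}

func main() {
	lookup := func(string) []string { return []string{"192.0.2.1", "2001:db8::1"} }
	r := Remote{URL: "https://www.example.com"}
	for _, r1 := range r.Resolve(nil) {
		for _, r2 := range r1.Resolve(lookup) {
			fmt.Println(r2.Name, r2.Port, r2.Protocol, r2.Addrs[0])
		}
	}
}
```

Each pass narrows the Remote until a transport protocol stack can use it to
create a Transient.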
establish an Association or a Listener: interface, port, and transport
protocol stack information, as well as certificates and associated private
keys to use to identify this endpoint.

A Transient represents a binding between a Message Carrier and the instance of
the transport protocol stack that implements it. As an Association contains
long-term state for communications between two endpoints, a Transient contains
ephemeral state for a single transport protocol over a single Path at a given
point in time.

A Message Carrier may be served by multiple Transients at once, e.g. when
implementing multipath communication such that the separate paths are exposed to
the API by the underlying transport protocol stack. Each Transient serves only
one Message Carrier, although multiple Transients may share the same underlying
protocol stack; e.g. when multiplexing Carriers over streams in a multistreaming
protocol.

Transients are generally not exposed by the API to the application, though
they may be accessible for debugging and logging purposes.

A Path represents information about a single path through the network used by an
Association, in terms of source and destination network and transport layer
addresses within an addressing context, and the provisioning domain
of the local interface. This information may be learned through a resolution,
discovery, or rendezvous process (e.g. DNS, ICE), by measurements taken by the
transport protocol stack, or by some other path information discovery mechanism.
It is used by the transport protocol stack to maintain and/or (re-)establish
communications for the Association.

The set of available properties is a function of the transport protocol stacks
in use by an association. However, the following core properties are generally
useful for applications and transport layer protocols to choose among paths
for specific Messages:

o  Maximum Transmission Unit (MTU): the maximum size of a Message's payload
   (subtracting transport, network, and link layer overhead) which will likely
   fit into a single frame. Derived from signals sent by path elements, where
   available, and/or path MTU discovery processes run by the transport layer.

o  Latency Expectation: expected one-way delay along the Path. Generally
   provided by inline measurements performed by the transport layer, as
   opposed to signaled by path elements.

o  Loss Probability Expectation: expected probability of a loss of any
   given single frame along the Path. Generally provided by inline
   measurements performed by the transport layer, as opposed to signaled by
   path elements.

o  Available Data Rate Expectation: expected maximum data rate along the
   Path. May be derived from passive measurements by the transport layer, or
   from signals from path elements.

o  Reserved Data Rate: committed, reserved data rate for the given
   Association along the Path. Requires a bandwidth reservation service in
   the underlying transport protocol stack.

o  Path Element Membership: identifiers for some or all nodes along the
   path, depending on the capabilities of the underlying network layer
   protocol to provide this.

Path properties are generally read-only. MTU is a property of the underlying
link-layer technology on each link in the path; latency, loss, and rate
expectations are dynamic properties of the network configuration and network
traffic conditions; path element membership is a function of network topology.
In an explicitly multipath architecture, application and transport layer
requirements can be met by having multiple paths with different properties to
select from. Transport protocol stacks can also provide signaling to devices
along the path, but this signaling is derived from information provided to the
Message abstraction.

A Local and a Remote are not necessarily enough to establish a Message Carrier
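To make the use of these properties concrete, a transport might select among
candidate Paths for a single-frame Message as in the following sketch. The
Path struct, its fields, and the selection policy are illustrative
assumptions only.

```go
package main

import "fmt"

// Path carries two of the core properties described above.
type Path struct {
	Name      string
	MTU       int     // bytes available for a Message payload per frame
	LatencyMs float64 // expected one-way delay
}

// pickPath sketches one possible policy: among Paths whose MTU can
// carry the Message in a single frame, choose the one with the lowest
// expected latency.
func pickPath(paths []Path, msgLen int) (Path, bool) {
	var best Path
	found := false
	for _, p := range paths {
		if p.MTU < msgLen {
			continue // Message would not fit in a single frame
		}
		if !found || p.LatencyMs < best.LatencyMs {
			best, found = p, true
		}
	}
	return best, found
}

func main() {
	paths := []Path{
		{Name: "lte", MTU: 1400, LatencyMs: 45},
		{Name: "wifi", MTU: 1480, LatencyMs: 10},
		{Name: "vpn", MTU: 1200, LatencyMs: 12},
	}
	if p, ok := pickPath(paths, 1300); ok {
		fmt.Println("selected:", p.Name) // selected: wifi
	}
}
```

A real implementation would also weigh loss expectation, data rate, and
policy constraints such as cost.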
between two endpoints. For instance, an application may require or prefer
certain transport features (see ) in the transport
protocol stacks used by the Transients underlying the Carrier; it may also
prefer Paths over one interface to those over another (e.g. WiFi access over LTE
when roaming on a foreign LTE network, due to cost). These policies are
expressed in a Policy Context bound to an Association. Multiple policy contexts
may be active at once; e.g. a system Policy Context expressing administrative
preferences about interface and protocol selection, an application Policy
Context expressing transport feature information. The expression of policy
contexts and the resolution of conflicts among Policy Contexts is currently
implementation-specific; note that these are equivalent to the Policy API in the
NEAT architecture.

We now turn to the design of an abstract programming interface to provide a
simple interface to Post’s abstractions, constrained by the following design
principles:

o  Flexibility is paramount. So is simplicity. Applications must be
   given as many controls and as much information as they may need, but they
   must be able to ignore controls and information irrelevant to their
   operation. This implies that the "default" interface must be no more
   complicated than BSD sockets, and must do something reasonable.

o  Reception is an inherently asynchronous activity. While the API is
   designed to be as platform-independent as possible, one key insight it is
   based on is that a Message receiver's behavior in a packet-switched
   network is inherently asynchronous, driven by the receipt of packets, and
   that this asynchronicity must be reflected in the API. The actual
   implementation of receive and event handling will need to be aligned to
   the method a given platform provides for asynchronous I/O.

o  A new API cannot be bound to a single transport protocol and expect
   wide deployment. As the API is transport-independent and may support
   runtime transport selection, it must impose the minimum possible set of
   constraints on its underlying transports, though some API features may
   require underlying transport features to work optimally. It must be
   possible to implement Post over vanilla TCP in the present Internet
   architecture.

The API we design from these principles is centered around a Carrier, which
can be created actively via initiate() or passively via a listen(); the latter
creates a Listener from which new Carriers can be accept()ed. Messages may be
created explicitly and passed to this Carrier, or implicitly through a
simplified interface which uses default message properties (reliable transport
without priority or deadline, which guarantees ordered delivery over a single
Carrier when the underlying transport protocol stack supports it).

The current state of API development is illustrated as a set of interfaces and
function prototypes in the Go programming language; future
revisions of this document will give a more abstract specification of the
API as development completes.

Here, we illustrate the usage of the API for common
connection patterns. Note that error handling is ignored in these
illustrations for ease of reading.

Here's an example client-server application. The server echoes messages. The
client sends a message and prints what it receives.

The client connects, sends a message, and sets up a receiver
to print messages received in response. The carrier is inactive after the
Initiate() call; the Send() call blocks until the carrier can be activated.

The server creates a Listener, which accepts Carriers and passes them to a
server routine. The server echoes the content of each message it receives.

The Responder allows the server to be significantly simplified.

The fundamental design of a client need not change at all for happy eyeballs
(selection of multiple potential protocol stacks through connection
racing); this is handled by the Post Sockets implementation automatically. If
this connection racing is to use 0-RTT data (i.e., as provided by TCP Fast
Open), the client must mark the outgoing message as idempotent.

In the client-server examples shown above, the Remote given to the Initiate call
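The client-server echo pattern described above can be sketched end-to-end as
follows. The method names (Initiate, Ready, Send) follow the text, but the
signatures are assumptions, and an in-process loopback pair stands in for the
transport protocol stack and the Listener/Accept machinery.

```go
package main

import "fmt"

// Message is an atomic unit of communication.
type Message []byte

// Carrier sketches the Message Carrier surface used in the examples.
type Carrier struct{ out, in chan Message }

// Send transmits one Message atomically (application-driven).
func (c *Carrier) Send(m Message) { c.out <- m }

// Ready registers the asynchronous receive handler.
func (c *Carrier) Ready(handle func(Message)) {
	go func() {
		for m := range c.in {
			handle(m)
		}
	}()
}

// Initiate stands in for an active open: instead of connecting over a
// network, it returns both ends of an in-process Carrier pair.
func Initiate() (client, server *Carrier) {
	a, b := make(chan Message, 4), make(chan Message, 4)
	return &Carrier{out: a, in: b}, &Carrier{out: b, in: a}
}

func main() {
	client, server := Initiate()

	// Server: echo each received Message back to its sender.
	server.Ready(func(m Message) { server.Send(m) })

	// Client: send one Message and print the reply.
	reply := make(chan Message, 1)
	client.Ready(func(m Message) { reply <- m })
	client.Send(Message("Hello!"))
	fmt.Printf("%s\n", <-reply)
}
```

With a Responder in place of the accept loop, the server side reduces to the
echo handler alone.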
refers to the name and port of the server to connect to. This need not be the
case, however; a Remote may also refer to an identity and a rendezvous point for
rendezvous as in ICE. Here, each peer does its own Initiate call
simultaneously, and the result on each side is a Carrier attached to an
appropriate Association.

A multicast receiver is implemented using a Sink attached to a Local
encapsulating a multicast address on which to receive multicast datagrams. The
following example prints messages received on the multicast address forever.

Here we discuss an incomplete list of API implementation considerations that
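A sketch of the shape such a receiver might take follows; NewSink, the Local
string, and the handler signature are all assumptions, and a channel stands
in for the multicast datagram machinery so the sketch is self-contained.

```go
package main

import "fmt"

type Message []byte

// Sink sketches a receive-only Message Carrier bound to a Local that
// encapsulates a multicast address.
type Sink struct{ in chan Message }

// NewSink stands in for creating a Sink from a Local such as
// "udp://233.252.0.1:5004" (an address in the multicast documentation
// range).
func NewSink(local string) *Sink { return &Sink{in: make(chan Message, 4)} }

// Ready invokes the handler for each datagram received, until the
// Sink is closed.
func (s *Sink) Ready(handle func(Message)) {
	for m := range s.in {
		handle(m)
	}
}

func main() {
	sink := NewSink("udp://233.252.0.1:5004")
	go func() { // stand-in for arriving multicast datagrams
		sink.in <- Message("status: ok")
		close(sink.in)
	}()
	sink.Ready(func(m Message) { fmt.Printf("%s\n", m) })
}
```

Sinks cannot be forked and need not accept forks, so no Listener is involved.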
have arisen from experimentation with the prototype.

An obvious goal of Post Sockets is interoperability with non-Post Sockets
endpoints: a Post Sockets endpoint using a given protocol stack must be able to
communicate with another endpoint using the same protocol stack, but not using
Post Sockets. This implies that the underlying transport protocol stack must
support object framing, in order to delimit Messages carried by protocol stacks
that are not themselves message-oriented.

Another goal of Post Sockets is to work over unmodified TCP. We could simply
define a Message Carrier over TCP to support only stream morphing, but this
would fall far short of our goal of transport independence. Another approach is
to recognize that almost every protocol using TCP already has its own message
delimiters, and to allow the receiver of a Message to provide a deframing
primitive to the API. Experimentation with the best way to achieve this within
Post Sockets is underway.

Ideally, Messages can be of infinite size. However, protocol stacks and protocol
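One shape such an application-supplied deframing primitive might take is a
callback that scans buffered bytes and reports how many constitute the next
complete Message. The Deframer type and its contract are assumptions;
newline-delimited messages stand in for a real protocol's delimiters.

```go
package main

import (
	"bytes"
	"fmt"
)

// Deframer is an application-supplied primitive: given buffered bytes
// from a TCP stream, it returns the length of the next complete
// Message, or 0 if more bytes are needed. (Hypothetical contract.)
type Deframer func(buf []byte) int

// newlineDeframer delimits Messages on '\n'.
func newlineDeframer(buf []byte) int {
	if i := bytes.IndexByte(buf, '\n'); i >= 0 {
		return i + 1 // include the delimiter
	}
	return 0
}

// deliver repeatedly applies the deframer to a receive buffer,
// emitting complete Messages and keeping any trailing partial bytes.
func deliver(buf []byte, d Deframer, emit func([]byte)) []byte {
	for {
		n := d(buf)
		if n == 0 {
			return buf // incomplete Message: wait for more bytes
		}
		emit(buf[:n])
		buf = buf[n:]
	}
}

func main() {
	rest := deliver([]byte("one\ntwo\nthr"), newlineDeframer,
		func(m []byte) { fmt.Printf("message: %q\n", m) })
	fmt.Printf("buffered: %q\n", rest)
}
```

This lets a Post Sockets implementation present a Message interface over
unmodified TCP while remaining interoperable with existing peers.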
stack implementations may impose their own limits on message sizing. For
example, SCTP and TLS impose record size
limitations of 64kB and 16kB, respectively. Message sizes may also be limited by
the available buffer at the receiver, since a Message must be fully assembled by
the transport layer before it can be passed on to the application layer. Since
not every transport protocol stack implements the signaling necessary to
negotiate or expose message size limitations, these are currently configured out
of band, and are probably best exposed through the policy context.

A truly infinite message service – e.g. large file transfer where both
endpoints have committed persistent storage to the message – is probably best
realized as a layer above Post Sockets, and may be added as a new type of
Message Carrier to a future revision of this document.

Regardless of how asynchronous reception is implemented, it is important for an
application to be able to apply receiver backpressure, to allow the protocol
stack to perform receiver flow control. Depending on how asynchronous I/O works
in the platform, this could be implemented by having a maximum number of
concurrent receive callbacks, for example.

Many thanks to Laurent Chuat and Jason Lee at the Network Security Group at ETH
Zurich for contributions to the initial design of Post Sockets. Thanks to Joe
Hildebrand, Martin Thomson, and Michael Welzl for their feedback, as well as the
attendees of the Post Sockets workshop in February 2017 in Zurich for the
discussions, which have improved the design described herein.

This work is partially supported by the European Commission under Horizon 2020
grant agreement no. 688421 Measurement and Architecture for a Middleboxed
Internet (MAMI), and by the Swiss State Secretariat for Education, Research, and
Innovation under contract no. 15.0268. This support does not imply endorsement.Services provided by IETF transport protocols and congestion control mechanismsThis document describes, surveys, and classifies the protocol mechanisms provided by existing IETF protocols, as background for determining a common set of transport services. It examines the Transmission Control Protocol (TCP), Multipath TCP, the Stream Control Transmission Protocol (SCTP), the User Datagram Protocol (UDP), UDP-Lite, the Datagram Congestion Control Protocol (DCCP), the Internet Control Message Protocol (ICMP), the Realtime Transport Protocol (RTP), File Delivery over Unidirectional Transport/ Asynchronous Layered Coding Reliable Multicast (FLUTE/ALC), and NACK- Oriented Reliable Multicast (NORM), Transport Layer Security (TLS), Datagram TLS (DTLS), and the Hypertext Transport Protocol (HTTP), when HTTP is used as a pseudotransport. This survey provides background for the definition of transport services within the TAPS working group.Transmission Control ProtocolStream Control Transmission ProtocolThis document obsoletes RFC 2960 and RFC 3309. It describes the Stream Control Transmission Protocol (SCTP). SCTP is designed to transport Public Switched Telephone Network (PSTN) signaling messages over IP networks, but is capable of broader applications.SCTP is a reliable transport protocol operating on top of a connectionless packet network such as IP. It offers the following services to its users:-- acknowledged error-free non-duplicated transfer of user data,-- data fragmentation to conform to discovered path MTU size,-- sequenced delivery of user messages within multiple streams, with an option for order-of-arrival delivery of individual user messages,-- optional bundling of multiple user messages into a single SCTP packet, and-- network-level fault tolerance through supporting of multi-homing at either or both ends of an association. 
The design of SCTP includes appropriate congestion avoidance behavior and resistance to flooding and masquerade attacks. [STANDARDS-TRACK]Interactive Connectivity Establishment (ICE): A Protocol for Network Address Translator (NAT) Traversal for Offer/Answer ProtocolsThis document describes a protocol for Network Address Translator (NAT) traversal for UDP-based multimedia sessions established with the offer/answer model. This protocol is called Interactive Connectivity Establishment (ICE). ICE makes use of the Session Traversal Utilities for NAT (STUN) protocol and its extension, Traversal Using Relay NAT (TURN). ICE can be used by any protocol utilizing the offer/answer model, such as the Session Initiation Protocol (SIP). [STANDARDS-TRACK]Happy Eyeballs: Success with Dual-Stack HostsWhen a server's IPv4 path and protocol are working, but the server's IPv6 path and protocol are not working, a dual-stack client application experiences significant connection delay compared to an IPv4-only client. This is undesirable because it causes the dual- stack client to have a worse user experience. This document specifies requirements for algorithms that reduce this user-visible delay and provides an algorithm. [STANDARDS-TRACK]TCP Extensions for Multipath Operation with Multiple AddressesTCP/IP communication is currently restricted to a single path per connection, yet multiple paths often exist between peers. The simultaneous use of these multiple paths for a TCP/IP session would improve resource usage within the network and, thus, improve user experience through higher throughput and improved resilience to network failure.Multipath TCP provides the ability to simultaneously use multiple paths between peers. This document presents a set of extensions to traditional TCP to support multipath operation. 
The protocol offers the same type of service to applications as TCP (i.e., reliable bytestream), and it provides the components necessary to establish and use multiple TCP flows across potentially disjoint paths. This document defines an Experimental Protocol for the Internet community.Pervasive Monitoring Is an AttackPervasive monitoring is a technical attack that should be mitigated in the design of IETF protocols, where possible.TCP Fast OpenThis document describes an experimental TCP mechanism called TCP Fast Open (TFO). TFO allows data to be carried in the SYN and SYN-ACK packets and consumed by the receiving end during the initial connection handshake, and saves up to one full round-trip time (RTT) compared to the standard TCP, which requires a three-way handshake (3WHS) to complete before data can be exchanged. However, TFO deviates from the standard TCP semantics, since the data in the SYN could be replayed to an application in some rare circumstances. Applications should not use TFO unless they can tolerate this issue, as detailed in the Applicability section.Multiple Provisioning Domain ArchitectureThis document is a product of the work of the Multiple Interfaces Architecture Design team. It outlines a solution framework for some of the issues experienced by nodes that can be attached to multiple networks simultaneously. The framework defines the concept of a Provisioning Domain (PvD), which is a consistent set of network configuration information. PvD-aware nodes learn PvD-specific information from the networks they are attached to and/or other sources. PvDs are used to enable separation and configuration consistency in the presence of multiple concurrent connections.QUIC: A UDP-Based Multiplexed and Secure TransportQUIC is a multiplexed and secure transport protocol that runs on top of UDP. QUIC builds on past transport experience, and implements mechanisms that make it useful as a modern general-purpose transport protocol. 
Using UDP as the basis of QUIC is intended to address compatibility issues with legacy clients and middleboxes. QUIC authenticates all of its headers, preventing third parties from changing them. QUIC encrypts most of its headers, thereby limiting protocol evolution to QUIC endpoints only. Therefore, middleboxes, in large part, are not required to be updated as new protocol versions are deployed. This document describes the core QUIC protocol, including the conceptual design, wire format, and mechanisms of the QUIC protocol for connection establishment, stream multiplexing, stream and connection-level flow control, and data reliability. Accompanying documents describe QUIC's loss recovery and congestion control, and the use of TLS 1.3 for key negotiation.The Transport Layer Security (TLS) Protocol Version 1.3This document specifies version 1.3 of the Transport Layer Security (TLS) protocol. TLS allows client/server applications to communicate over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery.Minion - Wire ProtocolMinion uses TCP-format packets on-the-wire, for compatibility with existing NATs, Firewalls, and similar middleboxes, but provides a richer set of facilities to the application, as described in the Minion Service Model document. This document specifies the details of the on-the-wire protocol used to provide those services.Abstract Mechanisms for a Cooperative Path Layer under Endpoint Controldraft-trammell-plus-abstract-mech-00 Abstract This document describes the operation of three abstract mechanisms for supporting an explicitly cooperative path layer in the Internet architecture. 
Three mechanisms are described: sender to path signaling with receiver integrity verification; path to receiver signaling with confidential feedback to sender; and direct path to sender signaling.Transport-Independent Path Layer State ManagementThis document describes a simple state machine for stateful network devices on a path between two endpoints to associate state with traffic traversing them on a per-flow basis, as well as abstract signaling mechanisms for driving the state machine. This state machine is intended to replace the de-facto use of the TCP state machine or incomplete forms thereof by stateful network devices in a transport-independent way, while still allowing for fast state timeout of non-established or undesirable flows.MinimaLT, Minimal-latency Networking Through Better SecurityTowards a Flexible Internet Transport Layer ArchitectureThe following sketch is a snapshot of an API currently under development in Go,
available at https://github.com/mami-project/postsocket. The details of the API
are still under development; once the API definition stabilizes, this will be
expanded into prose in a future revision of this draft.
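Pending that revision, a minimal sketch can illustrate the shape of the abstraction. The type and method names below are illustrative only, not the actual github.com/mami-project/postsocket API: a Message carries the per-message properties described in this draft (lifetime, niceness, idempotence), and a hypothetical Carrier interface stands in for the handle an application uses to send and receive Messages over an Association. The loopback implementation exists only to make the example self-contained.

```go
package main

import "fmt"

// Message is an atomic unit of communication with per-message properties;
// the field set here is an illustrative subset, not the draft's full list.
type Message struct {
	Content    []byte
	Lifetime   int  // seconds until the message need no longer be sent; 0 = infinite
	Niceness   uint // priority class; lower values are higher priority
	Idempotent bool // safe to send as 0-RTT data on session resumption
}

// Association models long-lived state between a local and a remote
// endpoint, under which cryptographic state could be cached (not shown).
type Association struct {
	Local, Remote string
}

// Carrier is a hypothetical handle for sending and receiving Messages
// over an Association.
type Carrier interface {
	Send(m Message) error
	Ready(f func(Message)) // register a receive callback
	Close() error
}

// loopbackCarrier is a trivial in-process Carrier used only to make this
// sketch runnable; a real implementation would bind to an Association.
type loopbackCarrier struct {
	recv func(Message)
}

func (c *loopbackCarrier) Send(m Message) error {
	if c.recv != nil {
		c.recv(m) // deliver locally instead of over a transport
	}
	return nil
}

func (c *loopbackCarrier) Ready(f func(Message)) { c.recv = f }
func (c *loopbackCarrier) Close() error          { return nil }

func main() {
	var carrier Carrier = &loopbackCarrier{}
	carrier.Ready(func(m Message) {
		fmt.Printf("received %q\n", m.Content)
	})
	carrier.Send(Message{Content: []byte("hello"), Idempotent: true})
	carrier.Close()
}
```

The asynchronous, callback-driven receive path (Ready) rather than a blocking read is the essential departure from SOCK_STREAM this sketch is meant to show; everything else is placeholder.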