Patent classifications
G06F15/17
Network computer with two embedded rings
A computer comprising a plurality of interconnected processing nodes arranged in a configuration in which multiple layers of interconnected nodes are arranged along an axis, each layer comprising at least four processing nodes connected in a non-axial ring by at least a respective intralayer link between each pair of neighbouring processing nodes, wherein each of the at least four processing nodes in each layer is connected to a respective corresponding node in one or more adjacent layers by a respective interlayer link, the computer being programmed to provide in the configuration two embedded one-dimensional paths and to transmit data around each of the two embedded one-dimensional paths, each embedded one-dimensional path using all processing nodes of the computer in such a manner that the two embedded one-dimensional paths operate simultaneously without sharing links.
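The key property in this abstract is that the two rings each visit every processing node while traversing no common link. A minimal Python sketch of that disjointness check follows; the five-node toy rings are hypothetical stand-ins and not the patented layered construction.

# Illustrative check only: verify that two candidate rings each visit every
# node exactly once and traverse no common (undirected) link.

def ring_links(ring):
    # Set of undirected links traversed by a ring given as a node ordering.
    return {frozenset((ring[i], ring[(i + 1) % len(ring)])) for i in range(len(ring))}

def disjoint_embedded_rings(nodes, ring_a, ring_b):
    covers_all = set(ring_a) == set(nodes) and set(ring_b) == set(nodes)
    link_disjoint = ring_links(ring_a).isdisjoint(ring_links(ring_b))
    return covers_all and link_disjoint

# Hypothetical five-node example (not the layered topology of the patent).
nodes = [0, 1, 2, 3, 4]
ring_a = [0, 1, 2, 3, 4]    # links 0-1, 1-2, 2-3, 3-4, 4-0
ring_b = [0, 2, 4, 1, 3]    # links 0-2, 2-4, 4-1, 1-3, 3-0
assert disjoint_embedded_rings(nodes, ring_a, ring_b)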
High-performance input-output devices supporting scalable virtualization
Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances of a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from a guest pertaining to the virtual device, and determines whether the request from the guest is a fast-path operation to be passed directly to one of the one or more AI instances of the I/O device or a slow-path operation that is to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via the software executed by the electronic device.
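A hedged sketch of the fast-path versus slow-path split described above. The operation names, the queue field, and the AI-instance interface are assumptions made for illustration, not the device's actual interface.

# Illustrative dispatch for a composed virtual device: fast-path requests go
# straight to an assignable-interface (AI) instance, slow-path requests are
# serviced (at least partially) in host software. All names are hypothetical.

FAST_PATH_OPS = {"submit_work", "ring_doorbell"}    # assumed fast-path operations

class AssignableInterface:
    def __init__(self, index):
        self.index = index
    def handle(self, request):
        # Hardware would process this directly; here we just report it.
        return f"AI[{self.index}] handled {request['op']}"

class VirtualDevice:
    def __init__(self, ai_instances):
        self.ai_instances = ai_instances             # AI instances backing the virtual device

    def emulate_in_software(self, request):
        # Slow path: emulate the device resource (e.g., a config-space access).
        return f"software emulated {request['op']}"

    def intercept(self, request):
        if request["op"] in FAST_PATH_OPS:
            ai = self.ai_instances[request.get("queue", 0) % len(self.ai_instances)]
            return ai.handle(request)                # fast path: pass through directly
        return self.emulate_in_software(request)     # slow path: service in software

vdev = VirtualDevice([AssignableInterface(0), AssignableInterface(1)])
print(vdev.intercept({"op": "ring_doorbell", "queue": 1}))    # fast path
print(vdev.intercept({"op": "read_capabilities"}))            # slow path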
MULTI-MULTIDIMENSIONAL COMPUTER ARCHITECTURE FOR BIG DATA APPLICATIONS
A data processing apparatus is provided comprising a front-end interface electronically coupled to a main processor. The front-end interface is configured to receive data stored in a repository, in particular an external storage and/or a network, determine whether the data is single-access data or multiple-access data by analyzing an access parameter designating the data, route the multiple-access data for processing by the main processor, and route the single-access data for pre-processing by the front-end interface and route the results of the pre-processing to the main processor.
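A minimal sketch of that routing decision, assuming the access parameter is an "expected_accesses" count attached to each item; the parameter name and the pre-processing step are hypothetical.

# Illustrative front-end routing: single-access data is pre-processed at the
# front end and only the result is forwarded; multiple-access data is passed
# to the main processor unchanged.

def pre_process(item):
    # Placeholder front-end reduction (e.g., filtering or aggregation).
    return {"summary": len(item["payload"])}

def route(item, main_processor):
    if item["expected_accesses"] <= 1:               # single-access data
        main_processor.append(pre_process(item))     # forward only the result
    else:                                            # multiple-access data
        main_processor.append(item)                  # forward the raw data

main_processor_inbox = []
route({"payload": b"abc", "expected_accesses": 1}, main_processor_inbox)
route({"payload": b"abcdef", "expected_accesses": 5}, main_processor_inbox)
print(main_processor_inbox)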
Methods and systems for data interchange between a network-connected thermostat and cloud-based management server
A thermostat may include one or more temperature sensors, a processor configured to operate in a sleep mode and a wake mode, and a Wi-Fi chip that wirelessly communicates with a thermostat management server. The Wi-Fi chip may be configured to receive data packets from the thermostat management server while the processor operates in the sleep mode, and determine a priority level of the received data packets. The priority level may include a standard priority level and a keep-alive priority level. The Wi-Fi chip may also be configured to filter the received data packets based on the determined priority level of each packet such that the keep-alive priority level packets are discarded, and forward the standard priority level packets to the processor.
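A minimal sketch of the filtering behaviour while the processor sleeps, assuming a per-packet "priority" field with the two values named in the abstract; the packet layout is hypothetical.

# Illustrative filtering on the Wi-Fi chip while the main processor sleeps:
# keep-alive priority packets are discarded on the chip, standard priority
# packets are forwarded (waking the processor).

KEEP_ALIVE = "keep_alive"
STANDARD = "standard"

def filter_while_asleep(packets, forward_to_processor):
    for packet in packets:
        if packet["priority"] == KEEP_ALIVE:
            continue                        # discard: no need to wake the processor
        forward_to_processor(packet)        # standard priority: wake and deliver

delivered = []
filter_while_asleep(
    [{"priority": KEEP_ALIVE, "seq": 1},
     {"priority": STANDARD, "seq": 2}],
    delivered.append,
)
print(delivered)    # only the standard-priority packet reaches the processor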
Method and apparatus to facilitate low latency fault mitigation, QoS management and debug of a processing pipeline
Methods, apparatus, systems and articles of manufacture are disclosed for an example event processor to retrieve an input event and an input event timestamp corresponding to the input event and, in response to a determination that an input event threshold is exceeded within a threshold of time, generate an output event based on the input event and the input event timestamp; and for an anomaly detector to retrieve the output event, determine whether the output event indicates a threat to the functional safety of a system on a chip, and, in response to determining that the output event indicates such a threat, adapt a process for the system on a chip to preserve functional safety.
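A minimal sketch of that two-stage pipeline, assuming the input event threshold is a count within a sliding time window and the "threat" criterion is simply an unusually large burst; thresholds, the window, and the adaptation step are all hypothetical.

# Illustrative pipeline: the event processor emits an output event when more
# than `threshold` input events arrive within `window` time units; the anomaly
# detector then decides whether to adapt the system on a chip.

from collections import deque

class EventProcessor:
    def __init__(self, threshold, window):
        self.threshold, self.window = threshold, window
        self.timestamps = deque()

    def ingest(self, event, timestamp):
        self.timestamps.append(timestamp)
        while self.timestamps and timestamp - self.timestamps[0] > self.window:
            self.timestamps.popleft()            # drop events outside the window
        if len(self.timestamps) > self.threshold:
            return {"event": event, "timestamp": timestamp, "burst": len(self.timestamps)}
        return None

def anomaly_detector(output_event, soc):
    # Assumed criterion: a large burst threatens functional safety.
    if output_event and output_event["burst"] > 5:
        soc["mode"] = "degraded"                 # adapt a process to preserve functional safety

soc = {"mode": "normal"}
proc = EventProcessor(threshold=3, window=5)
for t in range(12):
    anomaly_detector(proc.ingest("irq", t), soc)
print(soc)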
PROCESSING HIGH VOLUME NETWORK DATA
Disclosed are a system comprising a computer-readable storage medium storing at least one program, and a computer-implemented method for event messaging over a network. A subscription interface receives data indicative of a subscription request for sessionized data. An allocation module allocates a sessionizer bank linked to the subscription request. A messaging interface module provisions identifiers linked to the respective processing engines of the sessionizer bank. The messaging interface module registers the allocated sessionizer bank as available to process event messages matching the subscription request by providing the provisioned identifiers. The messaging interface module receives event messages from a producer device linked by a collection server to a selected one of the processing engines of the sessionizer bank. The selected processing engine processes the received event messages in accordance with session rule data linked to the subscription request to generate sessionized data.
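A much simplified sketch of that flow, assuming the session rule is "group messages by a key field"; the engine identifiers, bank size, and rule format are hypothetical.

# Illustrative flow: a subscription allocates a bank of sessionizer engines,
# engines get routable identifiers, and event messages routed to an engine are
# grouped into sessions according to the subscription's session rule.

from collections import defaultdict

class SessionizerEngine:
    def __init__(self, engine_id, session_key):
        self.engine_id = engine_id
        self.session_key = session_key              # session rule from the subscription
        self.sessions = defaultdict(list)
    def process(self, message):
        self.sessions[message[self.session_key]].append(message)

def allocate_bank(subscription, size=2):
    key = subscription["session_key"]
    return {f"engine-{i}": SessionizerEngine(f"engine-{i}", key) for i in range(size)}

bank = allocate_bank({"session_key": "user"})       # allocation module
engine = bank["engine-0"]                           # collection server selects an engine
for msg in ({"user": "a", "page": "/"}, {"user": "a", "page": "/buy"}, {"user": "b", "page": "/"}):
    engine.process(msg)
print(dict(engine.sessions))                        # sessionized data, grouped per user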
LOCALITY-AWARE SCHEDULING FOR NIC TEAMING
Some embodiments provide a method for distributing packets processed at multiple sockets across a team of network interface controllers (NICs) in a processing system. The method of some embodiments uses existing distribution (or selection) algorithms for distributing traffic across NICs of a NIC team (across several sockets), but augments them to prioritize local NICs over remote NICs. When active NICs local to the socket associated with a packet are available, the method of some embodiments uses the selection algorithm to select from an array of the active local NICs. When active NICs local to the socket are not available, the method of some embodiments uses the selection algorithm to select from an array of the other active NICs on the NIC team.
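A minimal sketch of that locality-aware selection, assuming a simple hash-modulo stands in for whatever distribution algorithm is actually used; the NIC names and team layout are hypothetical.

# Illustrative locality-aware selection: prefer active NICs local to the
# packet's socket; fall back to the remaining active NICs of the team.

def select_nic(packet_hash, socket_id, team):
    # team: list of dicts like {"name": ..., "socket": ..., "active": ...}
    local = [nic for nic in team if nic["active"] and nic["socket"] == socket_id]
    candidates = local or [nic for nic in team if nic["active"]]
    return candidates[packet_hash % len(candidates)]    # existing selection algorithm

team = [
    {"name": "vmnic0", "socket": 0, "active": True},
    {"name": "vmnic1", "socket": 0, "active": False},
    {"name": "vmnic2", "socket": 1, "active": True},
]
print(select_nic(packet_hash=7, socket_id=0, team=team)["name"])   # local active NIC preferred
print(select_nic(packet_hash=7, socket_id=2, team=team)["name"])   # no local NIC: any active NIC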
Circuit Architecture Mapping Signals to Functions for State Machine Execution
An integrated circuit includes a memory configured to store a plurality of functions; a mapping interface configured to perform a mapping from a received first signal to a first function of the plurality of functions; and a state machine configured to, in response to said mapping, execute the first function; wherein the integrated circuit is arranged to, in dependence on the execution of the first function at the state machine, modify said mapping between the first signal and the first function so as to re-map the first signal to a second function of the plurality of functions such that, on receiving a subsequent first signal, the state machine is configured to execute the second function.
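A minimal sketch of the remapping behaviour in software terms: executing the mapped function changes the mapping, so the next occurrence of the same signal runs a different function. The signal name, function names, and remapping rule are hypothetical.

# Illustrative signal-to-function mapping with remapping on execution.

class MappedStateMachine:
    def __init__(self):
        self.functions = {"init": self.init, "run": self.run}   # stored functions
        self.mapping = {"SIG_A": "init"}                        # initial mapping

    def init(self):
        print("initialising")
        self.mapping["SIG_A"] = "run"          # re-map SIG_A in dependence on this execution

    def run(self):
        print("running")

    def on_signal(self, signal):
        self.functions[self.mapping[signal]]()  # execute the currently mapped function

sm = MappedStateMachine()
sm.on_signal("SIG_A")   # executes init, which remaps SIG_A to run
sm.on_signal("SIG_A")   # the subsequent signal now executes run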
Network system, network node and communication method
A network system configured to execute I/O commands and application commands in parallel and comprising a network and at least one network node, wherein the at least one network node is connected to the network via a network adapter and is configured to run several processes and/or threads in parallel, wherein the at least one network node comprises or is configured to establish a common communication channel (C-channel) to be used by the several processes and/or threads for data communication with the network via the network adapter, wherein the C-channel comprises or is established to comprise a work queue (WQ) for execution of I/O commands and a completion queue (CQ) for indication of a status of I/O commands, and wherein the at least one network node, in particular its existing or to-be-established C-channel, is configured for exclusive access by precisely one single process or thread out of the several processes and/or threads to the CQ of the C-channel at a particular time.
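A minimal sketch of the C-channel's exclusivity property, assuming a lock models "precisely one process or thread at a particular time"; the queue contents and completion handling are placeholders, not the adapter's actual semantics.

# Illustrative common communication channel: any thread may post I/O commands
# to the work queue, but a lock ensures exactly one thread at a time drains the
# completion queue.

import queue
import threading

class CChannel:
    def __init__(self):
        self.work_queue = queue.Queue()         # WQ: I/O commands awaiting execution
        self.completion_queue = queue.Queue()   # CQ: completion statuses
        self._cq_owner = threading.Lock()       # exclusive CQ access

    def post(self, command):
        self.work_queue.put(command)            # any process/thread may post work

    def poll_completions(self):
        # Only the single thread holding the lock may consume completions.
        with self._cq_owner:
            completions = []
            while not self.completion_queue.empty():
                completions.append(self.completion_queue.get())
            return completions

channel = CChannel()
channel.post({"op": "read", "len": 4096})
channel.completion_queue.put({"op": "read", "status": "ok"})   # stand-in for the adapter
print(channel.poll_completions())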