Patent classifications
G06F11/16
ENCODING SLICE VERIFICATION INFORMATION TO SUPPORT VERIFIABLE REBUILDING
A method includes storing, by a set of storage units, a set of appended encoded data slices, where an appended encoded data slice of the set of appended encoded data slices includes an encoded data slice of a set of encoded data slices and slice verification information. The method further includes identifying, by a rebuilding agent, one of the set of appended encoded data slices for rebuilding, rebuilding the encoded data slice, generating current slice verification information, and sending an appended rebuilt encoded data slice that includes the rebuilt encoded data slice and the current slice verification information to a storage unit. The method further includes verifying, by the storage unit, that the current slice verification information corresponds to the slice verification information and, when it does, storing the appended rebuilt encoded data slice as a trusted rebuilt encoded data slice.
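A minimal Python sketch of this verification flow, assuming the slice verification information is an HMAC over the slice content under a key shared by the rebuilding agent and the storage unit (the abstract does not specify the scheme; all names below are illustrative):

```python
import hashlib
import hmac

SHARED_KEY = b"illustrative-shared-key"  # assumption: an HMAC-based scheme

def slice_verification_info(encoded_slice: bytes) -> bytes:
    # Verification information computed over an encoded data slice.
    return hmac.new(SHARED_KEY, encoded_slice, hashlib.sha256).digest()

def append_info(encoded_slice: bytes) -> bytes:
    # An appended encoded data slice: slice plus its verification information.
    return encoded_slice + slice_verification_info(encoded_slice)

def storage_unit_accepts(appended_rebuilt: bytes, stored_info: bytes) -> bool:
    # The storage unit recomputes verification information over the rebuilt
    # slice and checks that it corresponds to the stored information; only
    # then is the rebuilt slice stored as trusted.
    rebuilt_slice, current_info = appended_rebuilt[:-32], appended_rebuilt[-32:]
    return (hmac.compare_digest(current_info,
                                slice_verification_info(rebuilt_slice))
            and hmac.compare_digest(current_info, stored_info))

# Rebuilding agent: rebuild the slice, generate current verification
# information, and send the appended rebuilt slice to a storage unit.
original = b"encoded data slice"
stored_info = slice_verification_info(original)
appended_rebuilt = append_info(original)   # a correct rebuild matches the original
assert storage_unit_accepts(appended_rebuilt, stored_info)
```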
Node down recovery method and apparatus, electronic device, and storage medium
A method and apparatus for recovery from a node crash, an electronic device, and a storage medium are provided. The method is applicable to a proxy server in a master-slave system. The master-slave system further includes a target master node controlled by the proxy server and a target slave node corresponding to the target master node. If both the target master node and the target slave node crash, the proxy server obtains a pre-stored persistent file from the target slave node. The target slave node stores a backup of the cache data cached in the target master node, and the persistent file is generated based on the cache data in the target slave node. A new, non-crashed target master node is then deployed based on the persistent file, and a corresponding new target slave node is deployed. With this solution, after the target master node and the target slave node both crash, the master-slave system can be recovered to a normal working state.
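A hedged sketch of the proxy-side recovery path, modeled loosely on a Redis-style master/slave pair with an on-disk persistent file; the classes and methods are illustrative, not taken from the patent:

```python
class Node:
    def __init__(self, role, cache=None):
        self.role = role                    # "master" or "slave"
        self.cache = dict(cache or {})      # data cached in memory
        self.persisted = {}                 # persistent file on disk
        self.alive = True

    def persist(self):
        # The slave periodically generates a persistent file from its
        # backup of the master's cache; the file survives a crash.
        self.persisted = dict(self.cache)

class ProxyServer:
    def recover(self, master, slave):
        # Both the target master and its slave crashed: obtain the
        # pre-stored persistent file from the slave and redeploy the pair.
        assert not master.alive and not slave.alive
        snapshot = slave.persisted
        new_master = Node("master", snapshot)        # fresh, non-crashed master
        new_slave = Node("slave", new_master.cache)  # its corresponding slave
        return new_master, new_slave

master = Node("master", {"key": "value"})
slave = Node("slave", master.cache)      # slave backs up the master's cache
slave.persist()
master.alive = slave.alive = False       # both nodes crash
master, slave = ProxyServer().recover(master, slave)
assert master.cache == {"key": "value"}  # recovered to a normal working state
```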
Determining processor offsets to synchronize processor time values
Provided are a computer program product, system, and method for determining processor offsets to synchronize processor time values. A determination is made of a master processor offset from one of a plurality of time values of the master processor and a time value of one of the slave processors. A determination is made of slave processor offsets, wherein each slave processor offset is determined from the master processor offset, one of the time values of the master processor, and a time value of the slave processor. A current time value of the master processor is adjusted by the master processor offset. A current time value of each of the slave processors is adjusted by the slave processor offset for the slave processor whose time value is being adjusted.
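One plausible reading of the offset arithmetic, sketched below: the master offset aligns the master to one slave's captured time value, and each slave offset then aligns that slave to the master's adjusted timeline. These are assumed formulas, not the patent's exact equations:

```python
def compute_offsets(master_times, slave_times):
    # Master offset: from one of the master's time values and one slave's
    # time value. Slave offsets: from the master offset, a master time
    # value, and each slave's own time value.
    m = master_times[0]
    master_offset = slave_times[0] - m
    slave_offsets = [(m + master_offset) - s for s in slave_times]
    return master_offset, slave_offsets

master_times = [1000, 1002]              # a plurality of master time values
slave_times = [1005, 990, 1010]
master_offset, slave_offsets = compute_offsets(master_times, slave_times)

# Adjust the master's current time by the master offset, and each slave's
# current time by that slave's own offset.
adjusted_master = master_times[0] + master_offset
adjusted_slaves = [s + off for s, off in zip(slave_times, slave_offsets)]
assert all(a == adjusted_master for a in adjusted_slaves)  # synchronized
```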
EVALUATION FOR REBUILDING PERFORMANCE OF REDUNDANT ARRAYS OF INDEPENDENT DISKS
Embodiments of the present disclosure provide a solution for evaluating the rebuilding performance of a redundant array of independent disks. In some embodiments, there is provided a computer-implemented method comprising: simulating, based on a first group of redundant arrays of independent disks, a rebuilding process for a second group of redundant arrays of independent disks; obtaining a first performance metric of the simulated rebuilding process; and identifying a factor associated with the rebuilding performance of the second group of redundant arrays of independent disks based on the first performance metric.
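A rough sketch of such a simulation, assuming per-disk rebuild throughputs measured on the first (reference) group are sampled to estimate rebuild times for the second (target) group; the data model is invented for illustration:

```python
import random

def simulate_rebuild(reference_group, target_group, trials=1000):
    # Sample throughputs observed on the reference group to simulate the
    # target group's rebuild, then derive a performance metric from it.
    times = []
    for _ in range(trials):
        rate = random.choice(reference_group["throughput_mb_s"])
        times.append(target_group["disk_size_mb"] / rate)
    return sum(times) / len(times)   # first metric: mean rebuild time (s)

reference = {"throughput_mb_s": [80, 95, 100, 110]}
target = {"disk_size_mb": 4_000_000}
mean_rebuild_s = simulate_rebuild(reference, target)
# A factor associated with rebuild performance could then be identified,
# e.g. by comparing the metric across disk sizes or group configurations.
print(f"simulated mean rebuild time: {mean_rebuild_s:.0f} s")
```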
Microcontroller utilizing redundant address decoders and electronic control device using the same
The present invention provides a microcontroller that can continue operating even when a failure occurs, without making the memory redundant, thereby suppressing an increase in chip area. The microcontroller includes three or more processors that execute the same process in parallel and a storage device. The storage device includes a memory mat having a non-redundant storage region, an address selection part, a data output part, and a failure recovery part. The address selection part selects a storage region in the memory mat on the basis of the three or more addresses issued by the processors at the time of an access. The data output part reads data from the storage region in the memory mat selected by the address selection part. The failure recovery part corrects or masks failures, up to a predetermined number, that occur in the memory mat, the address selection part, or the data output part.
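The failure-masking idea can be illustrated with a majority vote over the addresses issued by the lock-stepped processors; this toy sketch stands in for the patent's hardware circuit:

```python
from collections import Counter

def select_address(addresses):
    # Majority-vote the three (or more) addresses issued by the redundant
    # processors: a single faulty processor or decoder is out-voted, so
    # the non-redundant memory mat is still accessed at the correct region.
    addr, votes = Counter(addresses).most_common(1)[0]
    if votes <= len(addresses) // 2:
        raise RuntimeError("no majority: failures exceed the correctable bound")
    return addr

memory_mat = {0x40: b"data"}               # storage region, not made redundant
issued = [0x40, 0x40, 0x41]                # one processor issued a faulty address
print(memory_mat[select_address(issued)])  # failure masked by the vote
```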
Systems and methods for providing continuing access to a remote computer program
Systems and methods are provided for using a file-sharing service to identify, execute, and provide continuing access to remote computer programs. In certain embodiments, a list of files to be accessed remotely is provided to a first device, a selection is received from a user at the first device identifying a file from the provided list, and an application is executed on a second device to access a copy of the identified file, which is synchronized with a file-sharing service.
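A bare-bones sketch of the described flow, with a hypothetical in-memory stand-in for the file-sharing service:

```python
def list_shared_files(service):
    # First device: obtain the list of files available for remote access.
    return sorted(service["files"])

def open_remotely(service, filename):
    # The selection made on the first device causes an application on a
    # second device to open a copy of the file that is kept synchronized
    # through the file-sharing service.
    if filename not in service["files"]:
        raise FileNotFoundError(filename)
    synced_copy = service["files"][filename]
    return f"second device opened {filename!r}: {synced_copy}"

service = {"files": {"report.doc": "contents"}}
choice = list_shared_files(service)[0]   # user selects from the provided list
print(open_remotely(service, choice))
```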
Extending a database recovery point at a disaster recovery site
A database administrator (DBA) may pre-generate database recovery jobs on a convenient schedule at a local site, then recover a database at a disaster recovery site. Archive log files for the database that are generated in the interim between recovery job generation and recovery job execution are automatically incorporated into the recovery job when it executes, extending the recovery point closer to the time of the disruption that triggered the recovery.
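A small sketch of the idea, assuming recovery jobs and archive logs are plain records; the automatic incorporation step at execution time is the part the abstract emphasizes:

```python
from datetime import datetime

def generate_recovery_job(backup, logs_at_generation):
    # The DBA pre-generates the job at the local site on a convenient schedule.
    return {"backup": backup,
            "logs": list(logs_at_generation),
            "generated": datetime.now()}

def execute_recovery_job(job, current_archive_logs):
    # At the disaster recovery site, archive logs produced in the interim
    # between generation and execution are folded in automatically,
    # extending the recovery point toward the time of the disruption.
    interim = [log for log in current_archive_logs if log not in job["logs"]]
    job["logs"].extend(interim)
    return f"restored {job['backup']} + replayed {len(job['logs'])} logs"

job = generate_recovery_job("db.full.bak", ["log.001", "log.002"])
# ... disruption occurs; more logs were archived in the meantime ...
print(execute_recovery_job(job, ["log.001", "log.002", "log.003", "log.004"]))
```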
Preventing read disturbance accumulation in a cache memory
A method for preventing read disturbance accumulation in a cache memory. The method includes accessing a plurality of data lines in a cache set, generating a plurality of corrected data from a plurality of initial data based on a plurality of error correction codes (ECCs), and selecting a respective corrected data of the plurality of corrected data based on a respective way of a plurality of ways. Each of the plurality of data lines includes a respective data field of a plurality of data fields and a respective ECC field of a plurality of ECC fields. The plurality of initial data are stored in the plurality of data fields and the plurality of ECCs are stored in the plurality of ECC fields. Each of the plurality of ways is associated with a respective data line of the plurality of data lines.
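An illustrative sketch of the per-way correction and selection, using a toy repetition code as the ECC (the patent does not fix a particular code):

```python
def correct(data: int, ecc: tuple) -> int:
    # Bitwise majority vote over the data field and a two-copy ECC field:
    # a toy repetition code standing in for the unspecified ECC. Correcting
    # on every access stops read disturbances from accumulating.
    a, b = ecc
    return (data & a) | (data & b) | (a & b)   # per-bit majority of 3 copies

# A cache set: one data line per way, each with a data field and an ECC field.
cache_set = [
    {"data": 0b1011, "ecc": (0b1011, 0b1011)},   # way 0
    {"data": 0b0110, "ecc": (0b0111, 0b0110)},   # way 1: one copy disturbed
]

# Access all data lines in the set, generate corrected data from each line's
# initial data and ECC, then select the corrected data by the way that hit.
corrected = [correct(line["data"], line["ecc"]) for line in cache_set]
hit_way = 1
assert corrected[hit_way] == 0b0110              # disturbance corrected away
```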
Failover methods and systems in a networked storage environment
Failover methods and systems for a storage environment are provided. During a takeover operation to take over storage of a first storage system node by a second storage system node, the second storage system node copies information from a first storage location to a second storage location. The first storage location points to an active file system of the first storage system node, and the second storage location is assigned to the second storage system node for the takeover operation. The second storage system node quarantines storage space likely to be used by the first storage system node for a write operation, while the second storage system node attempts to take over the storage of the first storage system node. The second storage system node utilizes information stored at the second storage location during the takeover operation to give back control of the storage to the first storage system node.
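A schematic sketch of the takeover/giveback bookkeeping described above; the dictionary fields are invented for illustration:

```python
def takeover(first_node, second_node):
    # The second node copies the first node's active-file-system pointer
    # into its own assigned (second) storage location, and quarantines
    # space the first node is likely to write to mid-takeover.
    second_node["takeover_loc"] = dict(first_node["active_fs_loc"])
    second_node["quarantined"] = set(first_node["likely_write_blocks"])

def giveback(first_node, second_node):
    # Information stored at the second location is used to give control
    # of the storage back to the first node.
    first_node["active_fs_loc"] = second_node.pop("takeover_loc")
    second_node["quarantined"].clear()

first = {"active_fs_loc": {"root": "fs@gen42"}, "likely_write_blocks": {7, 9}}
second = {}
takeover(first, second)
assert 7 in second["quarantined"]     # in-flight writes are fenced off
giveback(first, second)
assert first["active_fs_loc"] == {"root": "fs@gen42"}
```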
Electronic device and firmware recovery program that ensure recovery of firmware
An electronic device includes a first nonvolatile memory, a second nonvolatile memory, and a control circuit. The first nonvolatile memory includes an area that stores firmware, the firmware including a first kernel. The second nonvolatile memory includes an area that stores an update program, the update program including a second kernel. The control circuit boots one of the first and second kernels, and the booted kernel can write data to the first nonvolatile memory. When the firmware is incapable of being read, the control circuit reads the update program, performs the boot process to boot the second kernel, and writes update data for the firmware to the first nonvolatile memory, which is writable by the booted second kernel.
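A compact sketch of the boot-and-recover decision, with the two nonvolatile memories modeled as dictionaries (names are illustrative):

```python
def boot(first_nvm, second_nvm, update_image):
    # Boot the first kernel from firmware in the first nonvolatile memory
    # when it is readable; otherwise boot the second kernel from the update
    # program in the second nonvolatile memory and rewrite the firmware area.
    if first_nvm.get("firmware"):                    # firmware readable
        return "booted first kernel"
    kernel = second_nvm["update_program"]["kernel"]  # boot the second kernel
    first_nvm["firmware"] = update_image             # second kernel writes update
    return f"booted {kernel}, firmware recovered"

first_nvm = {"firmware": None}                       # firmware cannot be read
second_nvm = {"update_program": {"kernel": "second kernel"}}
print(boot(first_nvm, second_nvm, "fw-v2"))          # recovery path taken
assert first_nvm["firmware"] == "fw-v2"
```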