Patent classifications
G06F16/58
Automated event detection and photo product creation
A computer-implemented method for automatically detecting events and creating photo-product designs in a photo-product design system includes automatically identifying an event, by an event detection module, based on daily numbers of captured photos over a plurality of days; automatically selecting a photo-product type by an intelligent product design creation engine in the photo-product design system; calculating a daily weight for a photo-product design in the photo-product type based on the daily numbers of captured photos; automatically determining a number of product photos allocated to each day based on the associated daily weight; automatically selecting product photos for each day from the photos captured at the event according to the number of product photos allocated to that day; and automatically creating a photo-product design for the event using the selected product photos.
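The weight-based allocation step described in the abstract could be sketched as follows. This is an illustrative guess, not the patented implementation: the function name `allocate_photos` and the use of largest-remainder rounding are assumptions.

```python
# Hypothetical sketch: weight each day of a detected event by its share of
# captured photos, then allocate a fixed product-photo budget proportionally,
# using largest-remainder rounding so the allocations sum to the budget.
def allocate_photos(daily_counts, total_slots):
    total = sum(daily_counts)
    if total == 0:
        return [0] * len(daily_counts)
    weights = [c / total for c in daily_counts]      # daily weight per day
    raw = [w * total_slots for w in weights]         # ideal fractional shares
    alloc = [int(r) for r in raw]                    # floor allocation
    # hand remaining slots to the days with the largest fractional remainders
    by_remainder = sorted(range(len(raw)),
                          key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in by_remainder[: total_slots - sum(alloc)]:
        alloc[i] += 1
    return alloc
```

For example, daily counts of 100, 50, and 30 photos with a 20-photo budget yield allocations of 11, 6, and 3 photos.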
PROJECTION DEVICE
A projection device includes a projection module and a first camera module. The projection device has a first optical axis and is configured to form a projection area, wherein a projection of the first optical axis on an X-Z plane of the projection device is perpendicular to an X-Y plane on which the projection area is formed. The first camera module is disposed on a side of the projection module and includes a second optical axis, wherein the first camera module is configured to form a first shooting area, the second optical axis forms a first angle Δθ1 with respect to the first optical axis, the projection area at least partially overlaps the first shooting area to form an overlapping area, and the first angle Δθ1 is a function of a distance between the projection module and the first camera module.
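The abstract states only that Δθ1 is a function of the module spacing. One plausible geometric reading, sketched below as an assumption rather than the claimed formula, is that the camera axis is tilted so it aims at the center of the projection area: the angle is then the arctangent of the camera-to-projector baseline over the projection distance.

```python
import math

def camera_tilt_angle(baseline_d, projection_distance_h):
    """Illustrative first angle Delta-theta-1 (degrees), assuming the camera
    axis must intersect the projection axis at the projection surface a
    distance h away, with the camera offset by baseline_d."""
    return math.degrees(math.atan2(baseline_d, projection_distance_h))
```

With zero baseline the axes are parallel (0°), and a baseline equal to the projection distance gives a 45° tilt.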
Artificial intelligence-based quality scoring
The technology disclosed assigns quality scores to bases called by a neural network-based base caller by (i) quantizing classification scores of predicted base calls produced by the neural network-based base caller in response to processing training data during training, (ii) selecting a set of quantized classification scores, (iii) for each quantized classification score in the set, determining a base calling error rate by comparing its predicted base calls to corresponding ground truth base calls, (iv) determining a fit between the quantized classification scores and their base calling error rates, and (v) correlating the quality scores to the quantized classification scores based on the fit.
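Steps (i) through (v) can be illustrated with a small calibration sketch. The binning width, the Phred-style quality scale, and the linear least-squares fit are assumptions chosen for clarity; the disclosure does not specify them.

```python
# Illustrative sketch of the five steps: (i)-(ii) quantize classification
# scores into bins, (iii) measure each bin's empirical base-calling error
# rate against ground truth, (iv) fit a line in Phred-quality space, and
# (v) return a mapping from classification score to quality score.
import math

def phred(p_err):
    # Phred quality: Q = -10 * log10(error probability)
    return -10.0 * math.log10(max(p_err, 1e-10))

def calibrate(scores, correct):
    bins = {}
    for s, ok in zip(scores, correct):
        q = round(s, 1)                      # quantized classification score
        errs, n = bins.get(q, (0, 0))
        bins[q] = (errs + (0 if ok else 1), n + 1)
    # one (quantized score, Phred quality) point per bin
    pts = [(q, phred(e / n)) for q, (e, n) in bins.items() if n]
    # ordinary least-squares fit: quality = a * score + b
    m = len(pts)
    sx = sum(q for q, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(q * q for q, _ in pts); sxy = sum(q * y for q, y in pts)
    a = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    b = (sy - a * sx) / m
    return lambda score: a * score + b       # the correlating fit of step (v)
```

On synthetic data where a 0.5 score carries a 10% error rate and a 0.9 score a 1% error rate, the fit maps those scores to qualities of Q10 and Q20 respectively.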
Visual representation coherence preservation
A method, a computer program product, and a computer system determine and arrange images to include in a visual representation. The method includes receiving a textual statement and identifying a plurality of terms in the textual statement that are to be visualized in the visual representation. The method includes generating a plurality of sequences of images where each image in a given one of the sequences is associated with one of the terms. Each image is associated with at least one tag. The method includes determining a global coherence and a local coherence for each of the sequences based on the tags of the images. The method includes selecting one of the sequences based on the global coherence and the local coherence. The method includes generating the visual representation where the images of the selected sequence are included.
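The coherence-based selection could be sketched as below. The specific measures, Jaccard overlap of adjacent images' tags for local coherence and tags shared across the whole sequence for global coherence, are assumptions for illustration, as is the equal weighting.

```python
# Illustrative sketch: each candidate sequence is a list of tag sets, one
# per image. Local coherence averages tag overlap between adjacent images;
# global coherence measures tags shared across the entire sequence.
def local_coherence(seq):
    pairs = list(zip(seq, seq[1:]))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

def global_coherence(seq):
    common = set.intersection(*seq)
    union = set.union(*seq)
    return len(common) / len(union) if union else 0.0

def select_sequence(sequences, w=0.5):
    # pick the sequence with the best weighted combination of both scores
    return max(sequences,
               key=lambda s: w * global_coherence(s) + (1 - w) * local_coherence(s))
```

A sequence whose images all share a "dog" tag would outscore, and be selected over, a sequence of unrelated images.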
Product presentation assisted by visual search
Example embodiments may provide a system, apparatus, computer readable media, and/or method configured for processing input representing data associated with a first product, the first product comprising a plurality of components, processing input representing a particular one of the components, processing input representing an attribute of the particular component or of the first product, and querying a product memory based on the particular component and the attribute to identify a second product.
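The query step might look like the following minimal sketch. The flat list-of-dicts "product memory" and the field names are assumptions; the abstract does not describe the storage layout.

```python
# Hypothetical product-memory query: return every stored product that
# contains the given component and carries the given attribute.
def query_products(product_memory, component, attribute):
    return [p for p in product_memory
            if component in p["components"] and attribute in p["attributes"]]
```

Given a memory of watches, querying for component "strap" with attribute "leather" would return only the leather-strapped product.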
NETWORK SETUP FOR DIGITAL PICTURE FRAMES
A picture frame and methods of setup, gifting, and/or use. Network connection allows digital frames to be set up remotely by a first user for a second user. The first user can upload photos from electronic devices or from photo collections of community members before the second user receives the frame device. The frame is thus ready for display upon powering on by the second user. An integrated camera is used to automatically determine an identity of a frame viewer and can capture gesture-based feedback. The displayed photos are automatically shown and/or changed according to the detected viewers. The photos can be filtered and cropped at the receiver side. Clustering photos by content is used to improve display and to respond to photo viewer desires.
Automatically generating panorama tours
In one aspect, a request to generate an automated tour based on a set of panoramic images is received. Each panoramic image is associated with geographic location information and with linking information linking it to one or more other panoramic images in the set. A starting panoramic image is determined, and a second panoramic image is determined based at least in part on the starting panoramic image and the linking information associated with the starting and second panoramic images. A first transition between the starting panoramic image and the second panoramic image is also determined based at least in part on the linking information for these panoramic images. Additional panoramic images, as well as a second transition between the additional panoramic images, are also determined. The determined panoramic images and transitions are added to the tour according to an order of the tour.
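The link-following construction could be sketched as a greedy graph walk. The `links` structure mapping each image to (neighbor, heading) pairs, and the greedy first-unvisited-neighbor choice, are assumptions made for illustration.

```python
# Illustrative sketch: start at a chosen panorama and repeatedly follow the
# linking information to an unvisited neighbor, recording a transition
# (here, a heading in degrees) for each hop until no link remains.
def build_tour(links, start):
    tour, transitions = [start], []
    visited = {start}
    current = start
    while True:
        nxt = next(((n, h) for n, h in links.get(current, [])
                    if n not in visited), None)
        if nxt is None:
            break
        neighbor, heading = nxt
        transitions.append((current, neighbor, heading))
        tour.append(neighbor)
        visited.add(neighbor)
        current = neighbor
    return tour, transitions
```

For three linked panoramas A-B-C, the walk yields the ordered tour [A, B, C] with one transition per hop.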
METHODS AND APPARATUSES FOR CORNER DETECTION
An apparatus configured to be worn on a user's head includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit coupled to the camera system, the processing unit configured to: obtain a first image with a first resolution, the first image having a first corner, determine a second image with a second resolution, the second image having a second corner that corresponds with the first corner in the first image, wherein the second image is based on the first image, the second resolution being less than the first resolution, detect the second corner in the second image, determine a position of the second corner in the second image, and determine a position of the first corner in the first image based at least in part on the determined position of the second corner in the second image.
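The coarse-to-fine idea can be sketched as follows. The average-pooling downscale and the toy "detector" that picks the maximum response are stand-ins chosen for illustration, not the claimed corner detector; what the sketch shows is deriving the lower-resolution second image from the first and mapping the detected position back by the scale factor.

```python
# Illustrative sketch: derive a low-resolution image from the full-resolution
# one, detect a corner there, and map its position back to full resolution.
def downscale(img, factor):
    # average-pool a 2D list by `factor` (assumes dimensions divide evenly)
    h, w = len(img) // factor, len(img[0]) // factor
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w)] for y in range(h)]

def strongest_response(img):
    # stand-in "corner detector": (x, y) of the maximum pixel response
    _, x, y = max((v, x, y) for y, row in enumerate(img)
                  for x, v in enumerate(row))
    return x, y

def detect_corner_full_res(img, factor=2):
    small = downscale(img, factor)          # the second, lower-res image
    sx, sy = strongest_response(small)      # position of the second corner
    return sx * factor, sy * factor         # position of the first corner
```

Detecting at half resolution and doubling the coordinates recovers the corner's approximate full-resolution position, at a fraction of the detection cost.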
METHOD AND APPARATUS FOR OBTAINING GEOGRAPHICAL LOCATION INFORMATION, AND ELECTRONIC TERMINAL
The present invention relates to the field of electronic device applications, and discloses a method and an apparatus for obtaining geographical location information, and an electronic terminal, which are used to resolve a problem that geographical location information of a file cannot be obtained when a GPS positioning function and a network positioning function cannot be normally used. An embodiment provided by the present invention includes: obtaining attribute information of a reference object; and, when the attribute information of the reference object includes geographical location information, determining at least one reference object whose generation moment has a minimum difference from a generation moment of a to-be-operated object, and using the geographical location information of the determined at least one reference object as the geographical location information of the to-be-operated object. This embodiment of the present application is mainly applied to a procedure of obtaining geographical location information.
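The nearest-in-time lookup at the heart of the embodiment could be sketched as below; the tuple layout of the reference objects is an assumption for illustration.

```python
# Illustrative sketch: when GPS and network positioning are unavailable,
# borrow the geotag of the reference object whose generation moment is
# closest to that of the to-be-operated object.
def inherit_geotag(target_time, references):
    # references: list of (generation_time, (lat, lon) or None)
    geotagged = [(t, loc) for t, loc in references if loc is not None]
    if not geotagged:
        return None
    _, loc = min(geotagged, key=lambda r: abs(r[0] - target_time))
    return loc
```

A photo taken at time 100 would inherit the geotag of a reference captured at time 105 rather than one captured at time 90, since its generation moment differs less.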
IMAGE TRANSMISSION METHOD AND APPARATUS
The present disclosure relates to an image transmission method and apparatus. In one embodiment, the method includes acquiring an image; automatically extracting personal characteristic information from the image; and automatically sending the image to a terminal device associated with the personal characteristic information. The image is thus automatically transmitted to a recipient whose personal characteristics match those detected in the image, without user intervention.
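The routing step could be sketched as a match against a registry. The registry shape and the string labels standing in for extracted characteristic information (e.g., face embeddings in practice) are assumptions for illustration.

```python
# Illustrative sketch: match characteristic information extracted from an
# image against a registry mapping characteristics to terminal devices, and
# return the devices the image should be automatically sent to.
def route_image(extracted_features, registry):
    # registry: {device_id: set of personal characteristics}
    return sorted(dev for dev, feats in registry.items()
                  if feats & extracted_features)
```

An image in which only "face:alice" is detected would be routed to Alice's registered device and to no one else's.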