
US20140287779A1 - System, method and device for providing personalized mobile experiences at multiple locations - Google Patents


Info

Publication number
US20140287779A1
Authority
US
United States
Prior art keywords
digital
user
location
story
mobile device
Prior art date
2013-03-22
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/219,901
Inventor
Brian Joseph O'Keefe
Kenneth Alan Parulski
Leslie Gerald Moore, JR.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tourblend Innovations LLC
Original Assignee
aDesignedPath for UsabilitySolutions LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2013-03-22
Filing date
2014-03-19
Publication date
2014-09-25
2014-03-19 Application filed by aDesignedPath for UsabilitySolutions LLC
2014-03-19 Priority to US14/219,901
2014-03-20 Assigned to aDesignedPath for UsabilitySolutions, LLC (Assignors: MOORE, LESLIE GERALD, JR.; O'KEEFE, BRIAN JOSEPH; PARULSKI, KENNETH ALAN)
2014-09-25 Publication of US20140287779A1
2016-05-10 Priority to US15/151,448 (patent US10136260B2)
2019-03-23 Assigned to TOURBLEND INNOVATIONS, LLC (Assignor: aDesignedPath for UsabilitySolutions, LLC)
Status: Abandoned

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/025Services making use of location information using location based information parameters
    • H04W4/027Services making use of location information using location based information parameters using movement velocity, acceleration information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/2866Architectures; Arrangements
    • H04L67/30Profiles
    • H04L67/306User profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/52Network services specially adapted for the location of the user terminal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/024Guidance services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising

Definitions

  • Some embodiments of the present invention relate to personalized travel experiences.
  • some embodiments of the present invention relate to mobile devices and systems, as well as related methods, for providing digital story experiences related to a common theme at different geographic locations.
  • Smart phones, tablet computers, and other portable devices incorporating wireless connections to the Internet have opened up opportunities for new, entertaining tourism experiences. These devices are currently used to provide location-aware travel guides to various cities and historical sites. For example, various smart phone apps provide a guide to restaurants, bars, and nightlife in cities such as Boston and New York. Some of these apps use the smart phone's built-in GPS to provide various maps and lists of venues in order of distance from the user's current location.
  • Fodor's™ City Apps provides iPhone™ and Android™ apps for a number of major cities, including New York City.
  • the Fodor's apps provide recommendations for sightseeing, restaurants and hotels.
  • Each Fodor's app permits the user to book hotels, restaurants, and entertainment in the particular city, using Expedia™, OpenTable™, and TicketsNow™. It also permits the user to bookmark and create comments about their favorite attractions.
  • the user can download an interactive offline map and reviews, so that the user can browse the map, read reviews, and make notes when in the subway or other areas with poor wireless reception.
  • Photography is often used to record and share experiences, such as vacation trips, family outings, or seasonal events. Still and video images of such experiences can be captured using image capture devices including camera phones (such as smart phones), digital still cameras, and camcorders.
  • the digital images captured by these image capture devices can be shared by e-mail and uploaded to web sites such as Facebook™ and Flickr™, where they can be viewed by friends.
  • the uploaded images can be printed using on-line photo service providers, such as Shutterfly™. Users can order photo products, such as photo books and collages, which utilize uploaded digital images.
  • It is known to use a “geofence” to create a virtual perimeter for a real-world geographic area, such as a boundary around a store, school, or other area of interest.
  • When a location-aware device, such as a smart phone running a location-based service (LBS), enters or exits a geofence, the device can generate a notification.
  • The notification can be sent to an email account or to another smart phone. For example, a parent can be notified when a child leaves an area defined by a geofence.
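The geofence behavior described above can be sketched as a small state machine over GPS fixes: the device tracks whether its last fix fell inside the fence and emits an event on each transition. The following Python sketch is illustrative only; the class name, coordinates, and radius are assumptions, not part of the disclosure.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class Geofence:
    """Circular virtual perimeter that reports enter/exit transitions."""

    def __init__(self, lat, lon, radius_m):
        self.lat, self.lon, self.radius_m = lat, lon, radius_m
        self.inside = False  # last known containment state

    def update(self, lat, lon):
        """Return 'enter', 'exit', or None for a new GPS fix."""
        now_inside = haversine_m(lat, lon, self.lat, self.lon) <= self.radius_m
        event = None
        if now_inside and not self.inside:
            event = "enter"   # crossed into the perimeter: notify
        elif not now_inside and self.inside:
            event = "exit"    # crossed out of the perimeter: notify
        self.inside = now_inside
        return event
```

In an LBS, the `enter`/`exit` events would trigger the notification delivery (e.g., to an email account or another smart phone) rather than simply being returned.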
  • It is known to utilize augmented reality in apps running on smart phones.
  • For example, the Aurasma™ augmented reality platform, developed by Hewlett Packard (“HP”)™ of Palo Alto, Calif., can enable a smart phone to recognize real-world images.
  • The real-world images can be overlaid with animations, videos, and 3D models to provide augmented reality experiences.
  • Another known prior art system is “Locast”, developed by the MIT Media Lab. According to their website, Locast can be used to create interactive narratives that are crafted by linking together videos and photos thematically, geographically, and chronologically. These stories can be explored by viewers in a non-linear fashion.
  • The MIT group has developed the Open Locast Web Application, which includes a map-based front-end built upon OpenLayers and the Google™ Maps API, and provides an interface for browsing, searching, and interacting with media content.
  • This group has also developed the Open Locast Android Application, which provides interactive content recording/creation, browsing and searching. It supports content synchronization for offline content capturing, viewing and browsing, allowing for use in locations with limited or no connectivity.
  • a method executed by a data processing device system includes the steps of: storing, in a processor-accessible memory device system communicatively connected to the data processing device system, a user profile associated with a user; storing, in the processor-accessible memory device system, data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations; determining whether or not a current location of a mobile device associated with the user corresponds to one of the plurality of locations related to the plurality of location-specific digital stories; determining, if it is determined that the current location of the mobile device corresponds to a first one of the plurality of locations, and based at least on an analysis of the user profile, that either a first case or a second case exists indicating that the user has or has not, respectively, been presented with at least one of the plurality of location-specific digital stories at a different one of the plurality of locations different than the first one; providing a first digital story of the plurality of location-specific digital stories in response to it being determined that the first case exists; and providing a second digital story of the plurality of location-specific digital stories in response to it being determined that the second case exists.
  • the first digital story may introduce the common theme, and the second digital story may continue the common theme, which was introduced at the different one of the plurality of locations.
  • User profiles for a plurality of users, and the data, may be stored by a network-accessible storage system.
  • the method may include the step of providing general content to the mobile device if it is determined that the current location of the mobile device does not correspond to one of the plurality of locations related to the plurality of location-specific digital stories.
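The branching described above — general content away from the story locations, an introduction of the theme at the first story location the user visits, and a continuation once the theme has been introduced elsewhere — can be sketched as follows. The profile fields, location names, and story labels are hypothetical illustrations, not taken from the disclosure.

```python
# Locations at which location-specific digital stories for the theme exist
# (illustrative names, not from the patent).
STORY_LOCATIONS = {"harbor", "old_mill", "lighthouse"}

def select_story(user_profile, current_location):
    """Pick general content, an introduction story, or a continuation story."""
    if current_location not in STORY_LOCATIONS:
        # Current location does not correspond to any story location.
        return "general_content"
    # Has the user already been presented a theme story at a *different* location?
    seen_elsewhere = any(
        loc != current_location for loc in user_profile.get("presented_at", [])
    )
    # Continue the theme if it was introduced elsewhere; otherwise introduce it here.
    story = "continuation_story" if seen_elsewhere else "introduction_story"
    # Record the presentation in the user profile for later visits.
    user_profile.setdefault("presented_at", []).append(current_location)
    return story
```

A real system would store `presented_at` in the network-accessible user profile so that visits across multiple locations, and multiple devices, share the same history.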
  • the method may include determining a travel direction of the mobile device and providing the first digital story or the second digital story in response to the determined travel direction.
  • the user profile may include at least one user preference, and the method may include selecting the first digital story from a plurality of stored first digital stories in response to the stored user preference.
  • the plurality of stored first digital stories may be related to the first one of the plurality of locations.
  • the user preference may include a language preference, and the method may include accessing the stored user profile to determine the language preference and selecting the first digital story from the plurality of stored first digital stories in response to the determining of the language preference.
  • the user preference includes a demographic group of the user, and the method includes accessing the stored user profile to determine the demographic group and selecting the first digital story from the plurality of stored first digital stories in response to the determining of the demographic group.
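The preference-based selection described above can be illustrated with a minimal lookup over stored story variants. The story table, profile field names, and fallback behavior below are assumptions for the sake of the sketch, not part of the disclosure.

```python
# Hypothetical stored variants of the first digital story, keyed by language.
STORED_FIRST_STORIES = {
    "en": "first_story_en.mp4",
    "fr": "first_story_fr.mp4",
    "de": "first_story_de.mp4",
}

def select_by_language(user_profile, default="en"):
    """Select a first-story variant using the profile's language preference."""
    lang = user_profile.get("language", default)
    # Fall back to the default language when no variant is stored for `lang`.
    return STORED_FIRST_STORIES.get(lang, STORED_FIRST_STORIES[default])
```

The same pattern extends to a demographic-group preference: key the variant table by demographic group (or by a (language, group) pair) instead of by language alone.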
  • the first digital story, the second digital story, or both is or are configured to instruct the user to capture one or more digital images
  • the method includes requesting a photo product related to the common theme.
  • the photo product may be defined to incorporate at least one digital image captured by the user.
  • the method includes suggesting a location for a next digital story in response to answers provided by the user during the first digital story or the second digital story.
  • a mobile device may include a memory device system storing digital story content data; an output device system; a location determination unit configured to determine a geographic location of the mobile device; and a data processing device system communicatively connected to the output device system, the memory device system, and the location determination unit.
  • the memory device system may store program instructions configured to cause the data processing system at least to: store, in the memory device system, data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations; determine whether or not a current location of the mobile device, which is provided by the location determination unit, corresponds to one of the plurality of locations related to the plurality of location-specific digital stories; determine, if it is determined that the current location of the mobile device corresponds to a first one of the plurality of locations, that either a first case or a second case exists indicating that the user has or has not, respectively, been presented with at least one of the plurality of location-specific digital stories at a different one of the plurality of locations different than the first one; acquire, from the memory device system, first digital story content data of the digital story content data and provide the first digital story content data to the output device system in response to it being determined that the first case exists; and acquire, from the memory device system, second digital story content data of the digital story content data and provide the second digital story content data to the output device system in response to it being determined that the second case exists.
  • the output device system may include an image display, a speaker, an audio output jack, or a combination thereof.
  • the first digital story content data, the second digital story content data, or both may include audio content data, and the program instructions may be configured to cause the data processing device system at least to provide music content data to the output device system when the current location of the mobile device does not correspond to one of the plurality of locations.
  • the first digital story content data, the second digital story content data, or both may be configured to instruct the user to capture one or more digital images using the mobile device.
  • At least some of the plurality of location-specific digital stories are associated with particular travel directions, and the program instructions are configured to cause the data processing device system at least to: determine, based at least on input from the location determination unit, a travel direction of the mobile device; and provide the first digital story content data or the second digital story content data in response to the determined travel direction.
  • a system includes a memory device system storing a user profile associated with a user of a mobile device; a network-accessible storage device system storing data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations; a location determination unit configured to determine a geographic location of the mobile device; and a data processing device system.
  • the data processing device system may be configured at least to: determine whether or not a current location of the mobile device, which is provided by the location determination unit, corresponds to one of the plurality of locations related to the plurality of location-specific digital stories; determine, if it is determined that the current location of the mobile device corresponds to a first one of the plurality of locations, and based at least on an analysis of the user profile, that either a first case or a second case exists indicating that the user has or has not, respectively, been presented with at least one of the plurality of location-specific digital stories by the mobile device at a different one of the plurality of locations different than the first one; provide a first digital story of the plurality of location-specific digital stories stored by the network-accessible storage device system to the mobile device in response to it being determined that the first case exists; and provide a second digital story of the plurality of location-specific digital stories stored by the network-accessible storage device system to the mobile device in response to it being determined that the second case exists.
  • the data processing device system is configured to automatically determine whether or not to provide the first digital story using audio data, based on measurements performed by the mobile device.
  • the memory device system which stores the user profile, may also store user profiles for a plurality of users and may be at least part of the network-accessible storage device system.
  • the first digital story, the second digital story, or both may include an augmented reality image of a historical character, and (b) may be configured to cause the mobile device to display the augmented reality image of the historical character along with an image captured by the mobile device.
  • At least some of the plurality of location-specific digital stories are associated with particular travel directions, and the data processing device system is configured at least to: determine a travel direction of the mobile device; and provide the first digital story or the second digital story in response to the determined travel direction.
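One simple way to realize the travel-direction determination above is to compute the compass bearing between two consecutive location fixes and map bearing ranges to direction-specific stories. The thresholds, story names, and two-fix approach below are assumptions for illustration, not the patent's stated method.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from fix 1 to fix 2, in degrees [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def direction_dependent_story(prev_fix, curr_fix):
    """Serve one story to roughly-northbound travellers, another otherwise."""
    b = bearing_deg(*prev_fix, *curr_fix)
    # Bearings near 0°/360° point north; near 180° point south.
    return "northbound_story" if b < 90.0 or b > 270.0 else "southbound_story"
```

A production system would smooth over several fixes (or use the device's velocity output directly) before committing to a direction, since single GPS fixes are noisy.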
  • a computer program product may be provided that comprises program code portions for performing some or all of any of the methods and associated features thereof described herein, when the computer program product is executed by a computer or other computing device or device system.
  • Such a computer program product may be stored on one or more non-transitory computer-readable storage mediums.
  • each of any or all of the computer-readable data storage medium systems described herein is a non-transitory computer-readable data storage medium system including one or more non-transitory computer-readable storage mediums storing one or more programs or program products which configure a data processing device system to execute some or all of one or more of the methods described herein.
  • any or all of the methods and associated features thereof discussed herein may be implemented as all or part of a device system or apparatus.
  • FIG. 1 illustrates a system configured to generate personalized travel experiences, according to some embodiments of the present invention
  • FIG. 2 is a block diagram of a particular implementation of the system of FIG. 1 in accordance with some embodiments of the present invention
  • FIG. 3 is a block diagram of a smart phone, which may be all or part of the system of FIG. 1 , according to some embodiments of the present invention
  • FIG. 4 is a flow diagram depicting steps for providing location-specific digital stories related to a common theme at a plurality of different locations, according to some embodiments of the present invention
  • FIG. 5 is a flow diagram depicting a particular implementation of step 425 in FIG. 4 pertaining to selecting a location specific story, according to some embodiments of the present invention
  • FIG. 6 is an example of a map depicting different locations at which location-specific digital story experiences related to a common theme can be provided, according to some embodiments of the present invention.
  • FIG. 7A depicts an example of a user interface screen for selecting a theme for location-specific stories, according to some embodiments of the present invention
  • FIG. 7B depicts a user interface screen which begins to introduce a theme and character of a story, according to some embodiments of the present invention
  • FIG. 8A depicts a user interface screen for presenting a first location-specific digital story associated with a first theme at a first geographic location, according to some embodiments of the present invention
  • FIG. 8B depicts a user interface screen for presenting a second location-specific digital story associated with the first theme at the first geographic location, according to some embodiments of the present invention
  • FIG. 9 depicts a user interface screen for presenting a location-specific digital story associated with the first theme at a second geographic location, according to some embodiments of the present invention.
  • FIG. 10 depicts a user interface screen for presenting a location-specific digital story associated with the first theme at a third geographic location, according to some embodiments of the present invention
  • FIG. 11 depicts a photo postcard product which includes a user-captured image and pre-stored information, according to some embodiments of the present invention
  • FIG. 12 is a flow diagram depicting steps for providing travel direction-dependent digital stories at a plurality of different locations, according to some embodiments of the present invention.
  • FIG. 13 is an example of a map depicting different locations and travel directions at which travel direction-dependent digital stories can be provided, according to some embodiments of the present invention.
  • FIG. 14 is an example of a map depicting a plurality of adjacent geofences which can be used to determine a travel direction, according to some embodiments of the present invention.
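The adjacent-geofence idea referenced in FIG. 14 can also be sketched directly: the order in which a device enters two side-by-side fences reveals its travel direction without computing a bearing. The fence identifiers and direction labels below are illustrative assumptions.

```python
def direction_from_crossings(crossing_log):
    """Infer travel direction from the order adjacent fences were entered.

    crossing_log: list of fence IDs in the order the device entered them.
    Returns a direction label, or None if no inference is possible.
    """
    if len(crossing_log) < 2:
        return None  # a single crossing cannot establish a direction
    a, b = crossing_log[-2], crossing_log[-1]
    # Hypothetical fences "west" and "east" sit side by side along the route.
    if (a, b) == ("west", "east"):
        return "eastbound"
    if (a, b) == ("east", "west"):
        return "westbound"
    return None
```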
  • the word “or” is used in this disclosure in a non-exclusive sense.
  • the word “set” is intended to mean one or more, and the word “subset” is intended to mean a set having the same or fewer elements than those present in the subset's parent or superset.
  • the phrase “at least” is used herein at times merely to emphasize the possibility that other elements may exist besides those explicitly listed. However, unless otherwise explicitly noted (such as by the use of the term “only”) or required by context, non-usage herein of the phrase “at least” nonetheless includes the possibility that other elements may exist besides those explicitly listed.
  • the phrase, ‘based at least upon A’ includes A as well as the possibility of one or more other additional elements besides A.
  • the phrase, ‘based upon A’ includes A, as well as the possibility of one or more other additional elements besides A.
  • the phrase, ‘based only upon A’ includes only A.
  • the phrase ‘configured at least to A’ includes a configuration to perform A, as well as the possibility of one or more other additional actions besides A.
  • the phrase ‘configured to A’ includes a configuration to perform A, as well as the possibility of one or more other additional actions besides A.
  • the phrase, ‘configured only to A’ means a configuration to perform only A.
  • the word “program” in this disclosure should be interpreted as a set of instructions or modules that may be executed by one or more components in a system, such as a controller system or data processing device system, in order to cause the system to perform one or more operations.
  • the set of instructions or modules may be stored by any kind of memory device, such as those described subsequently with respect to FIG. 1 , FIG. 2 , and FIG. 3 .
  • this disclosure may describe or similarly describe that the instructions or modules of a program are configured to cause the performance of an action.
  • the phrase “configured to” in this context is intended to include at least (a) instructions or modules that are presently in a form executable by one or more data processing devices to cause performance of the action (e.g., in the case where the instructions or modules are in a compiled and unencrypted form ready for execution), and (b) instructions or modules that are presently in a form not executable by the one or more data processing devices, but could be translated into the form executable by the one or more data processing devices to cause performance of the action (e.g., in the case where the instructions or modules are encrypted in a non-executable manner, but through performance of a decryption process, would be translated into a form ready for execution).
  • the word “module” may be defined as a set of instructions.
  • the word “device” and the phrase “device system” both are intended to include one or more physical devices or sub-devices (e.g., pieces of equipment) that interact to perform one or more functions, regardless of whether such devices or sub-devices are located within a same housing or different housings.
  • a “device” may equivalently be referred to as a “device system”.
  • the phrase “in response to” may be used in this disclosure.
  • this phrase might be used in the following context, where an event A occurs in response to the occurrence of an event B.
  • such phrase includes, for example, that at least the occurrence of the event B causes or triggers the event A.
  • FIG. 1 schematically illustrates a personalized travel experience generation system 100 , according to some embodiments of the present invention.
  • the system 100 may include a data processing device system 110 , a data input-output device system 120 , and a processor-accessible memory device system 130 .
  • the processor-accessible memory device system 130 and the data input-output device system 120 are communicatively connected to the data processing device system 110 .
  • the data processing device system 110 includes one or more data processing devices that implement or execute, in conjunction with other devices, such as those in the system 100 , methods of various embodiments of the present invention, including the example methods of FIG. 4 , FIG. 5 , and FIG. 12 described herein.
  • Each of the phrases “data processing device”, “data processor”, “processor”, and “computer” and the like is intended to include any data processing device, such as a central processing unit (“CPU”), a desktop computer, a laptop computer, a mainframe computer, a tablet computer such as an iPad™, a personal digital assistant, a cellular phone, a mobile device, a smart phone, or any other device for processing data, managing data, or handling data, whether implemented with electrical, magnetic, optical, biological components, or otherwise.
  • the processor-accessible memory device system 130 includes one or more processor-accessible memory devices configured to store program instructions and other information, including the information and program instructions needed by a data processing device system to execute the methods of various embodiments, including the example methods of FIG. 4 , FIG. 5 , and FIG. 12 described herein.
  • each of the steps illustrated in the example methods of FIG. 4 , FIG. 5 , and FIG. 12 may represent program instructions stored in the processor-accessible memory device system 130 and configured to cause a data processing device system to execute the respective step.
  • the processor-accessible memory device system 130 may be a distributed processor-accessible memory device system including multiple processor-accessible memory devices communicatively connected to the data processing device system 110 via a plurality of computers and/or devices.
  • the processor-accessible memory device system 130 need not be a distributed processor-accessible memory system and, consequently, may include one or more processor-accessible memory devices located within a single data processing device.
  • the phrase “processor-accessible memory” is intended to include any processor-accessible data storage device, whether volatile or nonvolatile, electronic, magnetic, optical, or otherwise, including but not limited to, registers, floppy disks, hard disks, Compact Discs, DVDs, flash memories, ROMs, EEPROMs, and RAMs.
  • each of the phrases “processor-accessible memory” and “processor-accessible memory device” is intended to include or be a processor-accessible (or computer-readable) data storage medium.
  • each of the phrases “processor-accessible memory” and “processor-accessible memory device” is intended to include or be a non-transitory processor-accessible (or computer-readable) data storage medium.
  • the memory device system 130 may be considered to include or be a non-transitory processor-accessible (or computer-readable) data storage medium system.
  • the memory device system 130 may be considered to include or be a non-transitory processor-accessible (or computer-readable) storage medium system.
  • the phrase “communicatively connected” is intended to include any type of connection, whether wired or wireless, between devices, data processors, or programs in which data may be communicated. Further, the phrase “communicatively connected” is intended to include a connection between devices or programs within a single data processor, a connection between devices or programs located in different data processors, and a connection between devices not located in data processors at all.
  • the processor-accessible memory device system 130 is shown separately from the data processing device system 110 and the data input-output device system 120 , one skilled in the art will appreciate that the processor-accessible memory device system 130 may be located completely or partially within the data processing device system 110 or the data input-output device system 120 .
  • data input-output device system 120 is shown separately from the data processing device system 110 and the processor-accessible memory device system 130 , one skilled in the art will appreciate that such system may be located completely or partially within the data processing system 110 or the processor-accessible memory device system 130 , depending upon the contents of the input-output device system 120 . Further still, the data processing device system 110 , the data input-output device system 120 , and the processor-accessible memory device system 130 may be located entirely within the same device or housing or may be separately located, but communicatively connected, among different devices or housings.
  • the system 100 of FIG. 1 may be implemented by a single application-specific integrated circuit (ASIC) in some embodiments.
  • the data input-output device system 120 may include a mouse, a keyboard, a touch screen, a computer, a processor-accessible memory device, a network-interface-card or network-interface circuitry, or any device or combination of devices from which a desired selection, desired information, instructions, or any other data is input to the data processing device system 110 .
  • the data input-output device system 120 may include a user-activatable control system that is responsive to a user action.
  • the data input-output device system 120 may include any suitable interface for receiving a selection, information, instructions, or any other data from other devices or systems described in various ones of the embodiments.
  • the data input-output device system 120 also may include an image generating device system, a display device system, an audio generating device system, an audio transducer, a computer, a processor-accessible memory device, a network-interface-card or network-interface circuitry, or any device or combination of devices to which information, instructions, or any other data is output by the data processing device system 110 .
  • the input-output device system 120 may include any suitable interface for outputting information, instructions, or data to other devices and systems described in various ones of the embodiments. If the input-output device system 120 includes a processor-accessible memory device, such memory device may or may not form part or all of the memory device system 130 .
  • the user interfaces of at least FIGS. 7A, 7B, 8A, 8B, 9, 10, or a combination thereof may be implemented as part of the data input-output device system 120 , according to various ones of some embodiments of the present invention.
  • FIG. 2 is a block diagram of a particular implementation of the system of FIG. 1 in accordance with some embodiments of the present invention.
  • a system 214 for providing location-based digital stories to a plurality of users of mobile devices at a plurality of locations.
  • the phrase digital story relates to, among other things, a telling of a story using any of a variety of digital multimedia types, including digital audio, digital graphics, digital still photographs, and digital video images and animations. It will be understood that a digital story may relate to, for example, travel information, historic information or business information.
  • the phrase digital story experience relates to, among other things, the presentation of a digital story on a device, such as a smart phone or tablet computer, using digital audio, or digital still images including graphics, or digital video images, or a combination of digital audio, digital still images, and digital video images.
  • a first mobile device such as smart phone 300 A located at a first location A
  • a second mobile device such as smart phone 300 B, located at a second location B
  • the cellular provider network 240 provides both voice and data communications using transmission devices located at cell towers throughout a region.
  • the cellular provider network 240 is communicatively connected to a communication network 250 , such as the Internet.
  • each mobile device such as smart phone 300 A
  • the smart phone 300 A can be used to present a digital story to a single user or to a group of users who are viewing a display of the smart phone 300 A, or listening to audio provided by the smart phone 300 A.
  • the user or group of users may be situated in a vehicle such as a car, for example, and the digital story can be provided by the vehicle's audio system using, for example, a Bluetooth™ connection to transmit the audio from the smart phone 300 A to the vehicle's audio system as is well-known in the art.
  • system 214 typically includes many other mobile devices, in addition to smart phone 300 A and smart phone 300 B. It will be understood that the system 214 can include multiple cellular provider networks 240 , for example networks provided by companies such as Verizon, AT&T, and Sprint, which can be communicatively connected to the communication network 250 .
  • System 214 also includes one or more computers 218 which communicate with the communication network 250 and service provider 280 via a communication service provider (CSP) 220 .
  • computer 218 enables remote users, who might not be able to travel to the locations where the location-specific stories are provided, to obtain a virtual experience from their home.
  • the user of one of the computers 218 can use a computer mouse to change their virtual location on a digital map displayed in a window on the display of the computer 218 .
  • the computer 218 can then be used, rather than one of the smart phones 300 A, 300 B, to provide a virtual digital story experience to a remote user, who may be located in another country, for example.
  • the communications network 250 enables communication with a service provider 280 .
  • Service provider 280 includes a web server 282 for interfacing with communications network 250 .
  • web server 282 transfers data to a computer system 286 which manages data associated with various customers and digital story content associated with one or more themes at a plurality of locations.
  • system 214 can include a plurality of service providers 280 , which provide different services and can support different regions of the world.
  • the computer system 286 includes an account manager 284 , which runs software to permit the creation and management of individual customer (e.g. user) accounts, including user profiles, which are stored in customer database 288 .
  • customer database 288 provides a network-accessible storage device system which stores profiles for a plurality of users of mobile devices, such as smart phones 300 A and 300 B.
  • the user profile information stored in customer database 288 can include personal information such as the user's nickname, full name and address, demographic information, and interests.
  • the demographic information in the user profile can include the approximate age of the user, whether the user is male or female, or a language preference of the user, since the user may be visiting from another country.
  • the user profile information stored in customer database 288 can also include billing information such as credit card information, and authorization information that controls access to the information in the customer database by third parties.
  • the user profile information stored in customer database 288 includes data which indicates which digital stories have been experienced by the user, including the theme, location, and the date and time that the digital story was presented to the user, as will be described later in reference to FIG. 4 .
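The profile bookkeeping described above can be sketched as follows. This is an illustrative sketch only, assuming simple field names (nickname, language, presented_stories) that are not drawn from the patent's actual database schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record of one digital story presentation, capturing the
# theme, location, and date/time as described for customer database 288.
@dataclass
class StoryPresentation:
    theme: str
    location: str
    presented_at: datetime

# Hypothetical per-user profile; field names are illustrative assumptions.
@dataclass
class UserProfile:
    nickname: str
    language: str = "en"
    presented_stories: list = field(default_factory=list)

    def record_presentation(self, theme, location, when=None):
        """Log that a digital story was presented to this user."""
        self.presented_stories.append(
            StoryPresentation(theme, location, when or datetime.now()))

    def has_experienced(self, theme):
        """True if any story with this theme was already presented."""
        return any(p.theme == theme for p in self.presented_stories)
```

A profile like this lets the service provider decide, for example, whether a user arriving at a new location should receive an introductory or continuing story for a theme.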
  • the account manager 284 also permits the uploading and management of collections of digital story content data for providing digital story experiences, such as digital audio recordings, digital still images, and digital video images associated with various story themes and locations, which is stored in content database 290 .
  • content database 290 provides an example of a network-accessible storage device system which stores data for providing a plurality of location-specific digital stories related to a common theme at a plurality of locations.
  • computers 218 are used by content curators associated with a plurality of venues, to provide, manage, and update the digital story content associated with location-specific digital stories associated with the venues, which is stored in a content database 290 .
  • users of mobile devices such as smart phones 300 A and 300 B, capture digital images during a digital story experience at one or more locations.
  • the captured digital images are uploaded and stored in the customer database 288 .
  • Content database 290 stores data which identifies the geographic locations associated with location-specific digital stories that can be provided using the system depicted in FIG. 2 .
  • the geographic location data can use, for example, GPS coordinate boundaries of an area, such as a geofence, or object identifying feature points in images captured in an area.
  • the geographic location data can also use one or more identifiers for wireless communications antennas, which are located in the geographic area associated with the location-specific digital story.
  • the content database 290 also stores guidance information, which is used to suggest additional locations for digital story experiences that may be of interest to users, and to guide them to the suggested locations.
  • the guidance information also provides guidance to locations which are likely to be considered to be good “photo spots” by the particular user of one of the smart phones 300 A, 300 B.
  • the guidance information includes at least one image related to the suggested location.
  • the guidance can include a photo of a particular object, along with a map or an audio or text message that provides a general direction, or other clues, for locating the object.
  • the guidance can also include text or graphics which instruct the user to capture an image of their group near the object, and to upload the captured image to the service provider 280 .
  • guidance for suggested digital story experience locations is provided in a manner so as to dynamically alter the experience responsive to user-captured images or other input received from the user during the digital story experience.
  • the digital story experience automatically adapts to a particular user's situation and conditions. For example, an uploaded digital still image captured by a user at one point in the digital story experience can indicate that the user is accompanied by children. This can result in modifications to the digital story experience in order to be more suitable for a younger audience. In another example, an uploaded digital still image captured by a user can indicate that it is raining or snowing. As a result, the digital story experience can be tailored to indoor venues.
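The rule-based adaptation described above can be sketched as a simple mapping from detected image cues to story adjustments. The cue names and adjustment strings below are illustrative assumptions, not the patent's actual logic:

```python
# Hypothetical sketch: cues inferred from a user-uploaded image (e.g., by
# the face detection and scene analysis run on processor 292) steer the
# next segment of the digital story experience.
def adapt_story(cues):
    """Map detected image cues to adjustments of the story experience."""
    adjustments = []
    if cues.get("children_present"):
        # A detected child face suggests a younger audience.
        adjustments.append("use child-friendly narration")
    if cues.get("weather") in ("rain", "snow"):
        # Bad weather suggests routing the story toward indoor venues.
        adjustments.append("prefer indoor venues")
    return adjustments
```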
  • the computer system 286 includes a processor 292 , which can be used to analyze the pixel data of some of the customer images which are uploaded and stored in the customer database 288 .
  • the processor 292 can analyze the pixel data in order to detect faces in one or more customer images using a variety of known face detection algorithms.
  • the face detection algorithm determines the number of faces that can be detected in an image, in order to determine how many people are depicted in the image.
  • the face detection algorithm determines if the detected faces are female faces or male faces.
  • the face detection algorithm determines the approximate ages of the people whose faces have been detected.
  • approximate age relates to categorizing one or more faces into broad, age-related categories. These approximate age categories can include, for example, babies, young children, teens, younger adults, and older adults (i.e. senior citizens).
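Categorizing a numeric age estimate (as a face-analysis algorithm might report it) into the broad categories named above could look like the following sketch. The numeric boundaries are illustrative assumptions:

```python
# Hypothetical mapping from an estimated age in years to the broad,
# age-related categories described above; the cutoffs are assumptions.
def approximate_age_category(estimated_age):
    if estimated_age < 3:
        return "baby"
    if estimated_age < 10:
        return "young child"
    if estimated_age < 20:
        return "teen"
    if estimated_age < 60:
        return "younger adult"
    return "older adult"
```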
  • the processor 292 in the computer system 286 can analyze the pixel data of some of the customer images in order to determine whether one or more landmarks are depicted in the images.
  • image recognition algorithms are used, for example, as part of the Google Goggles™ application (app) for the Android mobile platform, which is available from Google, Mountain View, Calif.
  • the processor 292 in the computer system 286 creates the information needed to provide a unique photo product for a particular user of one of the smart phones 300 A, 300 B by incorporating images captured by the user during one or more digital story experiences with pre-stored information, such as professional images and textual descriptions.
  • This enables a photo product to be automatically created by placing the user-captured images in predetermined locations in the photo product, so that they are associated with the pre-stored information. For example, a first image captured by the user near the Lincoln Memorial in Washington D.C. can be associated with pre-stored information which describes the romance of Abraham Lincoln and provides an image related to his Gettysburg Address speech.
  • a second image, captured by the user near the White House, can be associated with pre-stored information that describes or depicts the current president. This enables a photo product to be automatically produced using the user-captured images at two different locations, along with the pre-stored information associated with the two different locations.
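The template-driven assembly described above, where user-captured images are placed into predetermined slots alongside pre-stored content by matching on capture location, can be sketched as follows. The slot and field names are illustrative assumptions:

```python
# Hypothetical sketch: each template slot names a location and carries
# pre-stored content (e.g., a professional image or textual description);
# user-captured images are matched to slots by capture location.
def assemble_photo_product(template_slots, captured_images):
    """Pair each template slot with a user image captured at its location."""
    pages = []
    for slot in template_slots:
        match = next((img for img in captured_images
                      if img["location"] == slot["location"]), None)
        pages.append({"pre_stored": slot["pre_stored"], "user_image": match})
    return pages
```

A slot with no matching user image (user_image is None) could fall back to pre-stored content alone or be dropped from the finished product.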
  • the processor 292 in the computer system 286 modifies the appearance of one or more of the captured digital images, so that it has a more suitable appearance when incorporated into the photo product.
  • faces in the captured digital image can be detected, and the processor 292 can crop the digital image to enlarge the size of the faces and remove some of the distracting background surrounding the face.
  • captured digital images can be processed by the processor 292 to provide a different image appearance.
  • captured digital images can be processed so that the newly captured images appear to be older photographs, such as daguerreotypes, so that they have a more suitable appearance when positioned in a photo product in association with an image related to the Gettysburg Address.
  • the captured digital images can be processed to provide an image having a different color tint, contrast, or external shape, so that it has a more suitable appearance when positioned in a photo product as part of an advertisement for a product or service.
  • the captured digital images can be processed to provide a cartoon effect or a coloring book effect so that they have a more suitable appearance when positioned in a children's photo product in association with pre-stored cartoons or as part of a page which provides a “coloring book” for a child.
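An "aged photograph" tint like the daguerreotype look mentioned above is commonly approximated with a sepia weighting of each pixel's color channels. The sketch below applies the widely used sepia matrix to a single (R, G, B) pixel; a real implementation would apply it to every pixel of the image:

```python
# Minimal sepia-tone sketch using the common sepia weighting matrix;
# this is one illustrative way to give a newly captured image an
# older-photograph appearance, not the patent's specific algorithm.
def sepia_pixel(r, g, b):
    tr = min(255, int(0.393 * r + 0.769 * g + 0.189 * b))
    tg = min(255, int(0.349 * r + 0.686 * g + 0.168 * b))
    tb = min(255, int(0.272 * r + 0.534 * g + 0.131 * b))
    return tr, tg, tb
```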
  • captured digital images can be processed by the processor 292 to provide a different image appearance in response to the image content of the captured image.
  • the processor 292 can determine the location of multiple faces within the image and automatically crop the captured digital image using different aspect ratios for different captured images in order to produce a more suitable appearance in the photo product.
  • the captured digital images can be processed by the processor 292 to provide a different image appearance in response to the location where the image was captured.
  • the processor 292 can provide a “cartoon” effect for images captured in a particular location, such as images captured in a particular park or playground.
  • the captured digital images can be processed by the processor 292 to provide a different image appearance in response to both the image content of the captured image and the location where the image was captured.
  • the processor 292 can provide a color-based object extraction algorithm (e.g. “green screen” effect) on images captured in a particular location when the processor 292 can determine that a background area of the captured image is a predetermined color (e.g. green).
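A minimal color-based background test of the kind described above classifies a pixel as background when its green channel clearly dominates the red and blue channels. The margin value is an illustrative assumption; production chroma keyers typically work in other color spaces:

```python
# Hypothetical per-pixel "green screen" test: background if green exceeds
# both red and blue by a margin. A full extraction would apply this to
# every pixel to build a mask, then composite the foreground elsewhere.
def is_green_background(r, g, b, margin=50):
    return g > r + margin and g > b + margin
```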
  • the communications network 250 enables communication with a fulfillment provider 270 .
  • the fulfillment provider 270 produces and distributes enhanced photo products.
  • the fulfillment provider 270 includes a fulfillment web server 272 , and a fulfillment computer system 276 that further includes a commerce manager 274 and a fulfillment manager 275 .
  • Fulfillment requests received from service provider 280 are handled by commerce manager 274 initially before handing the requests off to fulfillment manager 275 .
  • Fulfillment manager 275 determines which equipment is used to fulfill the ordered good(s) or services such as a digital printer 278 or a DVD writer 279 .
  • the digital printer 278 represents a range of color hardcopy printers that can produce various photo products, including prints and postcards. The hardcopy prints can be of various sizes, including “poster prints”, and can be sold in frames.
  • the DVD writer 279 can produce CDs or DVDs, for example PictureCDs, having digital still and video images and application software for using the digital images.
  • the photo products are provided to the user of the smart phones 300 A, 300 B, or to a recipient designated by the user of the smart phones 300 A, 300 B.
  • the photo products are provided using a transportation vehicle 268 .
  • the photo products are provided at a retail outlet, for pickup by the user of the smart phones 300 A, 300 B, or by a designated recipient.
  • system 214 also includes one or more kiosk printers 224 which communicate with the communication network 250 and service provider 280 via a communication service provider (CSP) 222 .
  • This enables printed photo products, created by the service provider 280 using digital images captured by smart phones 300 A, 300 B, to be provided at retail establishments.
  • the retail establishments which can be for example gift shops, may be located at or near some of the locations where the location-specific digital story experiences are provided.
  • the user of the smart phones 300 A, 300 B receives the photo product at a discount, or free of charge, in order to encourage the user to enter the store where they will potentially purchase other items.
  • the photo product includes advertising of merchants which are located near the location of the fulfillment provider 270 or the kiosk printer 224 .
  • the service provider 280 or the fulfillment provider 270 can create examples of various photo products that can be provided by the fulfillment provider 270 .
  • the examples can be communicated to the smart phone 300 or the customer computer 218 , where the examples can be displayed to the user.
  • the customer database 288 at the service provider 280 stores user billing information.
  • the billing information can include a payment identifier for the user, such as a charge card number, expiration date, user billing address, or any other suitable identifier.
  • the customer database 288 also provides long-term storage of the uploaded images for some or all of the users.
  • stored user images and digital story content is accessible (e.g., viewable) via the Internet by authorized users.
  • the service provider account manager 284 can communicate with a remote financial institution (not shown) to verify that the payment identifier (e.g., credit card or debit card number) provided by the customer is valid, and to debit the account for the purchase.
  • the price of the photo product can be added to the user's monthly bill paid to the service provider 280 or to their mobile phone operator.
  • the functions of the service provider 280 and the fulfillment provider 270 can be combined, for example, by using a common web server for both web server 282 and web server 272 or by combining the functions of the account manager 284 , the commerce manager 274 , and the fulfillment manager 275 . It will be understood that in some embodiments, the customer database 288 or the content database 290 can be distributed over several computers at the same physical site, or at different sites.
  • any of various combinations of the components of FIG. 2 may form all or part of the various components of FIG. 1 , according to respective various embodiments of the present invention.
  • the system 100 corresponds only to the smart phone 300 A or the smart phone 300 B.
  • the system 100 corresponds to the service provider 280 , where the processor 292 may correspond to the data processing device system 110 , the databases 288 and 290 may be stored in the memory device system 130 , the account manager and web server may be applications stored in the memory device system 130 , and the communication network 250 may interface with the input-output device system 120 .
  • the system 100 corresponds to the smart phone 300 A and the service provider 280 , such that, for example, the CPU of the smart phone 300 A and the processor 292 both form part of the data processing device system 110 .
  • the system 100 corresponds to the fulfillment provider 270 .
  • the system 100 corresponds to the entirety of the system 214 . Accordingly, it can be seen that the present invention is not limited to any particular correspondence configuration between the system of FIG. 1 and the system of FIG. 2 . The same is true with respect to any particular correspondence configuration between the system of FIG. 1 and the system of FIG. 3 , which will now be discussed.
  • FIG. 3 depicts a block diagram of a smart phone 300 used in the system of FIG. 2 , according to some embodiments of the present invention. It will be understood that other types of mobile devices, such as tablet computers and wireless digital cameras, can be used in the system described in reference to FIG. 2 .
  • the smart phone 300 is a portable, battery operated device, small enough to be easily handheld by a user.
  • the smart phone 300 can utilize an operating system such as the iOS operating system developed by Apple Inc., Cupertino, Calif. or the Android mobile platform, developed by Google, Mountain View, Calif.
  • the operating system can be stored in firmware memory 328 and utilized by digital processor 320 (which may, e.g., form at least part of the data processing device system 110 in FIG. 1 ).
  • the smart phone 300 can run applications (i.e. “apps”) which are pre-installed when the smart phone is purchased, or are downloaded from the service provider 280 .
  • the digital processor 320 may use, for example, the Android software stack, which includes a Linux-based operating system, middleware, and applications. This permits additional software applications (“apps”) to be downloaded from the service provider 280 , stored in the firmware memory 328 , and used to provide various functions, including the digital story experiences to be described in reference to FIG. 4 .
  • the smart phone 300 includes a camera module including a lens 304 which focuses light from a scene (not shown) onto an image sensor array 314 of a CMOS image sensor 310 .
  • the image sensor array 314 can provide color image information using the well-known Bayer color filter pattern.
  • the image sensor array 314 is controlled by timing generator 312 , which also controls a flash 302 in order to illuminate the scene when the ambient illumination is low.
  • the image sensor array 314 can have, for example, 2560 columns × 1920 rows of pixels.
  • the smart phone 300 can also capture video clips by summing multiple pixels of the image sensor array 314 together (e.g. summing pixels of the same color within each 4 column × 4 row area of the image sensor array 314 ) to create a lower resolution video image frame.
  • the video image frames are read from the image sensor array 314 at regular intervals, for example using a 30 frame per second readout rate.
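The pixel binning described above can be sketched in simplified form by averaging each 4×4 block of a single-channel frame to produce a lower-resolution video frame. (On a real Bayer sensor, only same-color pixels within the block would be summed; this single-channel version just illustrates the resolution reduction.)

```python
# Simplified, single-channel sketch of 4x4 pixel binning; assumes the
# frame's height and width are divisible by 4.
def bin_4x4(frame):
    """frame: list of rows of pixel values; returns the binned frame."""
    h, w = len(frame), len(frame[0])
    binned = []
    for y in range(0, h, 4):
        row = []
        for x in range(0, w, 4):
            block = [frame[y + dy][x + dx]
                     for dy in range(4) for dx in range(4)]
            row.append(sum(block) // 16)  # average of the 16 pixels
        binned.append(row)
    return binned
```

A 2560×1920 still-capture array binned this way would yield 640×480 video frames, consistent with the 30 frame per second readout described above.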
  • the analog output signals from the image sensor array 314 are amplified and converted to digital data by the analog-to-digital (A/D) converter circuit 316 in the CMOS image sensor 310 .
  • the digital data is stored in a DRAM buffer memory 318 and subsequently processed by a digital processor 320 controlled by the firmware stored in firmware memory 328 , which can be flash EPROM memory.
  • the digital processor 320 includes a real-time clock 324 , which keeps the date and time even when the smart phone 300 and digital processor 320 are in their low power state.
  • the digital processor 320 produces digital images that are stored as digital image files using image/data memory 330 .
  • the phrase “digital image” or “digital image file”, as used herein, refers to any digital image or digital image file, such as a digital still image or a digital video file.
  • the processed digital image files are stored in the image/data memory 330 , along with the date/time that the image was captured provided by the real-time clock 324 and the location information provided by a location determination unit, such as GPS receiver 360 .
  • the digital processor 320 performs color interpolation followed by color and tone correction, in order to produce rendered sRGB image data. In some embodiments, the digital processor 320 can also provide various image sizes selected by the user. In some embodiments, rendered sRGB image data is then JPEG compressed and stored as a JPEG image file in the image/data memory 330 . In some embodiments, the JPEG file uses the so-called “Exif” image format. This format includes an Exif application segment that stores particular image metadata using various TIFF tags. Separate TIFF tags are used to store the date and time the picture was captured and the GPS co-ordinates, as well as other camera settings such as the lens f/number.
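In the Exif GPS tags mentioned above, coordinates are stored as degrees, minutes, and seconds rather than as decimal degrees. The conversion can be sketched as follows; the function names are illustrative, and the sketch assumes non-negative coordinates (Exif stores the hemisphere in separate N/S and E/W reference tags):

```python
# Hypothetical helpers for the degrees/minutes/seconds form used by the
# Exif GPS IFD; a library such as piexif would handle the full tag layout.
def decimal_to_dms(decimal_degrees):
    """Split non-negative decimal degrees into (degrees, minutes, seconds)."""
    degrees = int(decimal_degrees)
    minutes_full = (decimal_degrees - degrees) * 60
    minutes = int(minutes_full)
    seconds = round((minutes_full - minutes) * 60, 4)
    return degrees, minutes, seconds

def dms_to_decimal(degrees, minutes, seconds):
    """Recombine degrees/minutes/seconds into decimal degrees."""
    return degrees + minutes / 60 + seconds / 3600
```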
  • the CMOS sensor 310 is used to capture QR codes or bar codes which are located at a visitor information center or at an experience location.
  • the captured image of the QR code or the bar code can be used, for example, to determine the URL for an app which is downloaded to the smart phone 300 from the service provider 280 in order to implement some or all of the steps which will be described in relation to FIG. 4 , FIG. 5 , and FIG. 12 .
  • the captured image of the QR code or the bar code can be used to initiate the purchase of various products or services of interest to the visitor at an experience location.
  • the digital processor 320 also creates a low-resolution “thumbnail” size image or “screennail” size image, which can be stored in RAM memory 322 and supplied to a color display 332 , which can be, for example, an active matrix LCD or organic light emitting diode (OLED) touch screen display. After images are captured, they can be reviewed on the color LCD image display 332 by using the thumbnail image data.
  • the graphical user interface displayed on the color display 332 is controlled by user controls 334 .
  • the graphical user interface enables the user to control the functions of the smart phone 300 , for example, to make phone calls, to launch and control apps, to capture images, and to send and view text messages, email messages and videos.
  • User controls 334 can include a touch screen overlay on the color display 332 , as well as buttons, keyboard switches, rocker switches, or joysticks.
  • the user controls 334 can include voice recognition or image based gesture recognition.
  • An audio codec 340 connected to the digital processor 320 receives an audio signal from a microphone 342 and provides an audio signal to a speaker 344 and a headphone jack (not shown). These components can be used both for telephone conversations and to record and playback digital audio. The digital audio can be played back as part of a digital story experience, to be described later in reference to FIG. 4 .
  • a vibration device (not shown) can be used to provide a silent (e.g., non-audible) notification of an incoming phone call or message, or to inform a user that they have entered a location, such as a geofence, where a digital story experience can be provided.
  • a digital audio signal can be provided from the digital processor 320 to the wireless modem 350 , which can transmit the digital audio signal over an RF channel 352 using, for example, the well-known Bluetooth protocol.
  • the digital audio signal can be received by a wireless modem in a vehicle audio system (not shown), which can amplify and play the audio using speakers installed in the vehicle. This permits the driver and passengers in the vehicle to listen to the audio that is presented as part of the digital story experience.
  • a memory (which may, e.g., form at least part of the memory device system 130 in FIG. 1 ) in the smart phone 300 can be used to store a variety of music using standard audio files, such as the well-known MP3 audio format, so that the smart phone 300 serves as a music player.
  • music files consistent with the theme of the digital story experience can be automatically downloaded from the service provider 280 and stored in firmware memory 328 . The music files can then be automatically played when the smart phone 300 is not at a digital story experience location, as will be described later in reference to step 890 in FIG. 12 .
  • an MP3 audio file for the song “John Brown's body” can be automatically downloaded when the “Fight for rights” theme is selected, as will be described later in reference to FIG. 6A , since the song “John Brown's body” is consistent with the “fight for rights” theme.
  • the song “(Get your kicks on) Route 66” is consistent with a digital story experience along historic route 66.
  • a dock interface 362 can be used to connect the smart phone 300 to a dock/charger 364 , which is optionally connected to customer computer 218 .
  • the dock/charger 364 can be used to recharge the batteries (not shown) in the smart phone 300 .
  • the dock interface 362 can conform to, for example, the well-known USB interface specification.
  • the interface between the smart phone 300 and the customer computer 218 can be a wireless interface, such as the well-known Bluetooth wireless interface or the well-known 802.11 wireless interface.
  • the dock interface 362 can be used to transfer data for providing a plurality of location-specific digital stories to the camera phone 300 prior to leaving on a vacation trip.
  • the digital processor 320 is communicatively connected to a wireless modem 350 , which enables the digital smart phone 300 to transmit and receive information via an RF channel 352 .
  • the wireless modem 350 communicates over a radio frequency (e.g. wireless) link with the cellular provider network 240 , described earlier in reference to FIG. 2 , which can utilize, for example, a CDMA network, a 3G network, a 4G network, or other wireless communication networks.
  • the wireless modem 350 also communicates using local area wireless interface standards, such as the well-known 802.11 interface standards or the well-known Bluetooth standard.
  • digital processor 320 because it may form at least part of the data processing device system 110 , can be provided using a single programmable processor or by using multiple programmable processors, including one or more digital signal processor (DSP) devices.
  • the digital processor 320 can be provided by custom circuitry (e.g., by one or more custom integrated circuits (ICs) designed specifically for use in smart phones), or by a combination of programmable processor(s) and custom circuits, just like the data processing device system 110 .
  • communicative connections between the digital processor 320 and some or all of the various components shown in FIG. 3 can be made using a common data bus.
  • the connection between the digital processor 320 , the DRAM buffer memory 318 , the image/data memory 330 , and the firmware memory 328 can be made using a common data bus.
  • FIG. 4 is a flow diagram depicting steps for providing location-specific digital story experiences related to a common theme at different locations, according to some embodiments of the present invention.
  • the steps are performed by the service provider 280 in FIG. 2 .
  • some or all of the steps are performed by the smart phone 300 in FIG. 3 .
  • step 400 of FIG. 4 data for a plurality of location-specific digital stories related to at least one common theme is stored on a network-accessible storage device system, such as content database 290 in FIG. 2 .
  • the digital stories are stored in association with GPS information, such as geofences, which indicate the locations where the digital stories are to be presented.
  • a number of variations of the digital story for the same theme are stored for each location. This is done because some users at any particular location will have already experienced digital stories for the same theme at one or more other locations, while other users will experience their first digital story for the theme at this particular location.
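The variant selection implied above, where a user who has already experienced a story for the theme at another location receives a different variation than a first-time user, can be sketched as follows. The variant labels are illustrative assumptions:

```python
# Hypothetical selection of a story variation for a theme at a location,
# based on the (theme, location) presentation history in the user profile.
def select_story_variant(theme, location, user_history):
    """user_history: set of (theme, location) pairs already presented."""
    seen_elsewhere = any(t == theme and loc != location
                         for t, loc in user_history)
    return "continuing" if seen_elsewhere else "introductory"
```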
  • location-specific digital stories are stored for a plurality of different themes in the content database 290 .
  • the plurality of themes for locations in the Rochester, N.Y. region could include digital stories for a first theme related to the “Fight for Rights”, for a second theme related to “Life along the Erie Canal”, and for a third theme related to “Winemaking in the Finger Lakes region”.
  • FIG. 6 is an example of a map 530 depicting different experience locations 531 , 532 , and 533 at which location-specific digital story experiences related to a common theme can be provided, according to some embodiments of the present invention.
  • the common theme is a “fight for rights” theme.
  • the map identifies three locations in upstate New York, including experience location “1” 531 near Seneca Falls, experience location “2” 532 near Skaneateles, and experience location “3” 533 near Cortland.
  • In store user profiles step 405 , profiles are developed for a plurality of users of mobile devices, such as the users of smart phones 300 A, 300 B in FIG. 2 , and stored in customer database 288 .
  • the profiles indicate whether each user has been presented with one or more digital stories at one or more digital story experience locations. If a user has been presented with a digital story, the theme, location, and date/time of each digital story presentation are recorded in their user profile.
  • the user profile includes information derived from responses given by the user during their digital story experiences. For example, the user may have been asked to select a particular character from a plurality of characters that could be used to present stories.
  • the user profile stores data which identifies the user-selected character, so that the same character can automatically be featured in a related digital story experience at another location.
  • the user profile is stored in a memory of the smart phone 300 , such as image/data memory 330 or firmware memory 328 .
  • the current location of the mobile device for a particular user is determined. This can be done, for example, by using the GPS receiver 360 in the smart phone 300 (see FIG. 3 ) to determine the GPS coordinates of the smart phone 300 , and by using the digital processor 320 in the smart phone 300 to communicate the GPS coordinates to the service provider 280 using the wireless modem 350 . It will be understood that in some embodiments, the GPS coordinates of experience locations can be provided by the service provider 280 and stored in a memory of the smart phone 300 (such as image/data memory 330 or firmware memory 328 ) so that the digital processor 320 in the smart phone 300 can determine if the mobile phone 300 is at an experience location.
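Whether the smart phone 300 is inside an experience-location geofence can be checked locally with a great-circle distance test, as sketched below. This assumes circular geofences; the function names are illustrative.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two WGS-84 coordinates.
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    # True if the device's GPS fix falls within the circular geofence.
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m
```

In practice a mobile app would more likely register the geofences with the platform's geofencing API and react to enter/exit events, but the containment test itself reduces to this distance comparison.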
  • if the mobile device is not currently at one of the experience locations, the process proceeds to provide directions step 420 .
  • In provide directions step 420 , directions are provided in order to direct the user to one or more nearby experience locations where a digital story experience can be provided.
  • the map shown in FIG. 6 can be used to provide directions to the user.
  • standard mapping programs, such as Google Maps, already installed on the smart phone 300 can be used to provide directions to the user.
  • images showing a landmark can be used to provide directions to the user.
  • the smart phone 300 provides a menu of themes for location-specific stories that can be provided at nearby experience locations, and the user is permitted to select a specific theme. Once the user selects the specific theme, the user is directed to one or more nearby locations associated with the user-selected theme. For example, the user may select a specific theme using their smart phone 300 before they begin driving their vehicle to a nearby experience location associated with the theme they have selected. As the user drives their vehicle, the smart phone 300 can direct the user to one of the nearby experience locations by displaying a map and providing audio guidance for which roads to take and where to turn.
  • the associated digital story can then be automatically provided, for example by presenting an audio signal which plays over the vehicle's stereo audio system using a Bluetooth connection between the smart phone 300 and the vehicle's stereo audio system.
  • In select location-specific story step 425 , one of a plurality of possible location-specific digital stories is selected and provided to the user's mobile device, for example by transmitting digital data for the selected digital story from the service provider 280 to the user's smart phone 300 .
  • the plurality of possible location specific digital stories is stored in a memory of the smart phone 300 (such as image/data memory 330 or firmware memory 328 ) at an earlier time, and is selected by the digital processor 320 .
  • FIG. 5 is a flow diagram depicting steps for the select location-specific story step 425 in FIG. 4 , according to some embodiments of the present invention.
  • In access user profile step 500 , the stored user profile for the user is read from a memory which stores profiles for a plurality of users of mobile devices, such as customer database 288 in FIG. 2 .
  • the stored user profile provides the user's history concerning stories that the user viewed at earlier times.
  • In viewed earlier story test 505 , a determination is made as to whether the user profile indicates that the user has already viewed a location-specific digital story at another experience location which relates to the theme of a digital story available at the current experience location.
  • In first digital story step 510 , data for a first digital story which can be presented at the user's current location is provided.
  • the data for the first digital story is transmitted from the service provider 280 to the user's mobile device, such as smart phone 300 A.
  • the first location-specific story is automatically initiated when the user reaches the location associated with the story, for example when the user enters the geofence for the particular digital story experience location.
  • the location-specific story is determined by presenting to the user a menu which permits the user to select one of a plurality of first digital stories that can be provided at the current location.
  • the plurality of stories can include stories on the same general theme (e.g. a women's rights theme) which are narrated by different characters, such as a young girl character and an older woman character.
  • the plurality of stories can also include stories having different themes (e.g. a first story having a women's rights theme and a second story related to the Erie Canal).
  • the first location-specific story is selected responsive to demographic information stored in the user profile.
  • the user profile can store the preferred (e.g., native) language of the user, and the user profile can be accessed in order to provide a first digital story which is presented in the preferred language of the user.
  • the user profile can store the approximate age of the user, and the user profile can be accessed in order to provide a first digital story which is appropriate for the age of the user. For example, different first digital stories may be provided to children, teenagers, adults, and senior citizens.
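The FIG. 5 decision (first story for new visitors, continuation story for users who experienced the theme elsewhere), combined with the profile-driven language and age choices just described, might be sketched as a single selection function. All dictionary keys here are illustrative assumptions, not the patent's data model.

```python
def select_story(profile, theme, location_id):
    # Has this user already viewed a story on this theme at a *different*
    # experience location? (viewed earlier story test 505)
    seen_elsewhere = any(
        v["theme"] == theme and v["location_id"] != location_id
        for v in profile.get("viewed", [])
    )
    return {
        "theme": theme,
        "location_id": location_id,
        # 0 = introductory first story, 1 = continuation second story
        "variation": 1 if seen_elsewhere else 0,
        # demographic selections drawn from the stored profile
        "language": profile.get("language", "en"),
        "age_band": profile.get("age_band", "adult"),
    }
```

A first-time visitor with an empty profile would receive the introductory variation in the default language, while a user who followed the theme from another location would receive the continuation in their preferred language.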
  • FIG. 7A depicts an example of a user interface screen for selecting a theme for location-specific stories using color display 332 of smart phone 300 , according to some embodiments of the present invention.
  • Story greeting window 612 includes a user photo 614 , which can be displayed using image data from the stored user profile for the particular user of the smart phone 300 .
  • Story greeting window 612 also includes a message window 616 which asks the user to select one of the stories in story selection window 620 .
  • Story selection window 620 permits the user to select a wine story 624 having wineries as the theme, as depicted using a winery story image 622 .
  • Story selection window 620 also permits the user to select an Amelia story 634 having the fight for rights as the theme, which is told by a historical figure named Amelia Jenks Bloomer, as depicted using Amelia story image 632 .
  • the theme and character of the first digital story are introduced, in order for the user to understand the overall context of the story. For example, if the theme of the story is the women's rights movement, the first digital story can discuss the historic context of the story and provide background information on the character who is narrating the story.
  • FIG. 7B depicts a user interface screen that begins to introduce the theme and character of the Amelia fight for rights story, which is one of the themes that can be selected by the user, using the user interface screen depicted in FIG. 7A .
  • the user interface screen depicted in FIG. 7B includes the Amelia story image 632 , since the story will be told by the historical figure Amelia Jenks Bloomer.
  • the user interface screen depicted in FIG. 7B also includes a story introduction window 640 , which provides text that introduces the theme and the main character of the story, who is named Amelia.
  • the digital story told by Amelia can include still and video images, as well as audio narration, sound effects, and music.
  • the text in story introduction window 640 could be provided as an audio narration.
  • the user interface screen depicted in FIG. 7B also includes a map 642 depicting other locations where the user can view location-specific stories concerning the same theme (e.g. the fight for rights theme).
  • the user interface screen depicted in FIG. 7B also includes a “Follow Amelia” icon 646 which the user can select to continue experiencing Amelia's story, and an “Another theme” icon 648 which the user can select in order to view a menu that permits the user to select another story theme or character.
  • In step 520 , data for a second digital story which can be presented at the user's current location is provided.
  • the data for the second digital story is transmitted from the service provider 280 to the user's mobile device, such as smart phone 300 A.
  • the data for the second digital story is downloaded at an earlier time (e.g. when the data for the first digital story was downloaded at another location) and stored in a memory of the user's mobile device (e.g. image/data memory 330 or firmware memory 328 of the smart phone 300 in FIG. 3 ) so that the data for the second digital story is immediately available when the user moves to the experience location associated with the digital story (i.e. the current location).
  • the theme and character which were used in the first digital story are continued, so that the digital story provided at the current location builds on the story already provided at the earlier experience location or locations.
  • the second digital story can describe how food and materials which were loaded onto the barge at a different location, which was visited by the user at an earlier time, are being unloaded from the same barge at the current location, so that the food and materials can be sold at a store near the current location.
  • In present story step 435 , the location-specific digital story selected in step 425 is presented to the user.
  • the digital story can be presented to the user using a variety of story-telling methods, such as audio stories, text-based stories, video stories, and augmented-reality stories.
  • the user selects a preferred story-telling method from a menu offering a variety of choices, and the selection is stored in their user profile during store user profiles step 405 .
  • the user can select a text based story, rather than a story that includes audio, if they are hearing-impaired or concerned about distracting others.
  • the digital story can be told primarily using audio narration, sound effects, and music. However, still and video images could also be provided, for viewing by passengers in the vehicle.
  • the presentation of a digital story is initiated when the user's mobile device, such as smart phone 300 , enters a geofence and continues even if the mobile device leaves the geofence, until the presentation is completed (e.g. until a complete audio clip has been played). In some other embodiments, the presentation is terminated when the mobile device leaves the geofence.
  • the digital story associated with the geofence can include associated data which indicates whether or not the digital story presentation should continue if the mobile device leaves the geofence.
  • the digital story-telling method can be automatically selected by a processor in system 214 (such as processor 292 at service provider 280 or digital processor 320 in smart phone 300 ) based on measurements performed by one of the mobile devices, such as smart phone 300 .
  • the digital processor 320 in the smart phone 300 can determine the user's speed from measurements made by the GPS receiver 360 . If the user's average speed is greater than a threshold (e.g. greater than 10 miles per hour), the story-telling method can be automatically switched to an audio mode, since the user is likely in a moving vehicle.
  • the digital processor 320 in the smart phone 300 can determine the ambient noise level from measurements made using the mic 342 and audio codec 340 . If the noise level is greater than a threshold (e.g. greater than 90 dB), the story-telling method can be automatically switched to a mode which displays text subtitles, since it may be difficult for the user to hear audio messages.
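The two automatic mode-selection rules above (average speed over 10 miles per hour, ambient noise over 90 dB) could be combined as a simple decision function. The mode names are illustrative assumptions; only the thresholds come from the examples in the text.

```python
def choose_storytelling_mode(speed_mph, noise_db):
    # Speed above the threshold suggests the user is in a moving vehicle,
    # so audio-only presentation is safest.
    if speed_mph > 10:
        return "audio"
    # High ambient noise suggests audio may be inaudible, so fall back
    # to a mode that displays text subtitles.
    if noise_db > 90:
        return "text_subtitles"
    # Otherwise the full audio-visual presentation can be used.
    return "audio_visual"
```

The speed would come from successive GPS receiver 360 fixes and the noise level from samples taken via mic 342 and audio codec 340, as described above.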
  • FIG. 8A depicts a user interface screen for presenting a first location-specific digital story associated with a first theme at a first geographic location, according to some embodiments of the present invention.
  • the user interface screen example shown in FIG. 8A is used to present a first location-specific digital story 660 associated with a particular theme (i.e. the fight for rights) at the Seneca Falls experience location “1” 531 shown in FIG. 6 .
  • the first location-specific story is presented when the user selects the “Follow Amelia” icon 646 in FIG. 7B , and begins by assuming that the user has not yet viewed any digital stories associated with the fight for rights theme.
  • the digital story 660 can be provided using one or more of audio, text, graphics, still images, video images, and augmented-reality images.
  • an augmented reality character is used to present at least a portion of the digital story.
  • augmented reality techniques can be used to cause the mobile device to display one or more augmented reality images of Amelia (or some other historical character in other digital stories) along with the image or images captured by the mobile device to make the statue of Amelia 664 appear to “come to life” on the color display 332 of the user's smart phone 300 or other mobile device.
  • the augmented reality character can be used to narrate the story, or to perform an action described in the story.
  • the augmented reality character can be provided, for example, using the Aurasma augmented reality software provided by HP.
  • Aurasma can be utilized by iPhone and Android apps in order to recognize images, symbols and objects captured by the camera in the smart phone.
  • the captured images can be paired up with overlaid videos, animation, 3D still images or other data sources, known as “Auras”, and displayed to the user.
  • FIG. 8B depicts a user interface screen for presenting a second location-specific digital story associated with the first theme at the first geographic location, according to some embodiments of the present invention.
  • the user interface screen example shown in FIG. 8B is used to present a second location-specific digital story 670 associated with the same theme (i.e. the fight for rights) at the same Seneca Falls experience location “1” 531 shown in FIG. 6 .
  • the second location-specific story provides a continuation of a digital story that the user had viewed at an earlier time at a different experience location (e.g. the Cortland experience location “3” 533 in FIG. 6 ).
  • the second location-specific digital story 670 uses an audio presentation, a video presentation, or a text presentation (which begins “I'm glad you've followed me here to Seneca Falls”) which is different from the first location-specific digital story 660 (which begins “Welcome! Let me begin the story of fighting for rights here in Seneca Falls”) described earlier in reference to FIG. 8A .
  • FIG. 9 depicts a user interface screen for presenting a location-specific digital story associated with the first theme at a second geographic location, according to some embodiments of the present invention.
  • the user interface screen example shown in FIG. 9 is used to present a location-specific digital story 680 associated with the same theme (i.e. the fight for rights) at a second experience location, which is the Skaneateles experience location “2” 532 in FIG. 6 .
  • This location-specific digital story provides a continuation of the digital story that the user began in Seneca Falls experience location “1” 531 .
  • the digital story 680 relates to a house 682 which is historically important, since it was part of the Underground Railroad, which allowed slaves to escape from the United States to Canada.
  • augmented reality techniques can be used to reveal a virtual interior of the house 682 on the color display 332 of the user's smart phone 300 .
  • the virtual interior can be used to help demonstrate the historical importance of the house 682 , by showing rooms where the Fuller Family lived, and revealing secret areas where slaves could be hidden.
  • FIG. 10 depicts a user interface screen for presenting a location-specific digital story associated with the first theme at a third geographic location, according to some embodiments of the present invention.
  • the user interface screen shown in FIG. 10 is used to present a location-specific digital story 690 associated with the same theme (i.e. the fight for rights) at a third experience location (i.e. the Cortland location “3” 533 in FIG. 6 ).
  • the location-specific digital story 690 is presented in Spanish, since the user profile indicates that this user's preferred language is Spanish.
  • when the user positions the camera lens 304 of their smart phone 300 (see FIG. 3 ) towards the sign 692 , the English words on the sign can be recognized.
  • the message on the sign can be read to the user in their preferred (e.g. native) language, using speaker 344 in smart phone 300 (see FIG. 3 ).
  • a virtual reality sign in the user's native language can be displayed on the color display 332 of the user's smart phone 300 .
  • In update user profile step 440 , the user profile is updated based on the user's digital story experience.
  • the user profile can be updated to indicate that the user has been presented with the first digital story of a particular theme (e.g. the fight for rights theme) at the first location (e.g. the Seneca Falls experience location “1” 531 in FIG. 6 ).
  • the user is asked to respond to questions as a digital story is presented (for example, whether they would like to hear more about particular topics), and their answers are used to update their stored user profile (for example, to indicate their interest in the particular topics).
  • the approximate period of time that the user spends in the location where they have experienced the digital story is recorded, to indicate the user's level of interest in the story location.
  • This information can be used to help select future location-specific digital stories, for example by providing shorter or longer versions of a digital story based on whether the user spent a relatively long time, or a relatively short time, experiencing the current digital story or participating in other activities at the location.
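The profile update of step 440 — recording the presented story and the approximate dwell time, then using that dwell time to bias future story-length choices — might look like the following sketch. The profile structure and the 20-minute threshold are illustrative assumptions.

```python
import datetime as dt

def update_profile(profile, theme, location_id, entered_at, exited_at):
    # Record theme, location, date/time, and approximate dwell time
    # for this digital story experience.
    dwell_min = (exited_at - entered_at).total_seconds() / 60.0
    profile.setdefault("viewed", []).append({
        "theme": theme,
        "location_id": location_id,
        "when": entered_at.isoformat(),
        "dwell_minutes": round(dwell_min, 1),
    })
    # Users who linger are offered longer story versions next time;
    # users who move on quickly are offered shorter versions.
    profile["preferred_length"] = "long" if dwell_min >= 20 else "short"
    return profile
```

The entered/exited timestamps would in practice come from geofence enter and exit events on the mobile device.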
  • a suggested next digital story experience location is provided to the user's mobile device, such as smart phone 300 in FIG. 2 .
  • the processor 292 in the computer system 286 at the service provider 280 determines the next suggested experience location, based on the user profile which has been updated in step 440 .
  • the suggested next experience location (after the user has been presented with the digital story at Seneca Falls experience location “1” 531 ) could be the Skaneateles experience location “2” 532 or the Cortland experience location “3” 533 .
  • the user of the smart phone 300 can reject the next suggested experience location.
  • the service provider 280 can determine and suggest an alternative next experience location.
  • the user can be presented with the option of being presented with a digital story on a different theme at the current experience location, or at a nearby experience location.
  • the user could be presented with the option of visiting a nearby experience location associated with the Seneca Wine Trail.
  • directions are provided in order to direct the user to the suggested next experience location where a digital story can be provided.
  • the map shown in FIG. 6 can be used to provide directions to the suggested next experience location.
  • the directions are provided by the character chosen by the user (e.g. Amelia) who can describe, in a historic context, the directions to the suggested next experience location.
  • the account manager 284 and the customer database 288 in the computer system 286 are used to determine user specific information related to the history of the user's interactions with the system, as well as any previously captured or determined information about the user's experience. For example, in a “vacation trip” scenario, the user may be known to be traveling from a starting location (e.g. their home town) to a particular vacation destination. Further, it may be known that the user has already visited several digital story experience locations and is interested in following a route that will take the user closer to their vacation destination.
  • the suggested next experience location can be made based on responses or answers the user conveyed to questions provided by smart phone 300 (e.g., during a digital story). For example, the user can respond to questions about whether they are interested in a next experience related to a different theme, or whether they are interested in visiting specific areas, or are interested in obtaining a meal or lodging in a specific area, or at a specific restaurant or hotel. In some embodiments, the suggested next experience location can be stored and recalled at a later date.
  • the suggested next experience location can be based on ambient conditions, such as the current weather, the time of day, or safety related ambient condition information.
  • ambient condition information (such as whether it is a rainy day) is used to automatically suggest an indoor location from the set of possible next locations.
  • the time of day can be used, in combination with the operating hours of some experience locations, to avoid suggesting locations that may be closed, or otherwise inaccessible, at the time the user is likely to arrive at the location.
  • the suggested experience location can be based on avoiding a severe weather storm in the area, or avoiding any fire, crime, or other safety related incident which may have occurred in the vicinity of one or more experience locations.
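The suggestion logic described in the last several bullets — skip locations the user has already visited, skip locations that will be closed at the likely arrival time, and prefer indoor locations in bad weather — could be sketched as a filter over candidate locations. The field names and the simple opening-hours model are illustrative assumptions.

```python
def suggest_next_location(candidates, visited, arrival_hour, raining):
    # Keep only unvisited locations that will be open when the user arrives.
    open_now = [
        c for c in candidates
        if c["id"] not in visited and c["open_hour"] <= arrival_hour < c["close_hour"]
    ]
    # On rainy days, prefer an indoor location if one is available.
    if raining:
        indoor = [c for c in open_now if c["indoor"]]
        if indoor:
            return indoor[0]
    return open_now[0] if open_now else None
```

A fuller implementation would also weigh distance along the user's travel route and their stated theme interests, as described above.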
  • the user is asked to capture one or more images of themselves, or their group, during present story step 435 .
  • the processor 292 in the computer system 286 at the service provider 280 determines the next possible image capture location based on the result of analyzing the pixel data of one or more of these user captured images. For example, the captured images can be analyzed to determine whether there are any children depicted in the captured digital image.
  • the service provider 280 provides a downloadable software application (“app”) over the communication network 250 to the smart phone 300 , in order to provide the location-based digital stories.
  • the smart phone 300 is one example of a mobile device that includes a memory, such as image/data memory 330 , which serves as a memory for storing digital story content, output devices including a color display 332 and a speaker 344 for outputting digital story content, a GPS receiver 360 which serves as a location determination unit, a digital processor 320 which serves as a data processing system, and a firmware memory 328 which serves as a program memory.
  • the digital processor 320 is communicatively connected to the image/data memory 330 , the color display 332 , the speaker 344 via the audio codec 340 , and the firmware memory 328 .
  • the instructions provided in the app can control the digital processor 320 in order to store data for providing a plurality of location-specific digital stories related to a common theme at a plurality of locations in the image/data memory 330 .
  • the instructions provided in the app can also be used by the digital processor 320 to determine if the current location of the mobile phone 300 , provided by GPS receiver 360 , corresponds to one of the plurality of locations for the location-specific digital stories.
  • the instructions provided in the app can also be used by the digital processor 320 to determine if the user of the mobile device has already viewed one of the plurality of location-specific digital stories at a different one of the plurality of locations.
  • the instructions provided in the app can also be used by the digital processor 320 to read digital story content data for a first digital story from the image/data memory 330 and to provide the first digital story content data to one or more of the output devices in the smart phone 300 , such as color display 332 and speaker 344 , if it is determined that the smart phone 300 has not been used to present one of the plurality of location-specific digital stories at a different one of the plurality of locations.
  • the instructions provided in the app can also be used by the digital processor 320 to read digital story content data for a second digital story from image/data memory 330 and to provide the second digital story content data to one or more of the output devices in the smart phone 300 , if it is determined that the smart phone 300 has been used to present one of the plurality of location-specific digital stories at a different one of the plurality of locations.
  • FIG. 11 depicts a photo postcard product which includes a user-captured image and pre-stored information, according to some embodiments of the present invention.
  • FIG. 11 depicts a photo postcard 700 , which is one type of printed photo product which can be provided by fulfillment provider 270 .
  • the photo postcard 700 includes a title section 705 which includes a historical character image 710 and the title “Pathways Through History”. Title section 705 is provided using pre-stored images and other information stored in content database 290 at the service provider 280 in FIG. 2 .
  • the photo postcard 700 also includes a main image 715 , which was captured by the user during the digital story presentation, as a result of instructions given to the user during the digital story presentation.
  • the main image 715 depicts a modified user character 720 , which includes the user's head but which has been clothed in period clothing using augmented reality techniques.
  • the main image 715 also depicts historical figures 722 from the digital story presentation, who have been added to the main image 715 using augmented reality techniques.
  • the photo postcard 700 also includes an unmodified user photo 730 , along with customized text 735 “Andrea's Fight for Rights”.
  • Customized text 735 provides the user's name, which has been automatically added by using the name or nickname in the user profile.
  • the back view 700 B of the photo postcard 700 includes a postage section 740 .
  • a postal stamp is affixed to the postage section 740 .
  • a custom stamp providing an image associated with the theme of the digital story experience is printed in the postage section 740 , as part of a customized postage stamp.
  • the back view 700 B also includes an address section 745 , which provides the mailing address of the recipient. In some embodiments, the address section 745 is automatically filled out when the user selects a recipient from the address book stored in their smart phone 300 , or from a list of friends' addresses stored as part of their user profile.
  • the back view 700 B of the photo postcard 700 also includes a message section 750 .
  • the text message in the message section 750 is automatically derived from pre-stored information and user responses provided by the user during the digital story presentation.
  • a portion of the text message in the message section 750 is provided by speech to text conversion of the user's spoken comments which have been converted to digital audio signals by mic 342 and audio codec 340 in smart phone 300 , and converted to text by digital processor 320 in smart phone 300 or by processor 292 after the digital audio signals have been uploaded to service provider 280 .
  • one or more user captured images can be modified and composited with pre-stored information.
  • the processor 292 in the computer system 286 can process a user captured image in order to crop out a face of a person depicted in the image, convert the face from a color to a monochrome image, and composite the image of the face into one of a plurality of pre-stored newspaper templates, so that the user captured image appears to be a photograph in a historic newspaper related to the theme of the digital story.
  • the newspaper text can be modified based on text entered by the user of the smart phone 300 . For example, the headline of the newspaper can read “Andrea and Declan fight for rights”.
  • the service provider 280 provides advertisements or coupons specific to the digital story over the communication network 250 to the smart phone 300 .
  • one or more user captured images can be modified and composited with pre-stored information in order to create the advertisements or coupons.
  • a particular advertisement is selected from a plurality of possible advertisements based on various criteria.
  • the criteria can include, for example, demographic information such as the approximate age of the user, as stored in the user profile, or the approximate age of one or more of the persons depicted in the captured digital image. For example, if the captured digital image includes one or more children, the particular advertisement can be for an age-appropriate book or toy related to the theme of the digital story.
  • the criteria can also include travel route related information, so that the advertisements relate to businesses the user is likely to pass on their trip to the next experience location, or to their vacation destination.
  • the criteria can also include weather related information such as the current temperature. For example, on warm days the advertisement can provide an offer related to a discount on an ice cream cone at a first nearby merchant along the travel route, and on cold days the advertisement can provide an offer related to a discount on a hot drink at second nearby merchant.
  • the coupons can be for a limited time period, based on the date and time ambient condition information.
  • the coupons can be customized so that they can only be used by the particular user of the smart phone 300 . This can be done, for example, by including one of the digital images captured by the user as part of the coupon.
  • the processor 292 analyzes metadata associated with the user captured digital images, to determine whether the analyzed images were captured within predetermined areas associated with particular location-specific digital stories.
  • the processor can analyze the pixel data of the user captured digital images to determine if the images also include a particular object (e.g. a certain building, or a certain type of signpost).
  • the processor 292 performs additional analysis on the pixel data of the user captured images, in order to determine the quality of the images. For example, a number of user captured images can be evaluated to select a subset of images which contain the best composition or pose (e.g. the best-looking smile), or which provide the best exposed or focused images.
  • the pre-stored information can include images, graphics, text, or templates. If the photo product to be produced is a digital photo product, such as a video slide show or digital video clip, the pre-stored information can include audio information such as voice narration tracks or music tracks, or video information such as video clips describing the historic site, or video special effects templates or tracks.
  • the user of mobile phone 300 can be the driver or passenger in a vehicle which is driving along a route.
  • the route can be, for example, a scenic or historic route, such as historic Route 66 in California, the scenic Seaway Trail along Lake Ontario in upstate New York, or Routes 5 and 20 along the Finger Lakes in upstate New York.
  • Vehicles can begin their journeys at various points along the route, and can drive in at least two alternate directions (e.g. west to east, or east to west). Therefore, system 214 is designed to provide digital stories which are appropriate for the user's route, no matter where they begin along the route, or which direction they follow.
  • FIG. 12 is a flow diagram depicting steps for providing travel direction-dependent digital stories at a plurality of different locations, according to some embodiments of the present invention.
  • the steps are performed by the service provider 280 in FIG. 2 .
  • some or all of the steps are performed by smart phone 300 in FIG. 3 .
  • data for a plurality of direction-dependent and location-specific digital stories are stored on a network-accessible storage device system, such as content database 290 in FIG. 2 .
  • the digital stories are stored in association with GPS information, such as geofences, which indicate the locations where the digital stories are to be presented as well as one or more travel directions associated with particular digital stories.
  • At least some of the digital stories provide messages that are travel direction-dependent and need to be presented at the appropriate time, and with the appropriate content, for the travel direction.
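The association among a digital story, its geofence, and its travel directions described above can be sketched as a simple record type. Field names here are illustrative assumptions, not taken from the disclosure; an empty direction set models a direction-independent story, such as the downtown history story.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class DigitalStory:
    """Illustrative content-database record for one digital story."""
    story_id: str
    theme: str                # e.g. "Wineries in the Finger Lakes region"
    geofence_id: str          # geofence within which the story is triggered
    directions: FrozenSet[str] = frozenset()  # empty = direction-independent

def stories_for(stories, geofence_id, direction):
    """Stories stored for a geofence, filtered by the current travel direction."""
    return [s for s in stories
            if s.geofence_id == geofence_id
            and (not s.directions or direction in s.directions)]
```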
  • the message “coming up on your right is Naked Dove Brewing Company, the perfect place to learn about beer” is appropriate for east to west travelers, if it is presented starting about ¼ mile east of the Naked Dove Brewing Company location.
  • for west to east travelers, this message is not suitable, and a different message (e.g. “coming up on your left is Naked Dove Brewing Company, the perfect place to learn about beer”) needs to be presented starting at a location which is about ¼ mile west of the Naked Dove Brewing Company location.
  • direction-dependent, location-specific digital stories are stored for a plurality of different themes or categories in the content database 290 .
  • the plurality of themes or categories for users in vehicles driving along Route 5 and 20 could include themes related to the “Fight for Rights” and “Wineries in the Finger Lakes region” and categories such as “Best places to eat” and “Fun stops for kids”.
  • the location-dependent digital stories are stored in a memory of smart phone 300 , such as image/data memory 330 or firmware memory 328 , by downloading an app from the service provider 280 . The app can then be selected by the user of smart phone 300 , and the instructions provided by the app can be executed by digital processor 320 , in order to perform the steps depicted in FIG. 12 .
  • preferences are developed for a plurality of users of mobile devices, such as the users of smart phones 300 A, 300 B in FIG. 2 , and stored in customer database 288 .
  • the preferences can indicate an interest in specific topics, such as history or art, or in specific types of visitor attractions, such as antique shops, wineries or microbreweries.
  • user profiles are also stored, as described earlier in reference to step 405 of FIG. 4 .
  • the user preferences indicate whether the user has already been presented with one or more digital stories. If a user has been presented with a digital story, the theme, location, and date/time can be recorded in their user profile.
  • the user can also indicate a current need, such as the need to locate a relatively nearby restaurant, gas station, or rest room.
  • the current location of the mobile device for a particular user is determined, as was described earlier in reference to step 410 of FIG. 4 . This can be done, for example, by using the GPS receiver 360 in the smart phone 300 (see FIG. 3 ) to determine the GPS coordinates of the smart phone 300 , and by using the digital processor 320 in the smart phone to communicate the GPS coordinates to the service provider 280 using the wireless modem 350 .
  • In experience location test 415 , a determination is made as to whether the user's current location corresponds to one of the plurality of locations at which location-based digital stories can be provided by system 214 (yes to test 415 ) or is outside this plurality of experience locations (no to test 415 ), as described earlier in reference to test 415 in FIG. 4 . This can be tested by determining, for example, if the user's smart phone 300 has entered into the geofence for any of the experience locations.
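The geofence-entry test in this step can be sketched with a standard ray-casting point-in-polygon check. This is an illustrative approximation that treats latitude and longitude as planar coordinates, which is adequate at the city scale of the geofences described here:

```python
def point_in_geofence(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside a polygonal geofence?

    `polygon` is a list of (lat, lon) vertices. Each polygon edge is
    checked for a crossing with a ray cast eastward from the point;
    an odd number of crossings means the point is inside."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        yi, xi = polygon[i]
        yj, xj = polygon[j]
        crosses = (xi > lon) != (xj > lon)
        if crosses and lat < (yj - yi) * (lon - xi) / (xj - xi) + yi:
            inside = not inside
        j = i
    return inside
```

Entering any geofence for which this test returns true would take the "yes" branch of test 415.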
  • FIG. 13 is an example of a map 800 depicting a plurality of different experience locations and travel directions at which travel direction-dependent digital stories can be provided.
  • four different geofences, 810 , 820 , 830 , and 840 are positioned at different locations in the Canandaigua, N.Y. area.
  • Geofence 810 is located in the central downtown area, and is associated with a digital story related to the history of Ontario County, which does not depend on the direction of travel.
  • the other three geofences 820 , 830 , and 840 are associated with direction-dependent digital stories.
  • geofence 840 is associated with different digital stories that correspond to southern travel direction 842 , western travel direction 844 , and eastern travel direction 846 .
  • One particular digital story associated with southern travel direction 842 can relate, for example, to the CMAC performing arts center which the vehicle will soon approach on its left side, or with the Canandaigua Country Club which the vehicle will pass on its right side.
  • One particular digital story associated with western travel direction 844 can relate, for example, to the history of Canandaigua Lake, which the vehicle will soon approach on its left side.
  • One particular digital story associated with eastern travel direction 846 can relate, for example, to festivals or other events which are taking place in Geneva, N.Y., which is the next major community along Route 5 and 20 to the East.
  • the digital stories describing these festivals and other events are stored in association with information defining the particular time period (e.g. dates and times) of these events, and are only presented during the particular time period that the events are taking place.
  • the digital stories describing these events, and the information defining the particular time period of these events, are managed and updated by content curators responsible for the events, using computer 218 in FIG. 3 , so that up-to-date information for the events is stored in content database 290 .
  • Geofence 820 is associated with different digital stories that correspond to northern travel direction 822 , southwestern travel direction 826 , and eastern travel direction 824 .
  • One particular digital story associated with northern travel direction 822 can relate, for example, to a business which matches the user's preferences stored in the user profile, as described earlier in relation to step 855 . For example, if the user has indicated an interest in antiques, the digital story presented to this user can relate to a particular antique shop located along Main Street in downtown Canandaigua. If the vehicle is following southwestern travel direction 826 and the user has indicated a current need for a gas station, the digital story can relate to one or more nearby gas stations that the user will pass while traveling west along Route 5 and 20.
  • One particular digital story associated with eastern travel direction 824 can relate, for example, to the history of Canandaigua Lake, which the vehicle is passing on the right.
  • the particular digital story associated with eastern travel direction 824 can be similar to the digital story associated with western travel direction 844 , described earlier in relation to geofence 840 .
  • Geofence 830 is associated with different digital stories that correspond to northeastern travel direction 832 , southwestern travel direction 836 , and southern travel direction 834 .
  • the particular digital stories associated with northeastern travel direction 832 can include, for example, two or more different digital stories related to the same venue which is located along the current travel route.
  • the stored direction-dependent digital stories for the New York Wine and Culinary Center can include a first story related to the wine tastings offered at the Center, a second story related to the restaurant located at the Center, and a third story related to the rest room facilities located at the Center.
  • One or more of these three digital stories for the New York Wine and Culinary Center can be presented to users in vehicles headed in the northeastern travel direction 832 , depending on the user's preferences and needs, as described earlier in relation to step 855 .
  • One of the digital stories associated with southwestern travel direction 836 can be, for example, a general story describing the history of Route 5 and 20, which started as foot trails established by Native Americans thousands of years ago, and later became part of the transcontinental federal highway between Boston and Newport, Oreg.
  • this particular digital story can be associated with many different geofences along Route 5 and 20, and can be presented only once, when the user's vehicle first enters one of the geofences associated with this general story. This permits the user to learn about the general history of Route 5 and 20 soon after they begin their journey, but during a time when there are no other attractions or traffic stops to interrupt the story.
  • the general story will not be repeated when the user's vehicle enters the other geofences associated with the same general story during the same trip.
  • the digital stories associated with travel direction 834 can include, for example, two different digital stories related to the 1837 Cobblestone Cottage Bed and Breakfast.
  • the first digital story can describe the general history of the 1837 Cobblestone Cottage
  • the second digital story can describe specific accommodations, such as a room type and room rate.
  • the second digital story is presented if the user has indicated a user preference for bed and breakfast types of accommodations and if the 1837 Cobblestone Cottage Bed and Breakfast currently has a vacant guest room. If not, the first digital story is presented.
  • this vacancy information is updated by a content curator responsible for the 1837 Cobblestone Cottage Bed and Breakfast venue, using computer 218 , so that up-to-date information is stored in content database 290 .
  • a digital story could be presented only if the travel direction is determined to be a particular travel direction (e.g. northern travel direction 822 ); otherwise, no digital story would be presented. It will also be understood that in some embodiments, at some experience locations, a particular digital story could be presented only if the travel direction at a specified location prior to entering the experience location was determined to be a particular direction.
  • the process proceeds to provide general content step 890 .
  • the content is music (e.g. mp3 files) previously stored by the user on their smart phone 300 , or provided by a music streaming service such as Pandora™ Internet Radio.
  • the music is muted or paused when digital stories are presented in present story step 435 , and automatically resumed when the digital story presentation is completed.
  • the general content can include a digital map showing the vehicle's current location.
  • standard mapping programs such as Google Maps, already installed on the smart phone 300 can be used to provide general map content.
  • the process proceeds to determine travel direction step 860 .
  • the travel direction is determined by comparing recent GPS readings from GPS receiver 360 in smart phone 300 , in order to determine, for example, whether the travel direction for a vehicle which has entered geofence 820 in FIG. 13 is northern travel direction 822 , southwestern travel direction 826 , or eastern travel direction 824 .
  • the travel direction is determined, at least in part, by determining which other geofences the vehicle has recently exited. For example, if the vehicle has recently exited geofence 830 , the current travel direction cannot be southwestern travel direction 826 .
  • FIG. 14 is an example of a map 880 depicting a plurality of adjacent geofences 870 , 872 , and 874 which can be used to determine a travel direction. For example, if the current location of the mobile device 300 is within the geofence 870 associated with a location-specific digital story, the travel direction can be determined to be an eastern travel direction 876 if the mobile device has recently been located within geofence 874 , which is just west of geofence 870 .
  • the travel direction can be determined to be a western travel direction 878 if the mobile device has very recently been located within geofence 872 , which is just east of geofence 870 .
  • alternatively, if the mobile device has not recently been located within geofence 874 , the travel direction can be determined to be a western travel direction 878 , since the only possible travel directions are eastern travel direction 876 and western travel direction 878 .
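The GPS-comparison approach to determining travel direction can be sketched by computing the initial great-circle bearing between two recent fixes and quantizing it to a compass direction. The four-way quantization is an illustrative simplification of the direction labels used in the figures:

```python
import math

def travel_direction(fix_a, fix_b):
    """Estimate the compass travel direction from two recent GPS fixes,
    each a (lat, lon) pair with fix_b the more recent.

    Computes the initial great-circle bearing from fix_a to fix_b and
    quantizes it into one of 'N', 'E', 'S', 'W'."""
    lat1, lon1 = map(math.radians, fix_a)
    lat2, lon2 = map(math.radians, fix_b)
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    bearing = (math.degrees(math.atan2(x, y)) + 360.0) % 360.0
    return "NESW"[int(((bearing + 45.0) % 360.0) // 90.0)]
```

In the adjacent-geofence variant, the same result can be obtained without a bearing computation by recording which neighboring geofence the device most recently exited.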
  • one of a plurality of possible digital stories is selected based on the travel direction.
  • the digital story can be selected responsive to stored user preferences and current user needs.
  • the selection is performed using the processor 292 at the service provider 280 , and the selected digital story is transferred to the smart phone 300 over the communication network 250 .
  • the selection is performed using the digital processor 320 in smart phone 300 , which selects one of the location-dependent digital stories previously stored in image/data memory 330 or firmware memory 328 .
  • the location-dependent digital stories are stored in image/data memory 330 when an app is downloaded from the service provider 280 to the firmware memory 328 , as described earlier in reference to store direction-dependent stories step 850 .
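The selection step can be sketched as a small ranking over the candidate stories, assuming, as described above, that a declared current need (e.g. a gas station) outranks a mere topical preference match, and that stories already presented to the user are skipped. The dictionary keys are illustrative assumptions:

```python
def select_story(stories, direction, preferences, needs, already_presented):
    """Pick one story for the current geofence entry, or None.

    `stories` is a list of dicts with 'id', 'directions' (a set; empty
    means any direction), 'topic', and optionally 'serves_need'."""
    candidates = [s for s in stories
                  if (not s["directions"] or direction in s["directions"])
                  and s["id"] not in already_presented]
    # A story serving a declared current need wins first.
    for s in candidates:
        if s.get("serves_need") in needs:
            return s
    # Otherwise fall back to a topical preference match.
    for s in candidates:
        if s["topic"] in preferences:
            return s
    return candidates[0] if candidates else None
```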
  • the direction-dependent digital story selected in step 865 is presented to the user.
  • the digital story can be presented to the user using a variety of story-telling methods, such as audio stories, text-based stories, video stories, and augmented-reality stories.
  • the user selects a preferred story-telling method from a menu offering a variety of choices, in store user preferences step 855 . For example, if the driver of the vehicle is interested in the digital story, the digital story can be presented using audio narration, sound effects, and music.
  • the digital story can be presented using text and images, which can be read and viewed by the passenger without disturbing the driver of the vehicle.
  • If the smart phone 300 has been presenting general content, such as music, as described earlier in relation to provide general content step 890 , the general content is stopped, muted, or paused while the story is presented in present story step 435 , and then automatically resumed after the story is presented.
  • the user profile is updated based on the digital story presented to the user, as described earlier in relation to update user profile step 440 of FIG. 4 .
  • the user profile can be updated to indicate that the user has been presented with a specific direction-dependent digital story.
  • the system 214 determines if the user stopped at the venue, and if the user did stop, the system 214 determines the approximate period of time that the user spent at the venue.
  • the GPS receiver 360 in the mobile phone 300 is used in determining if the user stopped at the venue, and the approximate time period of the stop. This can be done, for example, by determining the time period during which the GPS location of the mobile phone 300 was approximately equal to the GPS location of the venue.
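The stop-detection step can be sketched by summing the time during which consecutive GPS samples fall within a small radius of the venue. The 50-meter radius and the flat-earth distance approximation are illustrative assumptions adequate at venue scale:

```python
import math

def dwell_seconds(track, venue, radius_m=50.0):
    """Approximate seconds spent at a venue from a timestamped GPS track.

    `track` is a list of (t_seconds, lat, lon) samples in time order;
    `venue` is a (lat, lon) pair. Time between two consecutive samples
    counts only when both samples are within radius_m of the venue."""
    total, prev_t, prev_in = 0.0, None, False
    for t, lat, lon in track:
        dlat = (lat - venue[0]) * 111320.0  # meters per degree of latitude
        dlon = (lon - venue[1]) * 111320.0 * math.cos(math.radians(venue[0]))
        inside = math.hypot(dlat, dlon) <= radius_m
        if prev_t is not None and prev_in and inside:
            total += t - prev_t
        prev_t, prev_in = t, inside
    return total
```

A nonzero dwell time would indicate that the user stopped at the venue, and the total approximates the duration of the stop.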
  • a computer program product can include one or more non-transitory storage mediums, for example; magnetic storage media such as magnetic disk (such as a floppy disk) or magnetic tape; optical storage media such as optical disk, optical tape, or machine readable bar code; solid-state electronic storage devices such as random access memory (RAM), flash EPROM memory, or read-only memory (ROM); or any other physical device or media employed to store a computer program having instructions for controlling one or more computers to practice any of the methods according to any embodiment of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Disclosed herein are, among other things, systems and methods for providing location-based digital stories to a user of a device system, such as a mobile device. In some embodiments, a user profile associated with the user and data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations may be stored. A processing device system may be configured to determine a current location of the mobile device and to provide a first or second digital story to the mobile device based on an analysis of the stored user profile and the current location.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims priority to U.S. Provisional Application No. 61/804,608, filed Mar. 22, 2013, the entire disclosure of which is hereby incorporated herein by reference.

  • TECHNICAL FIELD
  • Some embodiments of the present invention relate to personalized travel experiences. For example, some embodiments of the present invention relate to mobile devices and systems, as well as related methods, for providing digital story experiences related to a common theme at different geographic locations.

  • BACKGROUND
  • Smart phones, tablet computers, and other portable devices incorporating wireless connections to the Internet have opened up opportunities for new, entertaining tourism experiences. These devices are currently used to provide location-aware travel guides to various cities and historical sites. For example, various smart phone apps provide a guide to restaurants, bars, and nightlife in cities such as Boston and New York. Some of these apps use the smart phone's built-in GPS to provide various maps and lists of venues in order of distance from the user's current location.

  • As another example, Fodor's™ City Apps provides iPhone™ and Android™ apps for a number of major cities, including New York City. The Fodor's apps provide recommendations for sightseeing, restaurants and hotels. Each Fodor's app permits the user to book hotels, restaurants, and entertainment in the particular city, using Expedia™, OpenTable™, and TicketsNow™. It also permits the user to bookmark and create comments about their favorite attractions. The user can download an interactive offline map and reviews, so that the user can browse the map, read reviews, and make notes when in the subway or other areas with poor wireless reception.

  • It is known to provide preference-aware location-based services, as described in the paper titled “Toward context and preference-aware location-based services” authored by Mokbel et al., published in MobiDE '09, Jun. 29, 2009, Providence, R.I., USA. Such systems tailor their services based on the preference and context of each customer. For example, in a restaurant finder application, the system can use the dietary restrictions, price range, other user ratings, current traffic, and current waiting time to recommend nearby restaurants to the customer, rather than recommending all of the closest restaurants.

  • Photography is often used to record and share experiences, such as vacation trips, family outings, or seasonal events. Still and video images of such experiences can be captured using image capture devices including camera phones (such as smart phones), digital still cameras, and camcorders. The digital images captured by these image capture devices can be shared by e-mail and uploaded to web sites such as Facebook™ and Flickr™, where they can be viewed by friends. The uploaded images can be printed using on-line photo service providers, such as Shutterfly™. Users can order photo products, such as photo books and collages, which utilize uploaded digital images.

  • It is known to produce enhanced photo products by combining images captured with connected image capture devices, such as smart phones, and professionally produced digital content related to the area where the photographs were captured, as described in U.S. Pat. No. 8,405,740 titled “Guidance for image capture at different locations”, issued to Nichols, et al.

  • It is known to use a “geofence” to create a virtual perimeter for a real-world geographic area, such as a boundary around a store, school, or other area of interest. When the location-aware device (such as a smart phone) of a location-based service (LBS) user enters or exits a geofence, the device can generate a notification. The notification can be sent to an email account or another smart phone. For example, a parent can be notified when a child leaves an area defined by a geofence.

  • It is known to utilize augmented reality in apps running on smart phones. For example, the Aurasma™ augmented reality platform developed by Hewlett Packard (“HP”)™, Palo Alto, Calif. can enable a smart phone to recognize real world images. The real world images can be overlaid with animations, videos, and 3D models to provide augmented reality experiences.

  • Another known prior art system is “Locast”, developed by the MIT Media Lab. According to their website, Locast can be used to create interactive narratives that are crafted by linking together videos and photos thematically, geographically, and chronologically. These stories can be explored by viewers in a non-linear fashion. This MIT group has developed the Open Locast Web Application, which includes a map-based front-end built upon OpenLayers and the Google™ Maps API, that provides an interface for browsing, searching, and interacting with media content. This group has also developed the Open Locast Android Application, which provides interactive content recording/creation, browsing and searching. It supports content synchronization for offline content capturing, viewing and browsing, allowing for use in locations with limited or no connectivity.

  • However, there is a need in the art for improvements in the above-discussed technologies.

  • SUMMARY
  • At least the above-discussed need is addressed and technical solutions are achieved by various embodiments of the present invention. In some embodiments a method executed by a data processing device system includes the steps of: storing, in a processor-accessible memory device system communicatively connected to the data processing device system, a user profile associated with a user; storing, in the processor-accessible memory device system, data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations; determining whether or not a current location of a mobile device associated with the user corresponds to one of the plurality of locations related to the plurality of location-specific digital stories; determining, if it is determined that the current location of the mobile device corresponds to a first one of the plurality of locations, and based at least on an analysis of the user profile, that either a first case or a second case exists indicating that the user has or has not, respectively, been presented with at least one of the plurality of location-specific digital stories at a different one of the plurality of locations different than the first one; providing a first digital story to the mobile device in response to it being determined that the first case exists; and providing a second digital story to the mobile device in response to it being determined that the second case exists.

  • The first digital story may introduce the common theme, and the second digital story may continue the common theme, which was introduced at the different one of the plurality of locations.

  • User profiles for a plurality of users, and the data, may be stored by a network-accessible storage system.

  • In some embodiments, the method may include the step of providing general content to the mobile device if it is determined that the current location of the mobile device does not correspond to one of the plurality of locations related to the plurality of location-specific digital stories.

  • In some embodiments, at least some of the plurality of location-specific digital stories are associated with particular travel directions, and the method may include determining a travel direction of the mobile device and providing the first digital story or the second digital story in response to the determined travel direction. The user profile may include at least one user preference, and the method may include selecting the first digital story from a plurality of stored first digital stories in response to the stored user preference. The plurality of stored first digital stories may be related to the first one of the plurality of locations. The user preference may include a language preference, and the method may include accessing the stored user profile to determine the language preference and selecting the first digital story from the plurality of stored first digital stories in response to the determining of the language preference. In some embodiments, the user preference includes a demographic group of the user, and the method includes accessing the stored user profile to determine the demographic group and selecting the first digital story from the plurality of stored first digital stories in response to the determining of the demographic group.

  • In some embodiments the first digital story, the second digital story, or both is or are configured to instruct the user to capture one or more digital images, and the method includes requesting a photo product related to the common theme. The photo product may be defined to incorporate at least one digital image captured by the user.

  • In some embodiments, the method includes suggesting a location for a next digital story in response to answers provided by the user during the first digital story or the second digital story.

  • According to some embodiments, a mobile device may include a memory device system storing digital story content data; an output device system; a location determination unit configured to determine a geographic location of the mobile device; and a data processing device system communicatively connected to the output device system, the memory device system, and the location determination unit. The memory device system may store program instructions configured to cause the data processing device system at least to: store, in the memory device system, data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations; determine whether or not a current location of the mobile device, which is provided by the location determination unit, corresponds to one of the plurality of locations related to the plurality of location-specific digital stories; determine, if it is determined that the current location of the mobile device corresponds to a first one of the plurality of locations, that either a first case or a second case exists indicating that the user has or has not, respectively, been presented with at least one of the plurality of location-specific digital stories at a different one of the plurality of locations different than the first one; acquire, from the memory device system, first digital story content data of the digital story content data and provide the first digital story content data to the output device system in response to it being determined that the first case exists; and acquire, from the memory device system, second digital story content data of the digital story content data and provide the second digital story content data to the output device system in response to it being determined that the second case exists.

  • The output device system may include an image display, a speaker, an audio output jack, or a combination thereof. The first digital story content data, the second digital story content data, or both may include audio content data, and the program instructions may be configured to cause the data processing device system at least to provide music content data to the output device system when the current location of the mobile device does not correspond to one of the plurality of locations. The first digital story content data, the second digital story content data, or both may be configured to instruct the user to capture one or more digital images using the mobile device.

  • In some embodiments, at least some of the plurality of location-specific digital stories are associated with particular travel directions, and the program instructions are configured to cause the data processing device system at least to: determine, based at least on input from the location determination unit, a travel direction of the mobile device; and provide the first digital story content data or the second digital story content data in response to the determined travel direction.

  • In some embodiments, a system includes a memory device system storing a user profile associated with a user of a mobile device; a network-accessible storage device system storing data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations; a location determination unit configured to determine a geographic location of the mobile device; and a data processing device system. The data processing device system may be configured at least to: determine whether or not a current location of the mobile device, which is provided by the location determination unit, corresponds to one of the plurality of locations related to the plurality of location-specific digital stories; determine, if it is determined that the current location of the mobile device corresponds to a first one of the plurality of locations, and based at least on an analysis of the user profile, that either a first case or a second case exists indicating that the user has or has not, respectively, been presented with at least one of the plurality of location-specific digital stories by the mobile device at a different one of the plurality of locations different than the first one; provide a first digital story of the plurality of location-specific digital stories stored by the network-accessible storage device system to the mobile device in response to it being determined that the first case exists; and provide a second digital story of the plurality of location-specific digital stories stored by the network-accessible storage device system to the mobile device in response to it being determined that the second case exists.

  • In some embodiments, the data processing device system is configured to automatically determine whether or not to provide the first digital story using audio data, based on measurements performed by the mobile device.

  • The memory device system, which stores the user profile, may also store user profiles for a plurality of users and may be at least part of the network-accessible storage device system.

  • The first digital story, the second digital story, or both, (a) may include an augmented reality image of a historical character, and (b) may be configured to cause the mobile device to display the augmented reality image of the historical character along with an image captured by the mobile device.

  • In some embodiments, at least some of the plurality of location-specific digital stories are associated with particular travel directions, and the data processing device system is configured at least to: determine a travel direction of the mobile device; and provide the first digital story or the second digital story in response to the determined travel direction.
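
One way the travel direction might be determined from the location determination unit's successive fixes is to compute the bearing between two (latitude, longitude) samples; the coarse four-way bucketing below is an illustrative choice, not a requirement of the system.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees
    clockwise from north (standard forward-azimuth formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def travel_direction(prev_fix, curr_fix):
    """Map the bearing between two (lat, lon) fixes to a coarse direction."""
    b = bearing_deg(*prev_fix, *curr_fix)
    names = ["north", "east", "south", "west"]
    return names[int((b + 45) % 360 // 90)]
```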

  • Any of the features of any of the methods discussed herein may be combined with any of the other features of any of the methods discussed herein. In addition, a computer program product may be provided that comprises program code portions for performing some or all of any of the methods and associated features thereof described herein, when the computer program product is executed by a computer or other computing device or device system. Such a computer program product may be stored on one or more non-transitory computer-readable storage mediums.

  • In some embodiments, each of any or all of the computer-readable data storage medium systems described herein is a non-transitory computer-readable data storage medium system including one or more non-transitory computer-readable storage mediums storing one or more programs or program products which configure a data processing device system to execute some or all of one or more of the methods described herein.

  • Further, any or all of the methods and associated features thereof discussed herein may be implemented as all or part of a device system or apparatus.

  • Various systems may include combinations or subsets of all the systems and associated features thereof described herein.

  • BRIEF DESCRIPTION OF THE DRAWINGS
  • It is to be understood that the attached drawings are for purposes of illustrating aspects of various embodiments and may include elements that are not to scale.

  • FIG. 1 illustrates a system configured to generate personalized travel experiences, according to some embodiments of the present invention;

  • FIG. 2 is a block diagram of a particular implementation of the system of FIG. 1 in accordance with some embodiments of the present invention;

  • FIG. 3 is a block diagram of a smart phone, which may be all or part of the system of FIG. 1, according to some embodiments of the present invention;

  • FIG. 4 is a flow diagram depicting steps for providing location-specific digital stories related to a common theme at a plurality of different locations, according to some embodiments of the present invention;

  • FIG. 5 is a flow diagram depicting a particular implementation of step 425 in FIG. 4 pertaining to selecting a location-specific story, according to some embodiments of the present invention;

  • FIG. 6 is an example of a map depicting different locations at which location-specific digital story experiences related to a common theme can be provided, according to some embodiments of the present invention;

  • FIG. 7A depicts an example of a user interface screen for selecting a theme for location-specific stories, according to some embodiments of the present invention;

  • FIG. 7B depicts a user interface screen which begins to introduce a theme and character of a story, according to some embodiments of the present invention;

  • FIG. 8A depicts a user interface screen for presenting a first location-specific digital story associated with a first theme at a first geographic location, according to some embodiments of the present invention;

  • FIG. 8B depicts a user interface screen for presenting a second location-specific digital story associated with the first theme at the first geographic location, according to some embodiments of the present invention;

  • FIG. 9 depicts a user interface screen for presenting a location-specific digital story associated with the first theme at a second geographic location, according to some embodiments of the present invention;

  • FIG. 10 depicts a user interface screen for presenting a location-specific digital story associated with the first theme at a third geographic location, according to some embodiments of the present invention;

  • FIG. 11 depicts a photo postcard product which includes a user-captured image and pre-stored information, according to some embodiments of the present invention;

  • FIG. 12 is a flow diagram depicting steps for providing travel direction-dependent digital stories at a plurality of different locations, according to some embodiments of the present invention;

  • FIG. 13 is an example of a map depicting different locations and travel directions at which travel direction-dependent digital stories can be provided, according to some embodiments of the present invention; and

  • FIG. 14 is an example of a map depicting a plurality of adjacent geofences which can be used to determine a travel direction, according to some embodiments of the present invention.

  • DETAILED DESCRIPTION
  • In the following description, some embodiments of the present invention are described in terms that may be implemented at least in part as one or more software programs configured to be executed by a data processing device system. Some or all of such software programs may be equivalently constructed in hardware. Software and hardware not specifically shown, suggested, or described herein that is useful for implementation of any of various embodiments of the present invention are conventional and within the ordinary skill of the art.

  • In this regard, in the descriptions herein, certain specific details are set forth in order to provide a thorough understanding of various embodiments of the invention. However, one skilled in the art will understand that the invention may be practiced at a more general level without these details. In other instances, well-known structures have not been shown or described in detail to avoid unnecessarily obscuring descriptions of various embodiments of the invention.

  • Reference throughout this specification to “one embodiment” or “an embodiment” or “an example embodiment” or “an illustrated embodiment” or “a particular embodiment” and the like means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “in an example embodiment” or “in this illustrated embodiment” or “in this particular embodiment” and the like in various places throughout this specification are not necessarily all referring to one embodiment or a same embodiment. Furthermore, the particular features, structures or characteristics of different embodiments may be combined in any suitable manner to form one or more other embodiments.

  • Unless otherwise explicitly noted or required by context, the word “or” is used in this disclosure in a non-exclusive sense. In addition, unless otherwise explicitly noted or required by context, the word “set” is intended to mean one or more, and the word “subset” is intended to mean a set having the same or fewer elements of those present in the subset's parent or superset.

  • Further, the phrase “at least” is used herein at times merely to emphasize the possibility that other elements may exist besides those explicitly listed. However, unless otherwise explicitly noted (such as by the use of the term “only”) or required by context, non-usage herein of the phrase “at least” nonetheless includes the possibility that other elements may exist besides those explicitly listed. For example, the phrase, ‘based at least upon A’ includes A as well as the possibility of one or more other additional elements besides A. In the same manner, the phrase, ‘based upon A’ includes A, as well as the possibility of one or more other additional elements besides A. However, the phrase, ‘based only upon A’ includes only A. Similarly, the phrase ‘configured at least to A’ includes a configuration to perform A, as well as the possibility of one or more other additional actions besides A. In the same manner, the phrase ‘configured to A’ includes a configuration to perform A, as well as the possibility of one or more other additional actions besides A. However, the phrase, ‘configured only to A’ means a configuration to perform only A.

  • The term “program” in this disclosure should be interpreted as a set of instructions or modules that may be executed by one or more components in a system, such as a controller system or data processing device system, in order to cause the system to perform one or more operations. The set of instructions or modules may be stored by any kind of memory device, such as those described subsequently with respect to FIG. 1, FIG. 2, and FIG. 3. In addition, this disclosure may describe or similarly describe that the instructions or modules of a program are configured to cause the performance of an action. The phrase “configured to” in this context is intended to include at least (a) instructions or modules that are presently in a form executable by one or more data processing devices to cause performance of the action (e.g., in the case where the instructions or modules are in a compiled and unencrypted form ready for execution), and (b) instructions or modules that are presently in a form not executable by the one or more data processing devices, but could be translated into the form executable by the one or more data processing devices to cause performance of the action (e.g., in the case where the instructions or modules are encrypted in a non-executable manner, but through performance of a decryption process, would be translated into a form ready for execution). The word “module” may be defined as a set of instructions.

  • The word “device” and the phrase “device system” both are intended to include one or more physical devices or sub-devices (e.g., pieces of equipment) that interact to perform one or more functions, regardless of whether such devices or sub-devices are located within a same housing or different housings. In this regard, the word “device”, may equivalently be referred to as a “device system”.

  • Further, the phrase “in response to” may be used in this disclosure. For example, this phrase might be used in the following context, where an event A occurs in response to the occurrence of an event B. In this regard, such phrase includes, for example, that at least the occurrence of the event B causes or triggers the event A.

  • FIG. 1 schematically illustrates a personalized travel experience generation system 100, according to some embodiments of the present invention. The system 100 may include a data processing device system 110, a data input-output device system 120, and a processor-accessible memory device system 130. The processor-accessible memory device system 130 and the data input-output device system 120 are communicatively connected to the data processing device system 110.

  • The data processing device system 110 includes one or more data processing devices that implement or execute, in conjunction with other devices, such as those in the system 100, methods of various embodiments of the present invention, including the example methods of FIG. 4, FIG. 5, and FIG. 12 described herein. Each of the phrases “data processing device”, “data processor”, “processor”, and “computer” and the like is intended to include any data processing device, such as a central processing unit (“CPU”), a desktop computer, a laptop computer, a mainframe computer, a tablet computer such as an iPad™, a personal digital assistant, a cellular phone, a mobile device, a smart phone, or any other device for processing data, managing data, or handling data, whether implemented with electrical, magnetic, optical, biological components, or otherwise. In this regard, while some embodiments of the present invention are described herein in the context of one or more mobile devices, such as a smart phone, the invention is not so limited, and any other data processing device system may be used instead of or in addition to a mobile device.

  • The processor-accessible memory device system 130 includes one or more processor-accessible memory devices configured to store program instructions and other information, including the information and program instructions needed by a data processing device system to execute the methods of various embodiments, including the example methods of FIG. 4, FIG. 5, and FIG. 12 described herein. In this regard, each of the steps illustrated in the example methods of FIG. 4, FIG. 5, and FIG. 12 may represent program instructions stored in the processor-accessible memory device system 130 and configured to cause a data processing device system to execute the respective step. The processor-accessible memory device system 130 may be a distributed processor-accessible memory device system including multiple processor-accessible memory devices communicatively connected to the data processing device system 110 via a plurality of computers and/or devices. On the other hand, the processor-accessible memory device system 130 need not be a distributed processor-accessible memory system and, consequently, may include one or more processor-accessible memory devices located within a single data processing device.

  • Each of the phrases “processor-accessible memory”, “processor-accessible memory device”, and the like is intended to include any processor-accessible data storage device, whether volatile or nonvolatile, electronic, magnetic, optical, or otherwise, including but not limited to, registers, floppy disks, hard disks, Compact Discs, DVDs, flash memories, ROMs, EEPROMs, and RAMs. In some embodiments, each of the phrases “processor-accessible memory” and “processor-accessible memory device” is intended to include or be a processor-accessible (or computer-readable) data storage medium. In some embodiments, each of the phrases “processor-accessible memory” and “processor-accessible memory device” is intended to include or be a non-transitory processor-accessible (or computer-readable) data storage medium. In some embodiments, the memory device system 130 may be considered to include or be a non-transitory processor-accessible (or computer-readable) data storage medium system. And, in some embodiments, the memory device system 130 may be considered to include or be a non-transitory processor-accessible (or computer-readable) storage medium system.

  • The phrase “communicatively connected” is intended to include any type of connection, whether wired or wireless, between devices, data processors, or programs in which data may be communicated. Further, the phrase “communicatively connected” is intended to include a connection between devices or programs within a single data processor, a connection between devices or programs located in different data processors, and a connection between devices not located in data processors at all. In this regard, although the processor-accessible memory device system 130 is shown separately from the data processing device system 110 and the data input-output device system 120, one skilled in the art will appreciate that the processor-accessible memory device system 130 may be located completely or partially within the data processing device system 110 or the data input-output device system 120. Further in this regard, although the data input-output device system 120 is shown separately from the data processing device system 110 and the processor-accessible memory device system 130, one skilled in the art will appreciate that such system may be located completely or partially within the data processing device system 110 or the processor-accessible memory device system 130, depending upon the contents of the input-output device system 120. Further still, the data processing device system 110, the data input-output device system 120, and the processor-accessible memory device system 130 may be located entirely within the same device or housing or may be separately located, but communicatively connected, among different devices or housings. In the case where the data processing device system 110, the data input-output device system 120, and the processor-accessible memory device system 130 are located within the same device, the system 100 of FIG. 1 may be implemented by a single application-specific integrated circuit (ASIC) in some embodiments.

  • The data input-output device system 120 may include a mouse, a keyboard, a touch screen, a computer, a processor-accessible memory device, a network-interface-card or network-interface circuitry, or any device or combination of devices from which a desired selection, desired information, instructions, or any other data is input to the data processing device system 110. The data input-output device system 120 may include a user-activatable control system that is responsive to a user action. The data input-output device system 120 may include any suitable interface for receiving a selection, information, instructions, or any other data from other devices or systems described in various ones of the embodiments.

  • The data input-output device system 120 also may include an image generating device system, a display device system, an audio generating device system, an audio transducer, a computer, a processor-accessible memory device, a network-interface-card or network-interface circuitry, or any device or combination of devices to which information, instructions, or any other data is output by the data processing device system 110. The input-output device system 120 may include any suitable interface for outputting information, instructions, or data to other devices and systems described in various ones of the embodiments. If the input-output device system 120 includes a processor-accessible memory device, such memory device may or may not form part or all of the memory device system 130.

  • The user interfaces of at least FIG. 7A, 7B, 8A, 8B, 9, 10, or a combination thereof may be implemented as part of the data input-output device system 120, according to various embodiments of the present invention.

  • FIG. 2 is a block diagram of a particular implementation of the system of FIG. 1 in accordance with some embodiments of the present invention. In FIG. 2, there is illustrated a system 214 for providing location-based digital stories to a plurality of users of mobile devices at a plurality of locations. As used herein, the phrase “digital story” relates to, among other things, a telling of a story with any of a variety of digital multimedia types, including digital audio, digital graphics images, including digital still photographs, and digital video images and animations. It will be understood that a digital story may relate to, for example, travel information, historic information or business information. As used herein, the phrase “digital story experience” relates to, among other things, the presentation of a digital story on a device, such as a smart phone or tablet computer, using digital audio, or digital still images including graphics, or digital video images, or a combination of digital audio, digital still images, and digital video images.

  • In FIG. 2, a first mobile device, such as smart phone 300A located at a first location A, and a second mobile device, such as smart phone 300B located at a second location B, are communicatively connected with a service provider 280 using a cellular provider network 240. The cellular provider network 240 provides both voice and data communications using transmission devices located at cell towers throughout a region. The cellular provider network 240 is communicatively connected to a communication network 250, such as the Internet.

  • It will be understood that each mobile device, such as smart phone 300A, is typically owned or leased by a particular user. The smart phone 300A can be used to present a digital story to a single user or to a group of users who are viewing a display of the smart phone 300A, or listening to audio provided by the smart phone 300A. The user or group of users may be situated in a vehicle such as a car, for example, and the digital story can be provided by the vehicle's audio system using, for example, a Bluetooth™ connection to transmit the audio from the smart phone 300A to the vehicle's audio system as is well-known in the art.

  • It will be understood that system 214 typically includes many other mobile devices, in addition to smart phone 300A and smart phone 300B. It will be understood that the system 214 can include multiple cellular provider networks 240, for example networks provided by companies such as Verizon, AT&T, and Sprint, which can be communicatively connected to the communication network 250.

  • System 214 also includes one or more computers 218 which communicate with the communication network 250 and service provider 280 via a communication service provider (CSP) 220. In some embodiments, computer 218 enables remote users, who might not be able to travel to the locations where the location-specific stories are provided, to obtain a virtual experience from their home. For example, the user of one of the computers 218 can use a computer mouse to change their virtual location on a digital map displayed in a window on the display of the computer 218. The computer 218 can then be used, rather than one of the smart phones 300A, 300B, to provide a virtual digital story experience to a remote user, who may be located in another country, for example.

  • The communications network 250 enables communication with a service provider 280. Service provider 280 includes a web server 282 for interfacing with communications network 250. In addition to interfacing with communications network 250, web server 282 transfers data to a computer system 286 which manages data associated with various customers and digital story content associated with one or more themes at a plurality of locations.

  • It will be understood that the system 214 can include a plurality of service providers 280, which provide different services and can support different regions of the world.

  • The computer system 286 includes an account manager 284, which runs software to permit the creation and management of individual customer (e.g. user) accounts, including user profiles, which are stored in customer database 288. Thus, customer database 288 provides a network-accessible storage device system which stores profiles for a plurality of users of mobile devices, such as smart phones 300A and 300B. In some embodiments, the user profile information stored in customer database 288 can include personal information such as the user's nickname, full name and address, demographic information, and interests. In some embodiments, the demographic information in the user profile can include the approximate age of the user, whether the user is male or female, or a language preference of the user, since the user may be visiting from another country. In some embodiments, the user profile information stored in customer database 288 can also include billing information such as credit card information, and authorization information that controls access to the information in the customer database by third parties. In some embodiments, the user profile information stored in customer database 288 includes data which indicates which digital stories have been experienced by the user, including the theme, location, and the date and time that the digital story was presented to the user, as will be described later in reference to FIG. 4.
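
The profile contents described above might be modeled as a simple record. Every field name here is a hypothetical illustration of what the customer database 288 could store, not a disclosed schema:

```python
from dataclasses import dataclass, field

@dataclass
class StoryRecord:
    theme: str
    location: str
    presented_at: str  # ISO 8601 date and time of presentation

@dataclass
class UserProfile:
    nickname: str
    full_name: str
    language: str = "en"            # language preference
    age_range: str = ""             # e.g. "adult", "teen"
    interests: list = field(default_factory=list)
    presented_stories: list = field(default_factory=list)  # StoryRecord items

    def has_experienced(self, theme):
        """True if any story of the given theme was already presented."""
        return any(r.theme == theme for r in self.presented_stories)
```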

  • The account manager 284 also permits the uploading and management of collections of digital story content data for providing digital story experiences, such as digital audio recordings, digital still images, and digital video images associated with various story themes and locations, which is stored in content database 290. Thus, content database 290 provides an example of a network-accessible storage device system which stores data for providing a plurality of location-specific digital stories related to a common theme at a plurality of locations. In some embodiments, computers 218 are used by content curators associated with a plurality of venues to provide, manage, and update the digital story content associated with location-specific digital stories associated with the venues, which is stored in the content database 290.

  • In some embodiments, users of mobile devices, such as smart phones 300A and 300B, capture digital images during a digital story experience at one or more locations. In some embodiments, the captured digital images are uploaded and stored in the customer database 288.

  • Content database 290 stores data which identifies the geographic locations associated with location-specific digital stories that can be provided using the system depicted in FIG. 2. The geographic location data can use, for example, GPS coordinate boundaries of an area, such as a geofence, or object-identifying feature points in images captured in an area. The geographic location data can also use one or more identifiers for wireless communications antennas, which are located in the geographic area associated with the location-specific digital story.
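
A circular geofence is one possible encoding of the GPS coordinate boundaries mentioned above. The dict layout and radius field below are assumptions for illustration; the distance check itself is the standard haversine formula:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    r = 6371000.0  # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(fix, fence):
    """True if a (lat, lon) fix lies within a circular geofence given as
    a dict with 'lat', 'lon', and 'radius_m' keys (an assumed encoding)."""
    d = haversine_m(fix[0], fix[1], fence["lat"], fence["lon"])
    return d <= fence["radius_m"]
```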

  • In some embodiments, the content database 290 also stores guidance information, which is used to suggest additional locations for digital story experiences that may be of interest to users, and to guide them to the suggested locations. In some embodiments, the guidance information also provides guidance to locations which are likely to be considered to be good “photo spots” by the particular user of one of the smart phones 300A, 300B. In some embodiments, the guidance information includes at least one image related to the suggested location. For example, the guidance can include a photo of a particular object, along with a map or an audio or text message that provides a general direction, or other clues, for locating the object. In some embodiments, the guidance can also include text or graphics which instruct the user to capture an image of their group near the object, and to upload the captured image to the service provider 280.

  • In some embodiments, guidance for suggested digital story experience locations, or guidance for capturing images at suggested locations, is provided in a manner so as to dynamically alter the experience responsive to user-captured images or other input received from the user during the digital story experience. In this way, the digital story experience automatically adapts to a particular user's situation and conditions. For example, an uploaded digital still image captured by a user at one point in the digital story experience can indicate that the user is accompanied by children. This can result in modifications to the digital story experience in order to make it more suitable for a younger audience. In another example, an uploaded digital still image captured by a user can indicate that it is raining or snowing. As a result, the digital story experience can be tailored to indoor venues.

  • The computer system 286 includes a processor 292, which can be used to analyze the pixel data of some of the customer images which are uploaded and stored in the customer database 288. For example, in some embodiments the processor 292 can analyze the pixel data in order to detect faces in one or more customer images using a variety of known face detection algorithms. In some embodiments, the face detection algorithm determines the number of faces that can be detected in an image, in order to determine how many people are depicted in the image. In some embodiments, the face detection algorithm determines if the detected faces are female faces or male faces. In some embodiments, the face detection algorithm determines the approximate ages of the people whose faces have been detected. It will be understood that the term approximate age, as used herein, relates to categorizing one or more faces into broad, age-related categories. These approximate age categories can include, for example, babies, young children, teens, younger adults, and older adults (i.e. senior citizens).
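
The broad age-related categories described above might be applied to a face detector's numeric age estimates roughly as follows. The boundary ages and the summary fields are illustrative assumptions, not values specified by the system:

```python
# Upper bounds (exclusive) for each broad, age-related category.
AGE_CATEGORIES = [
    (3, "baby"),
    (12, "young child"),
    (19, "teen"),
    (60, "younger adult"),
    (float("inf"), "older adult"),
]

def approximate_age_category(estimated_age_years):
    """Bucket a detector's age estimate into a broad category."""
    for upper, label in AGE_CATEGORIES:
        if estimated_age_years < upper:
            return label

def audience_summary(estimated_ages):
    """Summarize detected faces, e.g. to adapt a story for children."""
    categories = [approximate_age_category(a) for a in estimated_ages]
    return {
        "people": len(categories),
        "has_children": any(c in ("baby", "young child") for c in categories),
    }
```

A summary like this could feed the dynamic story adaptation described earlier, such as switching to content suitable for a younger audience.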

  • In some embodiments, the processor 292 in the computer system 286 can analyze the pixel data of some of the customer images in order to determine whether one or more landmarks are depicted in the images. Such image recognition algorithms are used, for example, as part of the Google Goggles™ Application (APP) for the Android mobile platform, which is available from Google, Mountain View, Calif.

  • In some embodiments, the processor 292 in the computer system 286 creates the information needed to provide a unique photo product for a particular user of one of the smart phones 300A, 300B by incorporating images captured by the user during one or more digital story experiences with pre-stored information, such as professional images and textual descriptions. This enables a photo product to be automatically created by placing the user-captured images in predetermined locations in the photo product, so that they are associated with the pre-stored information. For example, a first image captured by the user near the Lincoln Memorial in Washington D.C. can be associated with pre-stored information which describes the presidency of Abraham Lincoln and provides an image related to his Gettysburg Address speech. A second image, captured by the user near the White House, can be associated with pre-stored information that describes or depicts the current president. This enables a photo product to be automatically produced using the user-captured images at two different locations, along with the pre-stored information associated with the two different locations.

  • In some embodiments, the processor 292 in the computer system 286 modifies the appearance of one or more of the captured digital images, so that it has a more suitable appearance when incorporated into the photo product. In some embodiments, faces in the captured digital image can be detected, and the processor 292 can crop the digital image to enlarge the size of the faces and remove some of the distracting background surrounding the face.
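The face-enlarging crop can be sketched as follows, assuming a face bounding box has already been obtained from a face detector. The margin fraction is an illustrative choice, not a value from this disclosure.

```python
def crop_to_face(image_w, image_h, face_box, margin=0.5):
    """Given a detected face bounding box (x, y, w, h), return a crop
    rectangle (left, top, right, bottom) that enlarges the face and
    trims distracting background, clamped to the image bounds."""
    x, y, w, h = face_box
    pad_x, pad_y = int(w * margin), int(h * margin)
    left = max(0, x - pad_x)
    top = max(0, y - pad_y)
    right = min(image_w, x + w + pad_x)
    bottom = min(image_h, y + h + pad_y)
    return (left, top, right, bottom)
```

Cropping to this rectangle makes the face occupy a larger fraction of the frame while keeping the crop inside the original image.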

  • In some embodiments, captured digital images can be processed by the processor 292 to provide a different image appearance. For example, captured digital images can be processed so that the newly captured images appear to be older photographs, such as daguerreotypes, so that they have a more suitable appearance when positioned in a photo product in association with an image related to the Gettysburg Address. As another example, the captured digital images can be processed to provide an image having a different color tint, contrast, or external shape, so that it has a more suitable appearance when positioned in a photo product as part of an advertisement for a product or service. As another example, the captured digital images can be processed to provide a cartoon effect or a coloring book effect so that they have a more suitable appearance when positioned in a children's photo product in association with pre-stored cartoons or as part of a page which provides a “coloring book” for a child.
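An "older photograph" look of the kind described above is commonly approximated with a sepia tone mapping. The weights below are a widely used sepia approximation, not the patent's own algorithm.

```python
def sepia_pixel(r, g, b):
    """Apply a common sepia weighting to one RGB pixel (0-255 range),
    clamping each channel to 255. An approximation of an aged-photo
    look, not the disclosure's specific method."""
    tr = int(0.393 * r + 0.769 * g + 0.189 * b)
    tg = int(0.349 * r + 0.686 * g + 0.168 * b)
    tb = int(0.272 * r + 0.534 * g + 0.131 * b)
    return (min(tr, 255), min(tg, 255), min(tb, 255))


def apply_sepia(pixels):
    """pixels: list of rows, each row a list of (r, g, b) tuples."""
    return [[sepia_pixel(*p) for p in row] for row in pixels]
```

Tint, contrast, and cartoon effects would follow the same per-pixel (or neighborhood) processing pattern with different transforms.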

  • In some embodiments, captured digital images can be processed by the processor 292 to provide a different image appearance in response to the image content of the captured image. For example, the processor 292 can determine the location of multiple faces within the image and automatically crop the captured digital image using different aspect ratios for different captured images in order to produce a more suitable appearance in the photo product.

  • In some embodiments, the captured digital images can be processed by the processor 292 to provide a different image appearance in response to the location where the image was captured. For example, the processor 292 can provide a “cartoon” effect for images captured in a particular location, such as images captured in a particular park or playground.

  • In some embodiments, the captured digital images can be processed by the processor 292 to provide a different image appearance in response to both the image content of the captured image and the location where the image was captured. For example, the processor 292 can apply a color-based object extraction algorithm (e.g., a “green screen” effect) to images captured in a particular location when the processor 292 determines that a background area of the captured image is a predetermined color (e.g. green).
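The color-based object extraction can be sketched as a per-pixel chroma-key test. The dominance ratio and brightness threshold below are illustrative assumptions; a production implementation would tune them to the lighting at the capture location.

```python
def is_green_background(r, g, b, dominance=1.4, min_green=90):
    """Heuristic chroma-key test: a pixel counts as 'green screen'
    background when green is bright enough and clearly dominates
    both red and blue. Thresholds are illustrative assumptions."""
    return g >= min_green and g > dominance * r and g > dominance * b


def extract_foreground(pixels):
    """Replace detected green-background pixels with None (treated as
    transparent), keeping foreground pixels unchanged."""
    return [[None if is_green_background(*p) else p for p in row]
            for row in pixels]
```

The extracted foreground could then be composited over pre-stored artwork in the photo product.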

  • In some embodiments, the communications network 250 enables communication with a fulfillment provider 270. The fulfillment provider 270 produces and distributes enhanced photo products. The fulfillment provider 270 includes a fulfillment web server 272, and a fulfillment computer system 276 that further includes a commerce manager 274 and a fulfillment manager 275. Fulfillment requests received from service provider 280 are handled by commerce manager 274 initially before handing the requests off to fulfillment manager 275. Fulfillment manager 275 determines which equipment is used to fulfill the ordered good(s) or services, such as a digital printer 278 or a DVD writer 279. The digital printer 278 represents a range of color hardcopy printers that can produce various photo products, including prints and postcards. The hardcopy prints can be of various sizes, including “poster prints”, and can be sold in frames. The DVD writer 279 can produce CDs or DVDs, for example PictureCDs, having digital still and video images and application software for using the digital images.
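The fulfillment manager's equipment selection can be sketched as a simple dispatch table over product types. The product-type names and the mapping below are hypothetical; only the two equipment classes (digital printer 278, DVD writer 279) come from the text.

```python
# Hypothetical catalog mapping product types to fulfillment equipment.
EQUIPMENT_FOR_PRODUCT = {
    "print": "digital_printer_278",
    "postcard": "digital_printer_278",
    "poster": "digital_printer_278",
    "picture_cd": "dvd_writer_279",
    "dvd": "dvd_writer_279",
}


def route_order(order):
    """Sketch of the fulfillment manager's dispatch: pick the equipment
    for each ordered item, rejecting unknown product types."""
    routed = []
    for item in order["items"]:
        equipment = EQUIPMENT_FOR_PRODUCT.get(item["type"])
        if equipment is None:
            raise ValueError(f"unknown product type: {item['type']}")
        routed.append({"item": item, "equipment": equipment})
    return routed
```

The commerce manager 274 would perform its own handling (e.g., payment checks) before handing the order structure to this routing step.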

  • After fulfillment, the photo products are provided to the user of the smart phones 300A, 300B, or to a recipient designated by the user of the smart phones 300A, 300B. In some embodiments, the photo products are provided using a transportation vehicle 268. In other embodiments, the photo products are provided at a retail outlet, for pickup by the user of the smart phones 300A, 300B, or by a designated recipient.

  • In some embodiments, system 214 also includes one or more kiosk printers 224 which communicate with the communication network 250 and service provider 280 via a communication service provider (CSP) 222. This enables printed photo products, created by the service provider 280 using digital images captured by smart phones 300A, 300B, to be provided at retail establishments. The retail establishments, which can be for example gift shops, may be located at or near some of the locations where the location-specific digital story experiences are provided. In some embodiments, the user of the smart phones 300A, 300B receives the photo product at a discount, or free of charge, in order to encourage the user to enter the store where they will potentially purchase other items. In some embodiments, the photo product includes advertising of merchants which are located near the location of the fulfillment provider 270 or the kiosk printer 224.

  • In some embodiments, the service provider 280, or the fulfillment provider 270, can create examples of various photo products that can be provided by the fulfillment provider 270. The examples can be communicated to the smart phone 300 or the customer computer 218, where the examples can be displayed to the user.

  • In some embodiments, the customer database 288 at the service provider 280 stores user billing information. The billing information can include a payment identifier for the user, such as a charge card number, expiration date, user billing address, or any other suitable identifier. In some embodiments, the customer database 288 also provides long-term storage of the uploaded images for some or all of the users. In some embodiments, stored user images and digital story content are accessible (e.g., viewable) via the Internet by authorized users.

  • When a photo product is purchased by the user of the smart phones 300A, 300B, the service provider account manager 284 can communicate with a remote financial institution (not shown) to verify that the payment identifier (e.g., credit card or debit card number) provided by the customer is valid, and to debit the account for the purchase. Alternatively, the price of the photo product can be added to the user's monthly bill paid to the service provider 280 or to their mobile phone operator.

  • It will be understood that in some embodiments, the functions of the service provider 280 and the fulfillment provider 270 can be combined, for example, by using a common web server for both web server 282 and web server 272 or by combining the functions of the account manager 284, the commerce manager 274, and the fulfillment manager 275. It will be understood that in some embodiments, the customer database 288 or the content database 290 can be distributed over several computers at the same physical site, or at different sites.

  • With respect to FIG. 1, any of various combinations of the components of FIG. 2 may form all or part of the various components of FIG. 1, according to respective various embodiments of the present invention. For example, in some embodiments, the system 100 corresponds only to the smart phone 300A or the smart phone 300B. In other embodiments, the system 100 corresponds to the service provider 280, where the processor 292 may correspond to the data processing device system 110, the databases 288 and 290 may be stored in the memory device system 130, the account manager and web server may be applications stored in the memory device system 130, and the communication network 250 may interface with the input-output device system 120. In some embodiments, the system 100 corresponds to the smart phone 300A and the service provider 280, such that, for example, the CPU of the smart phone 300A and the processor 292 both form part of the data processing device system 110. In some embodiments, the system 100 corresponds to the fulfillment provider 270. In some embodiments, the system 100 corresponds to the entirety of the system 214. Accordingly, it can be seen that the present invention is not limited to any particular correspondence configuration between the system of FIG. 1 and the system of FIG. 2. The same is true with respect to any particular correspondence configuration between the system of FIG. 1 and the system of FIG. 3, which will now be discussed.

  • FIG. 3 depicts a block diagram of a smart phone 300 used in the system of FIG. 2, according to some embodiments of the present invention. It will be understood that other types of mobile devices, such as tablet computers and wireless digital cameras, can be used in the system described in reference to FIG. 2.

  • In some embodiments, the smart phone 300 is a portable, battery-operated device, small enough to be easily handheld by a user. The smart phone 300 can utilize an operating system such as the iOS operating system developed by Apple Inc., Cupertino, Calif., or the Android mobile platform, developed by Google, Mountain View, Calif. The operating system can be stored in firmware memory 328 and utilized by digital processor 320 (which may, e.g., form at least part of the data processing device system 110 in FIG. 1). The smart phone 300 can run applications (i.e. “apps”) which are pre-installed when the smart phone is purchased, or are downloaded from the service provider 280. The digital processor 320 may use, for example, the Android software stack, which includes a Linux-based operating system, middleware, and applications. This permits additional software applications (“apps”) to be downloaded from the service provider 280, stored in the firmware memory 328, and used to provide various functions, including the digital story experiences to be described in reference to FIG. 4.

  • The smart phone 300 includes a camera module including a lens 304 which focuses light from a scene (not shown) onto an image sensor array 314 of a CMOS image sensor 310. The image sensor array 314 can provide color image information using the well-known Bayer color filter pattern. The image sensor array 314 is controlled by timing generator 312, which also controls a flash 302 in order to illuminate the scene when the ambient illumination is low. The image sensor array 314 can have, for example, 2560 columns×1920 rows of pixels.

  • The smart phone 300 can also capture video clips by summing multiple pixels of the image sensor array 314 together (e.g. summing pixels of the same color within each 4 column×4 row area of the image sensor array 314) to create a lower resolution video image frame. The video image frames are read from the image sensor array 314 at regular intervals, for example using a 30 frame per second readout rate.
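The same-color summing within each 4 column×4 row area can be sketched for an RGGB Bayer mosaic. Sums are left unnormalized, matching the "summing pixels together" description; the RGGB site layout is an assumption, since the text names only the Bayer pattern generally.

```python
def bin_bayer_4x4(raw):
    """Sum same-color pixels within each 4x4 area of an RGGB Bayer
    mosaic (a 2D list of sensor values), producing one (R, G, B)
    triple per tile. In each 4x4 RGGB tile there are 4 red, 8 green,
    and 4 blue sites, so a 2560x1920 sensor yields a 640x480 frame."""
    tiles = []
    for ty in range(0, len(raw), 4):
        row_out = []
        for tx in range(0, len(raw[0]), 4):
            r = g = b = 0
            for dy in range(4):
                for dx in range(4):
                    v = raw[ty + dy][tx + dx]
                    if dy % 2 == 0 and dx % 2 == 0:
                        r += v          # red sites in RGGB
                    elif dy % 2 == 1 and dx % 2 == 1:
                        b += v          # blue sites
                    else:
                        g += v          # green sites
            row_out.append((r, g, b))
        tiles.append(row_out)
    return tiles
```

On real hardware this binning happens on the sensor or in the readout path; the loop above only illustrates which pixels are combined.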

  • The analog output signals from the image sensor array 314 are amplified and converted to digital data by the analog-to-digital (A/D) converter circuit 316 in the CMOS image sensor 310. The digital data is stored in a DRAM buffer memory 318 and subsequently processed by a digital processor 320 controlled by the firmware stored in firmware memory 328, which can be flash EPROM memory. The digital processor 320 includes a real-time clock 324, which keeps the date and time even when the smart phone 300 and digital processor 320 are in their low power state. The digital processor 320 produces digital images that are stored as digital image files using image/data memory 330. The phrase “digital image” or “digital image file”, as used herein, refers to any digital image or digital image file, such as a digital still image or a digital video file.

  • The processed digital image files are stored in the image/data memory 330, along with the date/time that the image was captured provided by the real-time clock 324 and the location information provided by a location determination unit, such as GPS receiver 360.

  • In some embodiments, the digital processor 320 performs color interpolation followed by color and tone correction, in order to produce rendered sRGB image data. In some embodiments, the digital processor 320 can also provide various image sizes selected by the user. In some embodiments, rendered sRGB image data is then JPEG compressed and stored as a JPEG image file in the image/data memory 330. In some embodiments, the JPEG file uses the so-called “Exif” image format. This format includes an Exif application segment that stores particular image metadata using various TIFF tags. Separate TIFF tags are used to store the date and time the picture was captured and the GPS coordinates, as well as other camera settings such as the lens f/number.
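The GPS TIFF tags in the Exif segment store each coordinate as degrees, minutes, and seconds rationals plus a hemisphere reference letter. A sketch of that conversion follows; keeping seconds to 1/100 precision is an illustrative choice, not a requirement of the format.

```python
from fractions import Fraction


def to_exif_gps(decimal_degrees, is_latitude=True):
    """Convert a signed decimal coordinate into the degrees/minutes/
    seconds rationals and hemisphere reference letter of the kind
    stored in the Exif GPS TIFF tags. Returns ((deg, min, sec), ref)."""
    if is_latitude:
        ref = "N" if decimal_degrees >= 0 else "S"
    else:
        ref = "E" if decimal_degrees >= 0 else "W"
    value = abs(decimal_degrees)
    degrees = int(value)
    minutes_full = (value - degrees) * 60
    minutes = int(minutes_full)
    # Keep seconds as a rational with 1/100 precision (illustrative).
    seconds = Fraction(round((minutes_full - minutes) * 60 * 100), 100)
    return (Fraction(degrees), Fraction(minutes), seconds), ref
```

A metadata writer would then serialize these rationals into the GPSLatitude/GPSLongitude tags alongside the date/time tag.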

  • In some embodiments, the CMOS image sensor 310 is used to capture QR codes or bar codes which are located at a visitor information center or at an experience location. In some embodiments, the captured image of the QR code or the bar code can be used, for example, to determine the URL for an app which is downloaded to the smart phone 300 from the service provider 280 in order to implement some or all of the steps which will be described in relation to FIG. 4, FIG. 5, and FIG. 12. In some embodiments, the captured image of the QR code or the bar code can be used to initiate the purchase of various products or services of interest to the visitor at an experience location.

  • In some embodiments, the digital processor 320 also creates a low-resolution “thumbnail” size image or “screennail” size image, which can be stored in RAM memory 322 and supplied to a color display 332, which can be, for example, an active matrix LCD or organic light emitting diode (OLED) touch screen display. After images are captured, they can be reviewed on the color display 332 by using the thumbnail image data.

  • The graphical user interface displayed on the color display 332 is controlled by user controls 334. The graphical user interface enables the user to control the functions of the smart phone 300, for example, to make phone calls, to launch and control apps, to capture images, and to send and view text messages, email messages and videos. User controls 334 can include a touch screen overlay on the color display 332, as well as buttons, keyboard switches, rocker switches, or joysticks. In some embodiments, the user controls 334 can include voice recognition or image based gesture recognition.

  • An audio codec 340 connected to the digital processor 320 receives an audio signal from a microphone 342 and provides an audio signal to a speaker 344 and a headphone jack (not shown). These components can be used both for telephone conversations and to record and playback digital audio. The digital audio can be played back as part of a digital story experience, to be described later in reference to FIG. 4. In addition, a vibration device (not shown) can be used to provide a silent (e.g., non-audible) notification of an incoming phone call or message, or to inform a user that they have entered a location, such as a geofence, where a digital story experience can be provided.

  • In some embodiments, a digital audio signal can be provided from the digital processor 320 to the wireless modem 350, which can transmit the digital audio signal over an RF channel 352 using, for example, the well-known Bluetooth protocol. The digital audio signal can be received by a wireless modem in a vehicle audio system (not shown), which can amplify and play the audio using speakers installed in the vehicle. This permits the driver and passengers in the vehicle to listen to the audio that is presented as part of the digital story experience.

  • In some embodiments, a memory (which may, e.g., form at least part of the memory device system 130 in FIG. 1) in the smart phone 300, such as firmware memory 328, can be used to store a variety of music using standard audio files, such as the well-known MP3 audio format, so that the smart phone 300 serves as a music player. In some embodiments, music files consistent with the theme of the digital story experience can be automatically downloaded from the service provider 280 and stored in firmware memory 328. The music files can then be automatically played when the smart phone 300 is not at a digital story experience location, as will be described later in reference to step 890 in FIG. 12. For example, an MP3 audio file for the song “John Brown's Body” can be automatically downloaded when the “Fight for Rights” theme is selected, as will be described later in reference to FIG. 6A, since the song “John Brown's Body” is consistent with the “Fight for Rights” theme. As another example, the song “(Get Your Kicks on) Route 66” is consistent with a digital story experience along historic Route 66.

  • A dock interface 362 can be used to connect the smart phone 300 to a dock/charger 364, which is optionally connected to customer computer 218. The dock/charger 364 can be used to recharge the batteries (not shown) in the smart phone 300. The dock interface 362 can conform to, for example, the well-known USB interface specification. Alternatively, the interface between the smart phone 300 and the customer computer 218 can be a wireless interface, such as the well-known Bluetooth wireless interface or the well-known 802.11 wireless interface. In some embodiments, the dock interface 362 can be used to transfer data for providing a plurality of location-specific digital stories to the smart phone 300 prior to leaving on a vacation trip.

  • The digital processor 320 is communicatively connected to a wireless modem 350, which enables the smart phone 300 to transmit and receive information via an RF channel 352. The wireless modem 350 communicates over a radio frequency (e.g. wireless) link with the cellular provider network 240, described earlier in reference to FIG. 2, which can utilize, for example, a CDMA network, a 3G or 4G network, or other wireless communication networks. In some embodiments, the wireless modem 350 also communicates using local area wireless interface standards, such as the well-known 802.11 interface standards or the well-known Bluetooth standard.

  • It will be understood that the functions of digital processor 320, because it may form at least part of the data processing device system 110, can be provided using a single programmable processor or by using multiple programmable processors, including one or more digital signal processor (DSP) devices. Alternatively, the digital processor 320 can be provided by custom circuitry (e.g., by one or more custom integrated circuits (ICs) designed specifically for use in smart phones), or by a combination of programmable processor(s) and custom circuits, just like the data processing device system 110. It will be understood that communicative connections between the digital processor 320 and some or all of the various components shown in FIG. 3 can be made using a common data bus. For example, in some embodiments the connection between the digital processor 320, the DRAM buffer memory 318, the image/data memory 330, and the firmware memory 328 can be made using a common data bus.

  • FIG. 4 is a flow diagram depicting steps for providing location-specific digital story experiences related to a common theme at different locations, according to some embodiments of the present invention. In some embodiments, the steps are performed by the service provider 280 in FIG. 2. In other embodiments, some or all of the steps are performed by the smart phone 300 in FIG. 3.

  • In store location specific stories step 400 of FIG. 4, data for a plurality of location-specific digital stories related to at least one common theme is stored on a network-accessible storage device system, such as content database 290 in FIG. 2. In some embodiments, the digital stories are stored in association with GPS information, such as geofences, which indicate the locations where the digital stories are to be presented. A number of variations of the digital story for the same theme are stored for each location. This is done because some users at any particular location will have already experienced digital stories for the same theme at one or more other locations, while other users will experience their first digital story for the theme at this particular location.

  • In some embodiments, location-specific digital stories are stored for a plurality of different themes in the content database 290. For example, the plurality of themes for locations in the Rochester, N.Y. region could include digital stories for a first theme related to the “Fight for Rights”, for a second theme related to “Life along the Erie Canal”, and for a third theme related to “Winemaking in the Finger Lakes region”.

  • FIG. 6 is an example of a map 530 depicting different experience locations 531, 532, and 533 at which location-specific digital story experiences related to a common theme can be provided, according to some embodiments of the present invention. In this example, the common theme is a “fight for rights” theme. The map identifies three locations in upstate New York, including experience location “1” 531 near Seneca Falls, experience location “2” 532 near Skaneateles, and experience location “3” 533 near Cortland.

  • In store user profiles step 405 of FIG. 4, profiles are developed for a plurality of users of mobile devices, such as the users of smart phones 300A, 300B in FIG. 2, and stored in customer database 288. The profiles indicate whether each user has been presented with one or more digital stories at one or more digital story experience locations. If a user has been presented with a digital story, the theme, location, and date/time of each digital story presentation are recorded in their user profile.
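The profile bookkeeping of step 405 can be sketched as follows. The record field names are illustrative assumptions, not taken from this disclosure; the stored facts (theme, location, date/time, user-selected character) come from the text.

```python
from datetime import datetime, timezone


def new_profile(user_id):
    """Minimal sketch of a customer-database profile record."""
    return {"user_id": user_id, "presentations": [],
            "selected_character": None}


def record_presentation(profile, theme, location, when=None):
    """Record that a digital story was presented, storing its theme,
    location, and date/time as described for the user profile."""
    profile["presentations"].append({
        "theme": theme,
        "location": location,
        "datetime": (when or datetime.now(timezone.utc)).isoformat(),
    })


def has_seen_theme(profile, theme):
    """Has this user already viewed a story on the given theme?"""
    return any(p["theme"] == theme for p in profile["presentations"])
```

The `has_seen_theme` check corresponds to the viewed earlier story test described later in reference to FIG. 5.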

  • In some embodiments, the user profile includes information derived from responses given by the user during their digital story experiences. For example, the user may have been asked to select a particular character from a plurality of characters that could be used to present stories. The user profile stores data which identifies the user-selected character, so that the same character can automatically be featured in a related digital story experience at another location.

  • In some embodiments, the user profile is stored in a memory of the smart phone 300, such as image/data memory 330 or firmware memory 328.

  • In determine user location step 410, the current location of the mobile device for a particular user is determined. This can be done, for example, by using the GPS receiver 360 in the smart phone 300 (see FIG. 3) to determine the GPS coordinates of the smart phone 300, and by using the digital processor 320 in the smart phone 300 to communicate the GPS coordinates to the service provider 280 using the wireless modem 350. It will be understood that in some embodiments, the GPS coordinates of experience locations can be provided by the service provider 280 and stored in a memory of the smart phone 300 (such as image/data memory 330 or firmware memory 328) so that the digital processor 320 in the smart phone 300 can determine if the smart phone 300 is at an experience location.

  • In at an experience location test 415, a determination is made as to whether the user's current location corresponds to one of the plurality of locations at which location-based digital story experiences can be provided by system 214 (yes to test 415) or is outside this plurality of locations (no to test 415). This can be tested by determining, for example, if the user's smart phone 300 has entered into the geofence for a particular experience location. In some embodiments, this determination is made by service provider 280 using processor 292. In some embodiments, this determination is made by smart phone 300 using digital processor 320.
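A circular geofence membership test can be sketched with the haversine formula. The disclosure does not fix the fence shape or radius, so the circular shape and the example radius below are assumptions for illustration.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def inside_geofence(device_lat, device_lon, fence):
    """fence: dict with center 'lat'/'lon' and 'radius_m'. Returns True
    when the device's GPS fix falls within the circular fence."""
    return haversine_m(device_lat, device_lon,
                       fence["lat"], fence["lon"]) <= fence["radius_m"]
```

Either the service provider 280 or the smart phone 300 itself could evaluate this test, matching the two embodiments described above.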

  • If the user's current location does not correspond to one of the plurality of story-telling experience locations (no to test 415), the process proceeds to provide directions step 420. In provide directions step 420, directions are provided in order to direct the user to one or more nearby experience locations where a digital story experience can be provided. For example, the map shown in FIG. 6 can be used to provide directions to the user. In some embodiments, standard mapping programs, such as Google Maps, already installed on the smart phone 300 can be used to provide directions to the user. In some embodiments, images showing a landmark can be used to provide directions to the user.

  • In some embodiments, even though the user's current location does not correspond to one of the plurality of story-telling experience locations (no to test 415), the smart phone 300 provides a menu of themes for location-specific stories that can be provided at nearby experience locations, and the user is permitted to select a specific theme. Once the user selects the specific theme, the user is directed to one or more nearby locations associated with the user-selected theme. For example, the user may select a specific theme using their smart phone 300 before they begin driving their vehicle to a nearby experience location associated with the theme they have selected. As the user drives their vehicle, the smart phone 300 can direct the user to one of the nearby experience locations by displaying a map and providing audio guidance for which roads to take and where to turn. When the vehicle reaches one of the experience locations associated with a digital story (e.g. when the vehicle enters a geofence), the associated digital story can then be automatically provided, for example by presenting an audio signal which plays over the vehicle's stereo audio system using a Bluetooth connection between the smart phone 300 and the vehicle's stereo audio system.

  • If the user's current location does correspond to one of the plurality of story-telling experience locations (yes to test 415), the process proceeds to select location-specific story step 425. In select location-specific story step 425, one of a plurality of possible location specific digital stories is selected and provided to the user's mobile device, for example by transmitting digital data for the selected digital story from the service provider 280 to the user's smart phone 300. In some embodiments, the plurality of possible location specific digital stories is stored in a memory of the smart phone 300 (such as image/data memory 330 or firmware memory 328) at an earlier time, and is selected by the digital processor 320.

  • FIG. 5 is a flow diagram depicting steps for the select location-specific story step 425 in FIG. 4, according to some embodiments of the present invention. In access user profile step 500, the stored user profile for the user is read from a memory which stores profiles for a plurality of users of mobile devices, such as customer database 288 in FIG. 2. The stored user profile provides the user's history concerning stories that the user viewed at earlier times. In viewed earlier story test 505, a determination is made as to whether the user profile indicates that the user has already viewed a location-specific digital story at another experience location which relates to the theme of a digital story available at the current experience location.

  • If the user has not viewed a location-specific story at another experience location at an earlier time (no to test 505), in provide first digital story step 510, data for a first digital story which can be presented at the user's current location is provided. In some embodiments, the data for the first digital story is transmitted from the service provider 280 to the user's mobile device, such as smart phone 300A.

  • In some embodiments, the first location-specific story is automatically initiated when the user reaches the location associated with the story, for example when the user enters the geofence for the particular digital story experience location. In other embodiments, the location-specific story is determined by presenting to the user a menu which permits the user to select one of a plurality of first digital stories that can be provided at the current location. The plurality of stories can include stories on the same general theme (e.g. a women's rights theme) which are narrated by different characters, such as a young girl character and an older woman character. The plurality of stories can also include stories having different themes (e.g. a first story having a women's rights theme and a second story related to the Erie Canal).

  • In some embodiments, the first location-specific story is selected responsive to demographic information stored in the user profile. For example, in some embodiments the user profile can store the preferred (e.g., native) language of the user, and the user profile can be accessed in order to provide a first digital story which is presented in the preferred language of the user. In some embodiments, the user profile can store the approximate age of the user, and the user profile can be accessed in order to provide a first digital story which is appropriate for the age of the user. For example, different first digital stories may be provided to children, teenagers, adults, and senior citizens.
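The branching described above for the viewed earlier story test can be sketched as follows. The split into exactly two variants ("first" with a full introduction versus a "continuation" for returning users) is an illustrative reading of steps 505 and 510, and the dictionary keys are assumptions.

```python
def select_story(profile_presentations, theme, stories_for_location):
    """Pick which variation of a location's digital story to provide:
    the introductory version for users new to this theme, or a
    continuation for users who already viewed a story on the same
    theme at another experience location."""
    seen_before = any(p["theme"] == theme for p in profile_presentations)
    variant = "continuation" if seen_before else "first"
    return stories_for_location[variant]
```

A fuller implementation would also consult the stored character choice and demographic fields (preferred language, approximate age) described above when picking among variants.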

  • FIG. 7A depicts an example of a user interface screen for selecting a theme for location-specific stories using color display 332 of smart phone 300, according to some embodiments of the present invention. Story greeting window 612 includes a user photo 614, which can be displayed using image data from the stored user profile for the particular user of the smart phone 300. Story greeting window 612 also includes a message window 616 which asks the user to select one of the stories in story selection window 620.

  • Story selection window 620 permits the user to select a wine story 624 having wineries as the theme, as depicted using a winery story image 622. Story selection window 620 also permits the user to select an Amelia story 634 having the fight for rights as the theme, which is told by a historical figure named Amelia Jenks Bloomer, as depicted using Amelia story image 632.

  • Returning to FIG. 5, in introduce theme & character step 515, the theme and character of the first digital story are introduced, in order for the user to understand the overall context of the story. For example, if the theme of the story is the women's rights movement, the first digital story can discuss the historic context of the story and provide background information on the character who is narrating the story.

  • FIG. 7B depicts a user interface screen that begins to introduce the theme and character of the Amelia fight for rights story, which is one of the themes that can be selected by the user, using the user interface screen depicted in FIG. 7A. The user interface screen depicted in FIG. 7B includes the Amelia story image 632, since the story will be told by the historical figure Amelia Jenks Bloomer. The user interface screen depicted in FIG. 7B also includes a story introduction window 640, which provides text that introduces the theme and the main character of the story, who is named Amelia. It will be understood that the digital story told by Amelia can include still and video images, as well as audio narration, sound effects, and music. For example, in some embodiments the text in story introduction window 640 could be provided as an audio narration.

  • The user interface screen depicted in FIG. 7B also includes a map 642 depicting other locations where the user can view location-specific stories concerning the same theme (e.g. the fight for rights theme). The user interface screen depicted in FIG. 7B also includes a "Follow Amelia" icon 646 which the user can select to continue experiencing Amelia's story, and an "Another theme" icon 648 which the user can select in order to view a menu that permits the user to select another story theme or character.

  • Returning to FIG. 5, if the user has already viewed a location-specific story at another location (yes to test 505), in provide second digital story step 520, data for a second digital story which can be presented at the user's current location is provided. In some embodiments, the data for the second digital story is transmitted from the service provider 280 to the user's mobile device, such as smart phone 300A. In some embodiments, the data for the second digital story is downloaded at an earlier time (e.g. when the data for the first digital story was downloaded at another location) and stored in a memory of the user's mobile device (e.g. image/data memory 330 or firmware memory 328 in the smart phone 300 in FIG. 3), so that the data for the second digital story is immediately available when the user moves to the experience location associated with the digital story (i.e. the current location).

  • In continue theme & character step 525, the theme and character which were used in the first digital story are continued, so that the digital story provided at the current location builds on the story already provided at the earlier experience location or locations. For example, if the theme of the story relates to the development of the Erie Canal, the second digital story can describe how food and materials which were loaded onto the barge at a different location, which was visited by the user at an earlier time, are being unloaded from the same barge at the current location, so that the food and materials can be sold at a store near the current location.

  • Returning to FIG. 4, in present story step 435, the location-specific digital story selected in step 425 is presented to the user. The digital story can be presented to the user using a variety of story-telling methods, such as audio stories, text-based stories, video stories, and augmented-reality stories. In some embodiments, the user selects a preferred story-telling method from a menu offering a variety of choices, which is stored in their user profile during store user profiles step 405. For example, the user can select a text-based story, rather than a story that includes audio, if they are hearing-impaired or concerned about distracting others. Alternatively, if the user is driving a vehicle, the digital story can be told primarily using audio narration, sound effects, and music. However, still and video images could also be provided, for viewing by passengers in the vehicle.

  • It will be understood that in some embodiments, the presentation of a digital story is initiated when the user's mobile device, such as smart phone 300, enters a geofence and continues even if the mobile device leaves the geofence, until the presentation is completed (e.g. until a complete audio clip has been played). In some other embodiments, the presentation is terminated when the mobile device leaves the geofence. In some embodiments, the digital story associated with the geofence can include associated data which indicates whether or not the digital story presentation should continue if the mobile device leaves the geofence.
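The geofence playback rule described above can be sketched as follows, assuming circular geofences and a per-story continue-on-exit flag. The field names, coordinates, and radius are illustrative assumptions:

```python
# Minimal sketch of geofence-gated story playback: a story starts when
# the device is inside its geofence and, when the device leaves, either
# continues or stops depending on data associated with the story.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside(fence, lat, lon):
    """True if the device position falls within the circular geofence."""
    return haversine_m(fence["lat"], fence["lon"], lat, lon) <= fence["radius_m"]

def should_play(story, playing, lat, lon):
    """Return True if the story presentation should be (or keep) playing."""
    if inside(story["fence"], lat, lon):
        return True
    # Outside the geofence: continue only if playback is already under way
    # and the story's associated data says it should run to completion.
    return playing and story["continue_outside_fence"]

story = {
    "fence": {"lat": 42.9106, "lon": -76.7966, "radius_m": 150},  # illustrative point
    "continue_outside_fence": True,
}

print(should_play(story, playing=True, lat=43.0, lon=-76.0))   # True: finishes the clip
print(should_play(story, playing=False, lat=43.0, lon=-76.0))  # False: never entered
```

A production implementation would normally rely on the platform's region-monitoring callbacks rather than polling positions, but the decision rule is the same.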

  • In some embodiments, the digital story-telling method can be automatically selected by a processor in system 214 (such as processor 292 at service provider 280 or digital processor 320 in smart phone 300) based on measurements performed by one of the mobile devices, such as smart phone 300. For example, in some embodiments the digital processor 320 in the smart phone 300 can determine the user's speed from measurements made by the GPS receiver 360. If the user's average speed is greater than a threshold (e.g. greater than 10 miles per hour), the story-telling method can be automatically switched to an audio mode, since the user is likely in a moving vehicle. In some embodiments, the digital processor 320 in the smart phone 300 can determine the ambient noise level from measurements made using the mic 342 and audio codec 340. If the noise level is greater than a threshold (e.g. greater than 90 dB), the story-telling method can be automatically switched to a mode which displays text subtitles, since it may be difficult for the user to hear audio messages.
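The automatic mode selection above amounts to two threshold checks. A hedged sketch, using the thresholds from the text (the function name, mode labels, and default preference are assumptions):

```python
# Illustrative automatic selection of a story-telling mode from device
# measurements: audio above a speed threshold (likely a moving vehicle),
# text subtitles above a noise threshold. Thresholds follow the examples
# in the text; mode labels are assumptions.

SPEED_THRESHOLD_MPH = 10.0   # above this, assume a moving vehicle
NOISE_THRESHOLD_DB = 90.0    # above this, narration may be hard to hear

def select_mode(avg_speed_mph, ambient_noise_db, user_preference="mixed"):
    """Choose a presentation mode from GPS speed and microphone level."""
    if avg_speed_mph > SPEED_THRESHOLD_MPH:
        return "audio"            # the driver should not be reading a screen
    if ambient_noise_db > NOISE_THRESHOLD_DB:
        return "text_subtitles"   # too loud for audio narration
    return user_preference        # otherwise honor the stored preference

print(select_mode(35.0, 60.0))   # audio
print(select_mode(0.0, 95.0))    # text_subtitles
print(select_mode(2.0, 50.0))    # mixed
```

Note the ordering: the speed check takes precedence, since a loud moving vehicle still calls for audio rather than on-screen text.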

  • FIG. 8A depicts a user interface screen for presenting a first location-specific digital story associated with a first theme at a first geographic location, according to some embodiments of the present invention. The user interface screen example shown in FIG. 8A is used to present a first location-specific digital story 660 associated with a particular theme (i.e. the fight for rights) at the Seneca Falls experience location "1" 531 shown in FIG. 6. The first location-specific story is presented when the user selects the "Follow Amelia" icon 646 in FIG. 7B, and begins by assuming that the user has not yet viewed any digital stories associated with the fight for rights theme. It will be understood that the digital story 660 can be provided using one or more of audio, text, graphics, still images, video images, and augmented-reality images.

  • In some embodiments, an augmented reality character is used to present at least a portion of the digital story. For example, when the user aims the camera lens 304 of their smart phone 300 (see FIG. 3) toward the statues in the center of scene 662, augmented reality techniques can be used to cause the mobile device to display one or more augmented reality images of Amelia (or some other historical character in other digital stories) along with the image or images captured by the mobile device to make the statue of Amelia 664 appear to "come to life" on the color display 332 of the user's smart phone 300 or other mobile device. The augmented reality character can be used to narrate the story, or to perform an action described in the story. The augmented reality character can be provided, for example, using the Aurasma augmented reality software provided by HP. Aurasma can be utilized by iPhone and Android apps in order to recognize images, symbols and objects captured by the camera in the smart phone. The captured images can be paired with overlaid videos, animation, 3D still images or other data sources, known as "Auras", and displayed to the user.

  • FIG. 8B depicts a user interface screen for presenting a second location-specific digital story associated with the first theme at the first geographic location, according to some embodiments of the present invention. The user interface screen example shown in FIG. 8B is used to present a second location-specific digital story 670 associated with the same theme (i.e. the fight for rights) at the same Seneca Falls experience location "1" 531 shown in FIG. 6. The second location-specific story provides a continuation of a digital story that the user had viewed at an earlier time at a different experience location (e.g. the Cortland experience location "3" 533 in FIG. 6). It will be understood that the second location-specific digital story 670 uses an audio presentation, a video presentation, or a text presentation (which begins "I'm glad you've followed me here to Seneca Falls") which is different from the first location-specific digital story 660 (which begins "Welcome! Let me begin the story of fighting for rights here in Seneca Falls") described earlier in reference to FIG. 8A.

  • FIG. 9 depicts a user interface screen for presenting a location-specific digital story associated with the first theme at a second geographic location, according to some embodiments of the present invention. The user interface screen example shown in FIG. 9 is used to present a location-specific digital story 680 associated with the same theme (i.e. the fight for rights) at a second experience location, which is the Skaneateles experience location "2" 532 in FIG. 6. This location-specific digital story provides a continuation of the digital story that the user began in Seneca Falls experience location "1" 531. The digital story 680 relates to a house 682 which is historically important, since it was part of the underground railroad which allowed slaves to escape from the United States to Canada. In some embodiments, when the user positions the camera lens 304 of their smart phone 300 (see FIG. 3) towards the house 682, augmented reality techniques can be used to reveal a virtual interior of the house 682 on the color display 332 of the user's smart phone 300. The virtual interior can be used to help demonstrate the historical importance of the house 682, by showing rooms where the Fuller Family lived, and revealing secret areas where slaves could be hidden.

  • FIG. 10 depicts a user interface screen for presenting a location-specific digital story associated with the first theme at a third geographic location, according to some embodiments of the present invention. The user interface screen shown in FIG. 10 is used to present a location-specific digital story 690 associated with the same theme (i.e. the fight for rights) at a third experience location (i.e. the Cortland location "3" 533 in FIG. 6). In this example, the location-specific digital story 690 is presented in Spanish, since the user profile indicates that this user's preferred language is Spanish. In some embodiments, when the user positions the camera lens 304 of their smart phone 300 (see FIG. 3) towards the sign 692, the English words on the sign can be recognized. The message on the sign can be read to the user in their preferred (e.g. native) language, using speaker 344 in smart phone 300 (see FIG. 3). In some embodiments, a virtual reality sign in the user's native language can be displayed on the color display 332 of the user's smart phone 300.

  • Returning to FIG. 4, in update user profile step 440, the user profile is updated based on the user's digital story experience. For example, the user profile can be updated to indicate that the user has been presented with the first digital story of a particular theme (e.g. the fight for rights theme) at the first location (e.g. the Seneca Falls experience location "1" 531 in FIG. 6). In some embodiments, the user is asked to respond to questions as a digital story is presented (for example, whether they would like to hear more about particular topics), and their answers are used to update their stored user profile (for example, to indicate their interest in the particular topics).

  • In some embodiments, the approximate period of time that the user spends in the location where they have experienced the digital story is recorded, to indicate the user's level of interest in the story location. This information can be used to help select future location-specific digital stories, for example by providing shorter or longer versions of a digital story based on whether the user spent a relatively long time, or a relatively short time, experiencing the current digital story or participating in other activities at the location.
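The dwell-time heuristic above might look like the following minimal sketch. The threshold, version labels, and the use of an average over prior locations are all assumptions for illustration:

```python
# Hedged sketch of using recorded time-at-location to pick a story
# length: users who lingered at earlier experience locations get the
# longer version. Threshold and labels are illustrative assumptions.

def choose_version(dwell_minutes_history, long_threshold_min=20.0):
    """Return a story-length label from the user's dwell-time history."""
    if not dwell_minutes_history:
        return "standard"       # no history yet: default-length story
    avg = sum(dwell_minutes_history) / len(dwell_minutes_history)
    return "long" if avg >= long_threshold_min else "short"

print(choose_version([30.0, 45.0]))  # long
print(choose_version([5.0, 8.0]))    # short
print(choose_version([]))            # standard
```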

  • In suggest next experience location step 450 in FIG. 4, a suggested next digital story experience location is provided to the user's mobile device, such as smart phone 300 in FIG. 2. In some embodiments, the processor 292 in the computer system 286 at the service provider 280 determines the next suggested experience location, based on the user profile which has been updated in step 440. For example, the suggested next experience location (after the user has been presented with the digital story at Seneca Falls experience location "1" 531) could be the Skaneateles experience location "2" 532 or the Cortland experience location "3" 533. In some embodiments, the user of the smart phone 300 can reject the next suggested experience location. In response, the service provider 280 can determine and suggest an alternative next experience location. In some embodiments, the user can be presented with the option of viewing a digital story on a different theme at the current experience location, or at a nearby experience location. For example, the user could be presented with the option of visiting a nearby experience location associated with the Seneca Wine Trail.

  • As described earlier, in provide directions step 420, directions are provided in order to direct the user to the suggested next experience location where a digital story can be provided. For example, the map shown in FIG. 6 can be used to provide directions to the suggested next experience location. In some embodiments, the directions are provided by the character chosen by the user (e.g. Amelia), who can describe, in a historic context, the directions to the suggested next experience location.

  • In some embodiments, the account manager 284 and the customer database 288 in the computer system 286 are used to determine user-specific information related to the history of the user's interactions with the system, as well as any previously captured or determined information about the user's experience. For example, in a "vacation trip" scenario, the user may be known to be traveling from a starting location (e.g. their home town) to a particular vacation destination. Further, it may be known that the user has already visited several digital story experience locations and is interested in following a route that will take the user closer to their vacation destination.

  • In some embodiments, the suggested next experience location can be determined based on responses or answers the user conveyed to questions provided by smart phone 300 (e.g., during a digital story). For example, the user can respond to questions about whether they are interested in a next experience related to a different theme, or whether they are interested in visiting specific areas, or are interested in obtaining a meal or lodging in a specific area, or at a specific restaurant or hotel. In some embodiments, the suggested next experience location can be stored and recalled at a later date.

  • In some embodiments, the suggested next experience location can be based on ambient conditions, such as the current weather, the time of day, or safety related ambient condition information. In some embodiments, ambient condition information (such as whether it is a rainy day) is used to automatically suggest an indoor location from the set of possible next locations. In some embodiments, the time of day can be used, in combination with the operating hours of some experience locations, to avoid suggesting locations that may be closed, or otherwise inaccessible, at the time the user is likely to arrive at the location. In some embodiments, the suggested experience location can be based on avoiding a severe weather storm in the area, or avoiding any fire, crime, or other safety related incident which may have occurred in the vicinity of one or more experience locations.
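The ambient-condition screening above can be expressed as a simple filter over candidate locations. The candidate fields, helper name, and hour-based opening model are assumptions for this sketch:

```python
# Illustrative filter over candidate next experience locations using the
# ambient conditions named in the text: rain (prefer indoor locations),
# expected arrival time vs. operating hours, and safety incidents nearby.
# Candidate fields and the whole-hour model are assumptions.

def filter_candidates(candidates, *, raining, arrival_hour, unsafe_areas):
    """Drop candidates that are closed on arrival, outdoors in the rain,
    or located in an area with a reported safety incident."""
    viable = []
    for loc in candidates:
        open_h, close_h = loc["hours"]
        if not (open_h <= arrival_hour < close_h):
            continue                      # likely closed when the user arrives
        if raining and not loc["indoor"]:
            continue                      # suggest indoor locations on rainy days
        if loc["area"] in unsafe_areas:
            continue                      # avoid safety-related incidents
        viable.append(loc)
    return viable

candidates = [
    {"name": "Museum", "indoor": True, "hours": (9, 17), "area": "downtown"},
    {"name": "Canal walk", "indoor": False, "hours": (6, 22), "area": "canal"},
    {"name": "Winery", "indoor": True, "hours": (11, 18), "area": "lakeside"},
]

names = [c["name"] for c in filter_candidates(
    candidates, raining=True, arrival_hour=16, unsafe_areas={"lakeside"})]
print(names)  # ['Museum']
```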

  • In some embodiments, the user is asked to capture one or more images of themselves, or their group, during present story step 435. In some embodiments, the processor 292 in the computer system 286 at the service provider 280 determines the next possible image capture location based on the result of analyzing the pixel data of one or more of these user captured images. For example, the captured images can be analyzed to determine whether there are any children depicted in the captured digital image.

  • In some embodiments, some or all of the steps described in reference to FIG. 4 are provided by the mobile device, such as smart phone 300. In some embodiments, the service provider 280 provides a downloadable software application ("app") over the communication network 250 to the smart phone 300, in order to provide the location-based digital stories. The smart phone 300 is one example of a mobile device that includes a memory, such as image/data memory 330, which serves as a memory for storing digital story content, output devices including a color display 332 and a speaker 344 for outputting digital story content, a GPS receiver 360 which serves as a location determination unit, a digital processor 320 which serves as a data processing system, and a firmware memory 328 which serves as a program memory. The digital processor 320 is communicatively connected to the image/data memory 330, the color display 332, the speaker 344 via the audio codec 340, and the firmware memory 328.

  • In this example, the instructions provided in the app can control the digital processor 320 in order to store data for providing a plurality of location-specific digital stories related to a common theme at a plurality of locations in the image/data memory 330. The instructions provided in the app can also be used by the digital processor 320 to determine if the current location of the mobile phone 300, provided by GPS receiver 360, corresponds to one of the plurality of locations for the location-specific digital stories.

  • The instructions provided in the app can also be used by the digital processor 320 to determine if the user of the mobile device has already viewed one of the plurality of location-specific digital stories at a different one of the plurality of locations. The instructions provided in the app can also be used by the digital processor 320 to read digital story content data for a first digital story from the image/data memory 330 and to provide the first digital story content data to one or more of the output devices in the smart phone 300, such as color display 332 and speaker 344, if it is determined that the smart phone 300 has not been used to present one of the plurality of location-specific digital stories at a different one of the plurality of locations. The instructions provided in the app can also be used by the digital processor 320 to read digital story content data for a second digital story from image/data memory 330 and to provide the second digital story content data to one or more of the output devices in the smart phone 300, if it is determined that the smart phone 300 has been used to present one of the plurality of location-specific digital stories at a different one of the plurality of locations.
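The app-side branch between the first (introductory) story and the second (continuation) story can be sketched as follows. The storage layout, the viewed-story log, and the sample text are illustrative assumptions:

```python
# Sketch of the decision described above: present the first digital
# story if no story on this theme has been presented at a different
# location, otherwise present the continuation. Data layout is assumed.

def pick_story(theme, current_location, viewed_log, stories):
    """viewed_log maps theme -> set of location ids already presented."""
    seen_elsewhere = any(
        loc != current_location for loc in viewed_log.get(theme, set())
    )
    key = "second" if seen_elsewhere else "first"
    return stories[theme][current_location][key]

stories = {
    "fight_for_rights": {
        "seneca_falls": {
            "first": "Welcome! Let me begin the story of fighting for rights...",
            "second": "I'm glad you've followed me here to Seneca Falls...",
        }
    }
}

# New user: introductory story.
print(pick_story("fight_for_rights", "seneca_falls", {}, stories))
# Returning user who experienced the theme in Cortland: continuation.
log = {"fight_for_rights": {"cortland"}}
print(pick_story("fight_for_rights", "seneca_falls", log, stories))
```

Note that a visit logged at the *current* location does not trigger the continuation; only a presentation at a different location does, matching the test described in the text.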

  • In some embodiments, digital still images or digital video images captured by the user's smart phone 300 during a digital story presentation are included in a photo product which is produced by fulfillment provider 270 in FIG. 2. FIG. 11 depicts a photo postcard product which includes a user-captured image and pre-stored information, according to some embodiments of the present invention. FIG. 11 depicts a photo postcard 700, which is one type of printed photo product which can be provided by fulfillment provider 270. The photo postcard 700 includes a title section 705 which includes a historical character image 710 and the title "Pathways Through History". Title section 705 is provided using pre-stored images and other information stored in content database 290 at the service provider 280 in FIG. 2. The photo postcard 700 also includes a main image 715, which was captured by the user during the digital story presentation, as a result of instructions given to the user during the digital story presentation. The main image 715 depicts a modified user character 720, which includes the user's head but which has been clothed in period clothing using augmented reality techniques. The main image 715 also depicts historical figures 722 from the digital story presentation, who have been added to the main image 715 using augmented reality techniques.

  • The photo postcard 700 also includes an unmodified user photo 730, along with customized text 735 "Andrea's Fight for Rights". Customized text 735 provides the user's name, which has been automatically added by using the name or nickname in the user profile.

  • The back view 700B of the photo postcard 700 includes a postage section 740. In some embodiments, a postal stamp is affixed to the postage section 740. In other embodiments, a custom stamp providing an image associated with the theme of the digital story experience is printed in the postage section 740, as part of a customized postage stamp. The back view 700B also includes an address section 745, which provides the mailing address of the recipient. In some embodiments, the address section 745 is automatically filled out when the user selects a recipient from the address book stored in their smart phone 300, or from a list of friends' addresses stored as part of their user profile.

  • The back view 700B of the photo postcard 700 also includes a message section 750. In some embodiments, the text message in the message section 750 is automatically derived from pre-stored information and user responses provided by the user during the digital story presentation. In some embodiments, a portion of the text message in the message section 750 is provided by speech-to-text conversion of the user's spoken comments, which have been converted to digital audio signals by mic 342 and audio codec 340 in smart phone 300, and converted to text by digital processor 320 in smart phone 300 or by processor 292 after the digital audio signals have been uploaded to service provider 280.

  • In some embodiments, one or more user captured images can be modified and composited with pre-stored information. For example, the processor 292 in the computer system 286 can process a user captured image in order to crop out a face of a person depicted in the image, convert the face from a color to a monochrome image, and composite the image of the face into one of a plurality of pre-stored newspaper templates, so that the user captured image appears to be a photograph in a historic newspaper related to the theme of the digital story. In some embodiments, the newspaper text can be modified based on text entered by the user of the smart phone 300. For example, the headline of the newspaper can read "Andrea and Declan fight for rights".

  • In some embodiments, the service provider 280 provides advertisements or coupons specific to the digital story over the communication network 250 to the smart phone 300. In some embodiments, one or more user captured images can be modified and composited with pre-stored information in order to create the advertisements or coupons.

  • In some embodiments, a particular advertisement is selected from a plurality of possible advertisements based on various criteria. The criteria can include, for example, demographic information such as the approximate age of the user, as stored in the user profile, or the approximate age of one or more of the persons depicted in the captured digital image. For example, if the captured digital image includes one or more children, the particular advertisement can be for an age-appropriate book or toy related to the theme of the digital story.

  • The criteria can also include travel route related information, so that the advertisements relate to businesses the user is likely to pass on their trip to the next experience location, or to their vacation destination.

  • The criteria can also include weather related information such as the current temperature. For example, on warm days the advertisement can provide an offer related to a discount on an ice cream cone at a first nearby merchant along the travel route, and on cold days the advertisement can provide an offer related to a discount on a hot drink at a second nearby merchant. In some embodiments, the coupons can be for a limited time period, based on the date and time ambient condition information. In some embodiments, the coupons can be customized so that they can only be used by the particular user of the smart phone 300. This can be done, for example, by including one of the digital images captured by the user as part of the coupon.

  • In some embodiments, the processor 292 analyzes metadata associated with the user captured digital images, to determine whether the analyzed images were captured within predetermined areas associated with particular location-specific digital stories.

  • In some embodiments, the processor can analyze the pixel data of the user captured digital images to determine if the images also include a particular object (e.g. a certain building, or a certain type of signpost).

  • In some embodiments, the processor 292 performs additional analysis on the pixel data of the user captured images, in order to determine the quality of the images. For example, a number of user captured images can be evaluated to select a subset of images which contain the best composition or pose (e.g. the best looking smile), or which provide the best exposed or focused images.

  • The pre-stored information can include images, graphics, text, or templates. If the photo product to be produced is a digital photo product, such as a video slide show or digital video clip, the pre-stored information can include audio information such as voice narration tracks or music tracks, or video information such as video clips describing the historic site, or video special effects templates or tracks.

  • In another embodiment of the present invention, the user of mobile phone 300 can be the driver or passenger in a vehicle which is driving along a route. The route can be, for example, a scenic or historic route, such as historic Route 66 in California, the scenic Seaway Trail along Lake Ontario in upstate New York, or Routes 5 and 20 along the Finger Lakes in upstate New York. Vehicles can begin their journeys at various points along the route, and can drive in at least two alternate directions (e.g. west to east, or east to west). Therefore, system 214 is designed to provide digital stories which are appropriate for the user's route, no matter where they begin along the route, or which direction they follow.

  • FIG. 12 is a flow diagram depicting steps for providing travel direction-dependent digital stories at a plurality of different locations, according to some embodiments of the present invention. In some embodiments, the steps are performed by the service provider 280 in FIG. 2. In other embodiments, some or all of the steps are performed by smart phone 300 in FIG. 3.

  • In store direction-dependent stories step 850 of FIG. 12, data for a plurality of direction-dependent and location-specific digital stories is stored on a network-accessible storage device system, such as content database 290 in FIG. 2. In some embodiments, the digital stories are stored in association with GPS information, such as geofences, which indicate the locations where the digital stories are to be presented as well as one or more travel directions associated with particular digital stories. For at least some of the locations, two or more different digital stories are stored, each associated with a different travel direction. This is done because some travelers will be driving in a first direction (e.g. an east to west direction) while other travelers will be driving in the opposite direction (e.g. a west to east direction). At least some of the digital stories provide messages that are travel direction-dependent and need to be presented at the appropriate time, and with the appropriate content, for the travel direction. For example, the message "coming up on your right is Naked Dove Brewing Company, the perfect place to learn about beer" is appropriate for east to west travelers, if it is presented starting about ¼ mile east of the Naked Dove Brewing Company location. However, for west to east travelers this message is not suitable, and a different message (e.g. "coming up on your left is Naked Dove Brewing Company, the perfect place to learn about beer") needs to be presented starting at a location which is about ¼ mile west of the Naked Dove Brewing Company location.
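The direction-dependent trigger above can be sketched with a simplified one-dimensional road model, where positions along the route are mileposts increasing eastward. The milepost convention, tolerance, and function names are assumptions for illustration:

```python
# Illustrative direction-dependent story triggering: each travel
# direction gets its own trigger point ~1/4 mile before the point of
# interest, and its own message text ("right" vs "left"). The 1-D
# milepost model (increasing eastward) is a simplifying assumption.

QUARTER_MILE = 0.25  # miles

def trigger_points(poi_milepost):
    """Trigger mileposts for each travel direction along the route."""
    return {
        "east_to_west": poi_milepost + QUARTER_MILE,  # approaching from the east
        "west_to_east": poi_milepost - QUARTER_MILE,  # approaching from the west
    }

def story_for(direction, milepost, poi_milepost, tolerance=0.02):
    """Return the direction-appropriate message when the vehicle reaches
    the trigger point for its direction, else None."""
    trig = trigger_points(poi_milepost)[direction]
    if abs(milepost - trig) > tolerance:
        return None
    side = "right" if direction == "east_to_west" else "left"
    return f"coming up on your {side} is Naked Dove Brewing Company"

# A westbound vehicle 1/4 mile east of the brewery (brewery at milepost 10.0):
print(story_for("east_to_west", 10.25, 10.0))
# An eastbound vehicle 1/4 mile west of the brewery:
print(story_for("west_to_east", 9.75, 10.0))
```

A real implementation would use two geofences per point of interest (one per approach) rather than mileposts, but the pairing of trigger position with direction-specific content is the same.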

  • In some embodiments, direction-dependent, location-specific digital stories are stored for a plurality of different themes or categories in the content database 290. For example, the plurality of themes or categories for users in vehicles driving along Routes 5 and 20 could include themes related to the "Fight for Rights" and "Wineries in the Finger Lakes region" and categories such as "Best places to eat" and "Fun stops for kids". In some embodiments, the location-dependent digital stories are stored in a memory of smart phone 300, such as image/data memory 330 or firmware memory 328, by downloading an app from the service provider 280. The app can then be selected by the user of smart phone 300, and the instructions provided by the app can be executed by digital processor 320, in order to perform the steps depicted in FIG. 12.

  • In store user preferences step 855 of FIG. 12, preferences are developed for a plurality of users of mobile devices, such as the users of smart phones 300A, 300B in FIG. 2, and stored in customer database 288. The preferences can indicate an interest in specific topics, such as history or art, or in specific types of visitor attractions, such as antique shops, wineries or microbreweries. In some embodiments, user profiles are also stored, as described earlier in reference to step 405 of FIG. 4. The user preferences indicate whether the user has already been presented with one or more digital stories. If a user has been presented with a digital story, the theme, location, and date/time can be recorded in their user profile. In some embodiments, the user can also indicate a current need, such as the need to locate a relatively nearby restaurant, gas station, or rest room.

  • In determine user location step 410, the current location of the mobile device for a particular user is determined, as was described earlier in reference to step 410 of FIG. 4. This can be done, for example, by using the GPS receiver 360 in the smart phone 300 (see FIG. 3) to determine the GPS coordinates of the smart phone 300, and by using the digital processor 320 in the smart phone to communicate the GPS coordinates to the service provider 280 using the wireless modem 350.

  • In at an experience location test 415, a determination is made as to whether the user's current location corresponds to one of the plurality of locations at which location-based digital stories can be provided by system 214 (yes to test 415) or is outside this plurality of experience locations (no to test 415), as described earlier in reference to test 415 in FIG. 4. This can be tested by determining, for example, if the user's smart phone 300 has entered into the geofence for any of the experience locations.
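The geofence-entry test can be sketched, under the common simplifying assumption of circular geofences, as a great-circle distance check against the fence radius. The coordinates and radius below are illustrative, not values from this disclosure.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, fence):
    """fence: (center_lat, center_lon, radius_m); True if the point is inside."""
    clat, clon, radius_m = fence
    return haversine_m(lat, lon, clat, clon) <= radius_m
```

In practice a mobile OS geofencing service would raise enter/exit events instead, but the underlying containment test is this distance comparison.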

  • FIG. 13 is an example of a map 800 depicting a plurality of different experience locations and travel directions at which travel direction-dependent digital stories can be provided. In this example, four different geofences, 810, 820, 830, and 840 are positioned at different locations in the Canandaigua, N.Y. area. Geofence 810 is located in the central downtown area, and is associated with a digital story related to the history of Ontario County, which does not depend on the direction of travel. The other three geofences 820, 830, and 840 are associated with direction-dependent digital stories. For example, geofence 840 is associated with different digital stories that correspond to southern travel direction 842, western travel direction 844, and eastern travel direction 846. One particular digital story associated with southern travel direction 842 can relate, for example, to the CMAC performing arts center which the vehicle will soon approach on its left side, or to the Canandaigua Country Club which the vehicle will pass on its right side. One particular digital story associated with western travel direction 844 can relate, for example, to the history of Canandaigua Lake, which the vehicle will soon approach on its left side.

  • One particular digital story associated with eastern travel direction 846 can relate, for example, to festivals or other events which are taking place in Geneva, N.Y., which is the next major community along Route 5 and 20 to the East. In some embodiments, the digital stories describing these festivals and other events are stored in association with information defining the particular time period (e.g. dates and times) of these events, and are only presented during the particular time period that the events are taking place. In some embodiments, the digital stories describing these events, and the information defining the particular time period of these events, are managed and updated by content curators responsible for the events, using computer 218 in FIG. 2, so that up-to-date information for the events is stored in content database 290.

  • Geofence 820 is associated with different digital stories that correspond to northern travel direction 822, southwestern travel direction 826, and eastern travel direction 824. One particular digital story associated with northern travel direction 822 can relate, for example, to a business which matches the user's preferences stored in the user profile, as described earlier in relation to step 855. For example, if the user has indicated an interest in antiques, the digital story presented to this user can relate to a particular antique shop located along Main Street in downtown Canandaigua. If the vehicle is following southwestern travel direction 826 and the user has indicated a current need for a gas station, the digital story can relate to one or more nearby gas stations that the user will pass while traveling west along Route 5 and 20. One particular digital story associated with eastern travel direction 824 can relate, for example, to the history of Canandaigua Lake, which the vehicle is passing on the right. In this example, the particular digital story associated with eastern travel direction 824 can be similar to the digital story associated with western travel direction 844, described earlier in relation to geofence 840.

  • Geofence 830 is associated with different digital stories that correspond to northeastern travel direction 832, southwestern travel direction 836, and southern travel direction 834. The particular digital stories associated with northeastern travel direction 832 can include, for example, two or more different digital stories related to the same venue which is located along the current travel route. For example, the stored direction-dependent digital stories for the New York Wine and Culinary Center can include a first story related to the wine tastings offered at the Center, a second story related to the restaurant located at the Center, and a third story related to the rest room facilities located at the Center. One or more of these three digital stories for the New York Wine and Culinary Center can be presented to users in vehicles headed in the northeastern travel direction 832, depending on the user's preferences and needs, as described earlier in relation to step 855.

  • One of the digital stories associated with southwestern travel direction 836 can be, for example, a general story describing the history of Route 5 and 20, which started as foot trails established by Native Americans thousands of years ago, and later became part of the transcontinental federal highway between Boston and Newport, Oreg. In some embodiments, this particular digital story can be associated with many different geofences along Route 5 and 20, and can be presented only once, when the user's vehicle first enters one of the geofences associated with this general story. This permits the user to learn about the general history of Route 5 and 20 soon after they begin their journey, but during a time when there are no other attractions or traffic stops to interrupt the story. By updating the user profile after this general story is presented, as described earlier in relation to step 440 of FIG. 4, the general story will not be repeated when the user's vehicle enters the other geofences associated with the same general story during the same trip.
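The present-only-once behavior for a general story shared by many geofences can be sketched as follows, with a simple in-memory set standing in for the per-trip portion of the user profile (an assumption; the patent records this in the stored profile).

```python
def maybe_present_general_story(story_id, presented_ids, present):
    """Present a shared general story the first time any of its
    associated geofences is entered; skip it on later entries.

    presented_ids: set of story ids already presented this trip.
    present: callback that actually plays/displays the story."""
    if story_id in presented_ids:
        return False          # already heard this trip; do not repeat
    present(story_id)
    presented_ids.add(story_id)  # update the profile so it is not repeated
    return True
```

Because every geofence associated with the story calls the same function, the story plays wherever the trip happens to begin and nowhere else.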

  • The digital stories associated with southern travel direction 834 can include, for example, two different digital stories related to the 1837 Cobblestone Cottage Bed and Breakfast. The first digital story can describe the general history of the 1837 Cobblestone Cottage, and the second digital story can describe specific accommodations, such as a room type and room rate. In this example, the second digital story is presented if the user has indicated a user preference for bed and breakfast types of accommodations and if the 1837 Cobblestone Cottage Bed and Breakfast currently has a vacant guest room. If not, the first digital story is presented. In some embodiments, this vacancy information is updated by a content curator responsible for the 1837 Cobblestone Cottage Bed and Breakfast venue, using computer 218, so that up-to-date information is stored in content database 290.

  • It will be understood that in some embodiments, at some experience locations a digital story could be presented only if the travel direction is determined to be a particular travel direction (e.g. northern travel direction 822); otherwise a digital story would not be presented. It will also be understood that in some embodiments, at some experience locations a particular digital story could be presented only if the travel direction at a specified location, prior to entering the experience location, was determined to be a particular direction.

  • From the above description, it will be understood that multiple digital stories can be associated with some travel directions, and can be automatically presented based on user preferences, user needs, and currently available events and accommodations.

  • Returning to FIG. 12, if the user's current location does not correspond to one of the plurality of story-telling experience locations (no to test 415), the process proceeds to provide general content step 890. In provide general content step 890, various types of content can be provided for the enjoyment of the user. In some embodiments, the content is music (e.g. mp3 files) previously stored by the user on their smart phone 300, or provided by a music streaming service such as Pandora™ Internet Radio. In some embodiments, the music is muted or paused when digital stories are presented in present story step 435, and automatically resumed when the digital story presentation is completed. In some embodiments, the general content can include a digital map showing the vehicle's current location. In some embodiments, standard mapping programs, such as Google Maps, already installed on the smart phone 300 can be used to provide general map content.

  • If the user's current location corresponds to one of the plurality of experience locations (yes to test 415), the process proceeds to determine travel direction step 860. In some embodiments, the travel direction is determined by comparing recent GPS readings from GPS receiver 360 in smart phone 300, in order to determine, for example, whether the travel direction for a vehicle which has entered geofence 820 in FIG. 13 is northern travel direction 822, southwestern travel direction 826, or eastern travel direction 824. In some embodiments, the travel direction is determined, at least in part, by determining which other geofences the vehicle has recently exited. For example, if the vehicle has recently exited geofence 830, the current travel direction cannot be southwestern travel direction 826.
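One conventional way to compare recent GPS readings, as described above, is to compute the initial bearing between two fixes and bucket it into compass sectors. The formula and the eight-sector classification below are a standard approach, not one prescribed by this disclosure.

```python
import math

def compass_direction(lat1, lon1, lat2, lon2):
    """Classify travel between two recent GPS readings into one of
    eight compass sectors, using the initial great-circle bearing."""
    dl = math.radians(lon2 - lon1)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0  # 0..360, 0 = north
    sectors = ["north", "northeast", "east", "southeast",
               "south", "southwest", "west", "northwest"]
    return sectors[int((bearing + 22.5) % 360.0 // 45.0)]
```

With two fixes taken a few seconds apart, the result distinguishes, for example, northern travel direction 822 from eastern travel direction 824 on entry to geofence 820.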

  • In some embodiments, multiple adjacent geofences can be used to determine the travel direction for at least some digital story experience locations. FIG. 14 is an example of a map 880 depicting a plurality of adjacent geofences 870, 872, and 874 which can be used to determine a travel direction. For example, if the current location of the mobile device 300 is within the geofence 870 associated with a location-specific digital story, the travel direction can be determined to be an eastern travel direction 876 if the mobile device has recently been located within geofence 874, which is just west of geofence 870. Similarly, the travel direction can be determined to be a western travel direction 878 if the mobile device has very recently been located within geofence 872, which is just east of geofence 870. In some embodiments, if the current location of the mobile device 300 is within the geofence 870 associated with a location-specific digital story and the mobile device has not recently been located within geofence 874, the travel direction can be determined to be a western travel direction 878, since the only possible travel directions are eastern travel direction 876 and western travel direction 878.
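The adjacent-geofence reasoning of FIG. 14 can be sketched as follows. The geofence identifiers mirror 870/872/874, but the function and parameter names are assumptions for illustration.

```python
def infer_direction(recently_exited, west_neighbor="874", east_neighbor="872"):
    """Infer travel direction inside a story geofence (e.g. 870) from
    which adjacent geofence was recently exited.

    recently_exited: set of geofence ids the device has just left."""
    if west_neighbor in recently_exited:
        return "east"   # came from the western neighbor, so heading east
    if east_neighbor in recently_exited:
        return "west"   # came from the eastern neighbor, so heading west
    # Only two directions are possible on this road; per the text above,
    # not having come from the western neighbor implies westbound travel.
    return "west"
```

This elimination logic is why the text can conclude "western travel direction 878" even without a positive sighting in geofence 872.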

  • Returning to FIG. 12, in select direction-dependent story step 865, one of a plurality of possible digital stories is selected based on the travel direction. As described earlier in relation to FIG. 13, in addition to the travel direction, in some embodiments the digital story can be selected responsive to stored user preferences and current user needs. In some embodiments, the selection is performed using the processor 292 at the service provider 280, and the selected digital story is transferred to the smart phone 300 over the communication network 250. In some embodiments, the selection is performed using the digital processor 320 in smart phone 300, which selects one of the location-dependent digital stories previously stored in image/data memory 330 or firmware memory 328. In some embodiments, the location-dependent digital stories are stored in image/data memory 330 when an app is downloaded from the service provider 280 to the firmware memory 328, as described earlier in reference to store direction-dependent stories step 850.
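Select direction-dependent story step 865 can be sketched as a filter on travel direction followed by need and preference matching. The story record fields ("direction", "topic") are illustrative assumptions, not a schema from this disclosure.

```python
def select_story(stories, direction, interests=(), current_need=None):
    """Pick one story matching the travel direction, preferring a match
    on a current need, then on a stored interest, then any candidate."""
    candidates = [s for s in stories if s["direction"] == direction]
    for s in candidates:
        if current_need and s.get("topic") == current_need:
            return s          # a current need (e.g. rest rooms) takes priority
    for s in candidates:
        if s.get("topic") in interests:
            return s          # then a stored user interest
    return candidates[0] if candidates else None
```

For the New York Wine and Culinary Center example, the same northeastern-direction venue yields the rest-room story for a user with that current need, and the restaurant story for a user whose profile lists dining.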

  • In present story step 435, the direction-dependent digital story selected in step 865 is presented to the user. As described earlier in relation to step 435 in FIG. 4, the digital story can be presented to the user using a variety of story-telling methods, such as audio stories, text-based stories, video stories, and augmented-reality stories. In some embodiments, the user selects a preferred story-telling method from a menu offering a variety of choices, in store user preferences step 855. For example, if the driver of the vehicle is interested in the digital story, the digital story can be presented using audio narration, sound effects, and music. If only a single passenger is interested in the digital story, the digital story can be presented using text and images, which can be read and viewed by the passenger without disturbing the driver of the vehicle. If the smart phone 300 has been presenting general content, such as music, as described earlier in relation to provide general content step 890, the general content is stopped, muted or paused while the story is presented in present story step 435, and then automatically resumed after the story is presented.

  • In update user profile step 440, the user profile is updated based on the digital story presented to the user, as described earlier in relation to update user profile step 440 of FIG. 4. For example, the user profile can be updated to indicate that the user has been presented with a specific direction-dependent digital story. In some embodiments, if the digital story related to a specific venue, such as an antique shop or a winery, the system 214 determines if the user stopped at the venue, and if the user did stop, the system 214 determines the approximate period of time that the user spent at the venue. In some embodiments, the GPS receiver 360 in the smart phone 300 is used in determining if the user stopped at the venue, and the approximate time period of the stop. This can be done, for example, by determining the time period during which the GPS location of the smart phone 300 was approximately equal to the GPS location of the venue.
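The stop-detection idea above can be sketched by measuring how long GPS samples stay within a threshold distance of the venue. The 100 m threshold, sample format, and injected distance function are assumptions for illustration.

```python
def dwell_seconds(samples, venue, distance_m, threshold_m=100.0):
    """Approximate time spent at a venue.

    samples: list of (timestamp_s, lat, lon) GPS readings,
    venue: (lat, lon) of the venue,
    distance_m: function returning meters between two (lat, lon) pairs.
    Returns the span of timestamps whose location was within threshold_m
    of the venue, or 0.0 if the user never came that close."""
    near = [t for t, lat, lon in samples
            if distance_m((lat, lon), venue) <= threshold_m]
    return float(max(near) - min(near)) if near else 0.0
```

Passing in the distance function keeps the sketch self-contained; a real implementation would supply a great-circle distance and might also require consecutive samples rather than the overall timestamp span.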

  • In the foregoing detailed description, the methods and apparatuses of the present invention have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes can be made thereto without departing from the broader scope of the present invention. The present specification and figures are accordingly to be regarded as illustrative rather than restrictive.

  • A computer program product can include one or more non-transitory storage mediums, for example: magnetic storage media such as magnetic disk (such as a floppy disk) or magnetic tape; optical storage media such as optical disk, optical tape, or machine readable bar code; solid-state electronic storage devices such as random access memory (RAM), flash EPROM memory, or read-only memory (ROM); or any other physical device or media employed to store a computer program having instructions for controlling one or more computers to practice any of the methods according to any embodiment of the present invention.

  • PARTS LIST
    • 100 System
    • 110 Processor-Accessible Memory Device System
    • 120 Data Processing Device System
    • 130 Data Input-Output Device System
    • 214 System
    • 218 Computer
    • 220 Communication Services Provider (CSP)
    • 222 Communication Services Provider (CSP)
    • 224 Kiosk Printer
    • 240 Cellular Provider Network
    • 250 Communication Network
    • 268 Transportation Vehicle
    • 270 Fulfillment Provider
    • 272 Web Server
    • 274 Commerce Manager
    • 275 Fulfillment Manager
    • 276 Fulfillment Manager
    • 278 Digital Printer
    • 279 DVD Writer
    • 280 Service Provider
    • 282 Web Server at Service Provider
    • 284 Account Manager
    • 286 Computer System
    • 288 Customer Database
    • 290 Content database
    • 292 Processor
    • 300A Smart phone at location A
    • 300B Smart phone at location B
    • 300 Smart phone
    • 302 Flash
    • 304 Lens
    • 310 CMOS Image Sensor
    • 312 Timing Generator
    • 314 Image Sensor Array
    • 316 A/D Converter
    • 318 DRAM Buffer Memory
    • 320 Digital Processor
    • 322 RAM
    • 324 Real Time Clock
    • 328 Firmware Memory
    • 330 Image/Data Memory
    • 332 Color Display
    • 334 User Controls
    • 340 Audio Codec
    • 342 Microphone
    • 344 Speaker
    • 350 Wireless Modem
    • 352 RF Channel
    • 360 GPS Receiver
    • 362 Dock Interface
    • 364 Dock Recharger
    • 400 Store Location Specific Stories Step
    • 405 Store User Profiles Step
    • 410 Determine User Location Step
    • 415 At An Experience Location Test
    • 420 Provide Directions Step
    • 425 Select Location-Specific Story Step
    • 435 Present Story Step
    • 440 Update User Profile Step
    • 450 Suggest Next Location Step
    • 500 Access User Profile Step
    • 505 Viewed Earlier Story Test
    • 510 Provide First Digital Story Step
    • 515 Introduce Theme & Character Step
    • 520 Provide Second Digital Story Step
    • 525 Continue Theme & Character Step
    • 530 Map
    • 531 Experience Location “1”
    • 532 Experience Location “2”
    • 533 Experience Location “3”
    • 612 Story Greeting Window
    • 614 User Photo
    • 616 Message Window
    • 620 Story Selection Window
    • 622 Wine Story Image
    • 624 Wine Story
    • 632 Amelia Story Image
    • 634 Amelia Story
    • 640 Story Introduction Window
    • 642 Map
    • 646 Follow Amelia Icon
    • 648 Another Theme Icon
    • 660 First Location-Specific Digital Story
    • 662 Scene
    • 664 Statue of Amelia
    • 670 Second Location-Specific Digital Story
    • 680 Third Location-Specific Digital Story
    • 682 House
    • 690 Fourth Location-Specific Digital Story
    • 692 Sign
    • 700 Photo Postcard
    • 700B Back View of Photo Postcard
    • 705 Title Section
    • 710 Historical Character Image
    • 715 Main Image
    • 720 Modified User Character
    • 722 Historical Figures
    • 730 Unmodified User Image
    • 735 Customized Title
    • 740 Postage Section
    • 745 Address section
    • 750 Message section
    • 800 Map
    • 810 Geofence
    • 820 Geofence
    • 822 Northern Direction
    • 824 Eastern Direction
    • 826 Southwestern Direction
    • 830 Geofence
    • 832 Northeastern Direction
    • 834 Southern Direction
    • 836 Western Direction
    • 840 Geofence
    • 842 Southern Direction
    • 844 Western Direction
    • 846 Eastern Direction
    • 850 Store Direction-Dependent Stories Step
    • 855 Store User Preferences Step
    • 860 Determine Travel Direction Step
    • 865 Select Direction-Dependent Story Step
    • 870 Geofence
    • 872 Geofence
    • 874 Geofence
    • 876 Eastern direction
    • 878 Western direction
    • 880 Map
    • 890 Provide General Content Step

Claims (20)

What is claimed is:

1. A method executed by a data processing device system, the method comprising the steps of:

storing, in a processor-accessible memory device system communicatively connected to the data processing device system, a user profile associated with a user;

storing, in the processor-accessible memory device system, data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations;

determining whether or not a current location of a mobile device associated with the user corresponds to one of the plurality of locations related to the plurality of location-specific digital stories;

determining, if it is determined that the current location of the mobile device corresponds to a first one of the plurality of locations, and based at least on an analysis of the user profile, that either a first case or a second case exists indicating that the user has or has not, respectively, been presented with at least one of the plurality of location-specific digital stories at a different one of the plurality of locations different than the first one;

providing a first digital story to the mobile device in response to it being determined that the first case exists; and

providing a second digital story to the mobile device in response to it being determined that the second case exists.

2. The method of claim 1, wherein the first digital story introduces the common theme, and the second digital story continues the common theme, which was introduced at the different one of the plurality of locations.

3. The method of claim 1, wherein user profiles for a plurality of users, and the data, are stored by a network-accessible storage system.

4. The method of claim 1, further comprising the step of providing general content to the mobile device if it is determined that the current location of the mobile device does not correspond to one of the plurality of locations related to the plurality of location-specific digital stories.

5. The method of claim 1, wherein at least some of the plurality of location-specific digital stories are associated with particular travel directions, and the method further comprises determining a travel direction of the mobile device and providing the first digital story or the second digital story in response to the determined travel direction.

6. The method of claim 5, wherein the user profile includes at least one user preference, and the method further comprises selecting the first digital story from a plurality of stored first digital stories in response to the stored user preference, the plurality of stored first digital stories related to the first one of the plurality of locations.

7. The method of claim 6, wherein the user preference includes a language preference, and the method further comprises accessing the stored user profile to determine the language preference and selecting the first digital story from the plurality of stored first digital stories in response to the determining of the language preference.

8. The method of claim 6, wherein the user preference includes a demographic group of the user, and the method further comprises accessing the stored user profile to determine the demographic group and selecting the first digital story from the plurality of stored first digital stories in response to the determining of the demographic group.

9. The method of claim 1, wherein the first digital story, the second digital story, or both is or are configured to instruct the user to capture one or more digital images, and the method further comprises requesting a photo product related to the common theme, the photo product defined to incorporate at least one digital image captured by the user.

10. A mobile device comprising:

a) a memory device system storing digital story content data;

b) an output device system;

c) a location determination unit configured to determine a geographic location of the mobile device; and

d) a data processing device system communicatively connected to the output device system, the memory device system, and the location determination unit, wherein the memory device system stores program instructions configured to cause the data processing device system at least to:

store, in the memory device system, data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations;

determine whether or not a current location of the mobile device, which is provided by the location determination unit, corresponds to one of the plurality of locations related to the plurality of location-specific digital stories;

determine, if it is determined that the current location of the mobile device corresponds to a first one of the plurality of locations, that either a first case or a second case exists indicating that the user has or has not, respectively, been presented with at least one of the plurality of location-specific digital stories at a different one of the plurality of locations different than the first one;

acquire, from the memory device system, first digital story content data of the digital story content data and provide the first digital story content data to the output device system in response to it being determined that the first case exists; and

acquire, from the memory device system, second digital story content data of the digital story content data and provide the second digital story content data to the output device system in response to it being determined that the second case exists.

11. The mobile device of claim 10, wherein the output device system includes an image display, a speaker, an audio output jack, or a combination thereof.

12. The mobile device of claim 10, wherein the first digital story content data, the second digital story content data, or both include(s) audio content data, and the program instructions are further configured to cause the data processing device system at least to provide music content data to the output device system when the current location of the mobile device does not correspond to one of the plurality of locations.

13. The mobile device of claim 10, wherein the first digital story content data, the second digital story content data, or both is or are configured to instruct the user to capture one or more digital images using the mobile device.

14. The mobile device of claim 10, wherein at least some of the plurality of location-specific digital stories are associated with particular travel directions, and the program instructions are further configured to cause the data processing device system at least to:

determine, based at least on input from the location determination unit, a travel direction of the mobile device; and

provide the first digital story content data or the second digital story content data in response to the determined travel direction.

15. A system comprising:

a memory device system storing a user profile associated with a user of a mobile device;

a network-accessible storage device system storing data indicating a plurality of location-specific digital stories related to a common theme at a plurality of locations;

a location determination unit configured to determine a geographic location of the mobile device;

a data processing device system configured at least to:

determine whether or not a current location of the mobile device, which is provided by the location determination unit, corresponds to one of the plurality of locations related to the plurality of location-specific digital stories;

determine, if it is determined that the current location of the mobile device corresponds to a first one of the plurality of locations, and based at least on an analysis of the user profile, that either a first case or a second case exists indicating that the user has or has not, respectively, been presented with at least one of the plurality of location-specific digital stories by the mobile device at a different one of the plurality of locations different than the first one;

provide a first digital story of the plurality of location-specific digital stories stored by the network-accessible storage device system to the mobile device in response to it being determined that the first case exists; and

provide a second digital story of the plurality of location-specific digital stories stored by the network-accessible storage device system to the mobile device in response to it being determined that the second case exists.

16. The system of claim 15, wherein the data processing device system is configured to automatically determine whether or not to provide the first digital story using audio data, based on measurements performed by the mobile device.

17. The system of claim 15, wherein the memory device system, which stores the user profile, also stores user profiles for a plurality of users and is at least part of the network-accessible storage device system.

18. The system of claim 15, wherein the first digital story, the second digital story, or both, (a) include(s) an augmented reality image of a historical character, and (b) is or are configured to cause the mobile device to display the augmented reality image of the historical character along with an image captured by the mobile device.

19. The system of claim 15, wherein at least some of the plurality of location-specific digital stories are associated with particular travel directions, and the data processing device system is further configured at least to:

determine a travel direction of the mobile device; and

provide the first digital story or the second digital story in response to the determined travel direction.

20. The method of claim 1, further comprising the step of suggesting a location for a next digital story in response to answers provided by the user during the first digital story or the second digital story.

US14/219,901 2013-03-22 2014-03-19 System, method and device for providing personalized mobile experiences at multiple locations Abandoned US20140287779A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/219,901 US20140287779A1 (en) 2013-03-22 2014-03-19 System, method and device for providing personalized mobile experiences at multiple locations
US15/151,448 US10136260B2 (en) 2013-03-22 2016-05-10 Selectively providing mobile experiences at multiple locations

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361804608P 2013-03-22 2013-03-22
US14/219,901 US20140287779A1 (en) 2013-03-22 2014-03-19 System, method and device for providing personalized mobile experiences at multiple locations

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/151,448 Continuation US10136260B2 (en) 2013-03-22 2016-05-10 Selectively providing mobile experiences at multiple locations

Publications (1)

Publication Number Publication Date
US20140287779A1 true US20140287779A1 (en) 2014-09-25

Family

ID=51569513

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/219,901 Abandoned US20140287779A1 (en) 2013-03-22 2014-03-19 System, method and device for providing personalized mobile experiences at multiple locations
US15/151,448 Active 2034-05-04 US10136260B2 (en) 2013-03-22 2016-05-10 Selectively providing mobile experiences at multiple locations

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/151,448 Active 2034-05-04 US10136260B2 (en) 2013-03-22 2016-05-10 Selectively providing mobile experiences at multiple locations

Country Status (1)

Country Link
US (2) US20140287779A1 (en)

Cited By (172)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9037509B1 (en) * 2012-04-25 2015-05-19 Wells Fargo Bank, N.A. System and method for a mobile wallet
US20150192988A1 (en) * 2014-01-06 2015-07-09 Hristo Aleksiev Augmented Reality System Incorporating Transforming Avatars
US9293132B2 (en) * 2014-08-06 2016-03-22 Honda Motor Co., Ltd. Dynamic geo-fencing for voice recognition dictionary
US20160088546A1 (en) * 2014-09-19 2016-03-24 Intel Corporation Regulation via geofence boundary segment crossings
WO2016069668A1 (en) * 2014-10-31 2016-05-06 Microsoft Technology Licensing, Llc User interface functionality for facilitating interaction between users and their environments
US9418056B2 (en) * 2014-10-09 2016-08-16 Wrap Media, LLC Authoring tool for the authoring of wrap packages of cards
US9442906B2 (en) * 2014-10-09 2016-09-13 Wrap Media, LLC Wrap descriptor for defining a wrap package of cards including a global component
US20160284112A1 (en) * 2015-03-26 2016-09-29 Wrap Media, LLC Authoring tool for the mixing of cards of wrap packages
JP2017033340A (en) * 2015-08-03 2017-02-09 富士通株式会社 Information distribution method, information distribution program, and information distribution device
US9600803B2 (en) 2015-03-26 2017-03-21 Wrap Media, LLC Mobile-first authoring tool for the authoring of wrap packages
US9600449B2 (en) 2014-10-09 2017-03-21 Wrap Media, LLC Authoring tool for the authoring of wrap packages of cards
US20170111542A1 (en) * 2014-04-02 2017-04-20 Canon Europa N.V. Print system, management server, client, method of operating a print system, method of operating a management server and method of operating a client
US20170123750A1 (en) * 2015-10-28 2017-05-04 Paypal, Inc. Private virtual object handling
US9752883B1 (en) * 2014-06-04 2017-09-05 Google Inc. Using current user context to determine mapping characteristics
US20170295250A1 (en) * 2016-04-06 2017-10-12 Snapchat, Inc. Messaging achievement pictograph display system
US9825898B2 (en) 2014-06-13 2017-11-21 Snap Inc. Prioritization of messages within a message collection
US9843720B1 (en) 2014-11-12 2017-12-12 Snap Inc. User interface for accessing media at a geographic location
US20170374003A1 (en) 2014-10-02 2017-12-28 Snapchat, Inc. Ephemeral gallery of ephemeral messages
US9858943B1 (en) 2017-05-09 2018-01-02 Sony Corporation Accessibility for the hearing impaired using measurement and object based audio
US9866999B1 (en) 2014-01-12 2018-01-09 Investment Asset Holdings Llc Location-based messaging
US9881094B2 (en) 2015-05-05 2018-01-30 Snap Inc. Systems and methods for automated local story generation and curation
US20180146134A1 (en) * 2014-02-21 2018-05-24 Colorvision International, Inc. Portable electronic device with a creative artworks picture application operating in response to beacons
US10051331B1 (en) * 2017-07-11 2018-08-14 Sony Corporation Quick accessibility profiles
US10102680B2 (en) 2015-10-30 2018-10-16 Snap Inc. Image based tracking in augmented reality systems
US10123166B2 (en) 2015-01-26 2018-11-06 Snap Inc. Content request by location
US10154192B1 (en) 2014-07-07 2018-12-11 Snap Inc. Apparatus and method for supplying content aware photo filters
US10157449B1 (en) 2015-01-09 2018-12-18 Snap Inc. Geo-location-based image filters
US10165402B1 (en) 2016-06-28 2018-12-25 Snap Inc. System to track engagement of media items
WO2019013008A1 (en) * 2017-07-14 2019-01-17 株式会社小松製作所 Vehicle management device, vehicle management method, and program
US10203855B2 (en) 2016-12-09 2019-02-12 Snap Inc. Customized user-controlled media overlays
US10219111B1 (en) 2018-04-18 2019-02-26 Snap Inc. Visitation tracking system
US10223397B1 (en) 2015-03-13 2019-03-05 Snap Inc. Social graph based co-location of network users
US10303427B2 (en) 2017-07-11 2019-05-28 Sony Corporation Moving audio from center speaker to peripheral speaker of display device for macular degeneration accessibility
US10319149B1 (en) 2017-02-17 2019-06-11 Snap Inc. Augmented reality anamorphosis system
US10327096B1 (en) 2018-03-06 2019-06-18 Snap Inc. Geo-fence selection system
US10334307B2 (en) 2011-07-12 2019-06-25 Snap Inc. Methods and systems of providing visual content editing functions
US10348662B2 (en) 2016-07-19 2019-07-09 Snap Inc. Generating customized electronic messaging graphics
US10354425B2 (en) 2015-12-18 2019-07-16 Snap Inc. Method and system for providing context relevant media augmentation
US10387514B1 (en) 2016-06-30 2019-08-20 Snap Inc. Automated content curation and communication
US10387730B1 (en) 2017-04-20 2019-08-20 Snap Inc. Augmented reality typography personalization system
US10402867B2 (en) * 2014-03-25 2019-09-03 Nanyang Technological University Computerized method and system for personalized storytelling
US10423983B2 (en) 2014-09-16 2019-09-24 Snap Inc. Determining targeting information based on a predictive targeting model
US10430838B1 (en) 2016-06-28 2019-10-01 Snap Inc. Methods and systems for generation, curation, and presentation of media collections with automated advertising
WO2019210172A1 (en) * 2018-04-27 2019-10-31 Sweny Tim System and method for evaluating and reserving rooms
US10474321B2 (en) 2015-11-30 2019-11-12 Snap Inc. Network resource location linking and visual content sharing
US10499191B1 (en) 2017-10-09 2019-12-03 Snap Inc. Context sensitive presentation of content
US20190373055A1 (en) * 2018-05-31 2019-12-05 Fuji Xerox Co., Ltd. Information processing apparatus and non-transitory computer readable medium storing program
US10523625B1 (en) 2017-03-09 2019-12-31 Snap Inc. Restricted group content collection
US20200045300A1 (en) * 2017-05-22 2020-02-06 Fyusion, Inc. Inertial measurement unit progress estimation
US10567906B1 (en) * 2018-11-28 2020-02-18 International Business Machines Corporation User adapted location based services
US10572681B1 (en) 2014-05-28 2020-02-25 Snap Inc. Apparatus and method for automated privacy protection in distributed images
US10580458B2 (en) 2014-12-19 2020-03-03 Snap Inc. Gallery of videos set to an audio time line
US10616239B2 (en) 2015-03-18 2020-04-07 Snap Inc. Geo-fence authorization provisioning
US10614828B1 (en) 2017-02-20 2020-04-07 Snap Inc. Augmented reality speech balloon system
WO2020070655A1 (en) * 2018-10-05 2020-04-09 Qua Qua Experiences Pvt. Ltd. System and method for creating personalized story
US10623666B2 (en) 2016-11-07 2020-04-14 Snap Inc. Selective identification and order of image modifiers
US10638256B1 (en) 2016-06-20 2020-04-28 Pipbin, Inc. System for distribution and display of mobile targeted augmented reality content
US10650702B2 (en) 2017-07-10 2020-05-12 Sony Corporation Modifying display region for people with loss of peripheral vision
US10657708B1 (en) 2015-11-30 2020-05-19 Snap Inc. Image and point cloud based tracking and in augmented reality systems
US10679389B2 (en) 2016-02-26 2020-06-09 Snap Inc. Methods and systems for generation, curation, and presentation of media collections
US10679393B2 (en) 2018-07-24 2020-06-09 Snap Inc. Conditional modification of augmented reality object
US10678818B2 (en) 2018-01-03 2020-06-09 Snap Inc. Tag distribution visualization system
US10740974B1 (en) 2017-09-15 2020-08-11 Snap Inc. Augmented reality system
US10805676B2 (en) 2017-07-10 2020-10-13 Sony Corporation Modifying display region for people with macular degeneration
US10805696B1 (en) 2016-06-20 2020-10-13 Pipbin, Inc. System for recording and targeting tagged content of user interest
US10817898B2 (en) 2015-08-13 2020-10-27 Placed, Llc Determining exposures to content presented by physical objects
US10824654B2 (en) * 2014-09-18 2020-11-03 Snap Inc. Geolocation-based pictographs
US10834525B2 (en) 2016-02-26 2020-11-10 Snap Inc. Generation, curation, and presentation of media collections
US10839219B1 (en) 2016-06-20 2020-11-17 Pipbin, Inc. System for curation, distribution and display of location-dependent augmented reality content
US10845954B2 (en) 2017-07-11 2020-11-24 Sony Corporation Presenting audio video display options as list or matrix
US10862951B1 (en) 2007-01-05 2020-12-08 Snap Inc. Real-time display of multiple images
US10885136B1 (en) 2018-02-28 2021-01-05 Snap Inc. Audience filtering system
US10911575B1 (en) 2015-05-05 2021-02-02 Snap Inc. Systems and methods for story and sub-story navigation
US10915911B2 (en) 2017-02-03 2021-02-09 Snap Inc. System to determine a price-schedule to distribute media content
US10928915B2 (en) * 2016-02-10 2021-02-23 Disney Enterprises, Inc. Distributed storytelling environment
US10933311B2 (en) 2018-03-14 2021-03-02 Snap Inc. Generating collectible items based on location information
US10948717B1 (en) 2015-03-23 2021-03-16 Snap Inc. Reducing boot time and power consumption in wearable display systems
US10952013B1 (en) 2017-04-27 2021-03-16 Snap Inc. Selective location-based identity communication
US10963529B1 (en) 2017-04-27 2021-03-30 Snap Inc. Location-based search mechanism in a graphical user interface
US10979752B1 (en) 2018-02-28 2021-04-13 Snap Inc. Generating media content items based on location information
US10993069B2 (en) 2015-07-16 2021-04-27 Snap Inc. Dynamically adaptive media content delivery
US10997760B2 (en) 2018-08-31 2021-05-04 Snap Inc. Augmented reality anthropomorphization system
US11017173B1 (en) 2017-12-22 2021-05-25 Snap Inc. Named entity recognition visual context and caption data
US11023514B2 (en) 2016-02-26 2021-06-01 Snap Inc. Methods and systems for generation, curation, and presentation of media collections
US11030787B2 (en) 2017-10-30 2021-06-08 Snap Inc. Mobile-based cartographic control of display content
US11038829B1 (en) 2014-10-02 2021-06-15 Snap Inc. Ephemeral gallery of ephemeral messages with opt-in permanence
US11037372B2 (en) 2017-03-06 2021-06-15 Snap Inc. Virtual vision system
US11044393B1 (en) 2016-06-20 2021-06-22 Pipbin, Inc. System for curation and display of location-dependent augmented reality content in an augmented estate system
US11128715B1 (en) 2019-12-30 2021-09-21 Snap Inc. Physical friend proximity in chat
US11163941B1 (en) 2018-03-30 2021-11-02 Snap Inc. Annotating a collection of media content items
US11170393B1 (en) 2017-04-11 2021-11-09 Snap Inc. System to calculate an engagement score of location based media content
US11182383B1 (en) 2012-02-24 2021-11-23 Placed, Llc System and method for data collection to validate location data
US11201981B1 (en) 2016-06-20 2021-12-14 Pipbin, Inc. System for notification of user accessibility of curated location-dependent content in an augmented estate
US11199957B1 (en) 2018-11-30 2021-12-14 Snap Inc. Generating customized avatars based on location information
US11206615B2 (en) 2019-05-30 2021-12-21 Snap Inc. Wearable device location systems
US11216869B2 (en) 2014-09-23 2022-01-04 Snap Inc. User interface to augment an image using geolocation
US11218838B2 (en) 2019-10-31 2022-01-04 Snap Inc. Focused map-based context information surfacing
US11228551B1 (en) 2020-02-12 2022-01-18 Snap Inc. Multiple gateway message exchange
US11232040B1 (en) 2017-04-28 2022-01-25 Snap Inc. Precaching unlockable data elements
US11234548B1 (en) * 2021-04-05 2022-02-01 Ray Franklin Interchangeable seasonal holiday decorative method and devices
US11250075B1 (en) 2017-02-17 2022-02-15 Snap Inc. Searching social media content
US11249617B1 (en) 2015-01-19 2022-02-15 Snap Inc. Multichannel system
US11249614B2 (en) 2019-03-28 2022-02-15 Snap Inc. Generating personalized map interface with enhanced icons
US11265273B1 (en) 2017-12-01 2022-03-01 Snap, Inc. Dynamic media overlay with smart widget
US11290851B2 (en) 2020-06-15 2022-03-29 Snap Inc. Location sharing using offline and online objects
US11297399B1 (en) 2017-03-27 2022-04-05 Snap Inc. Generating a stitched data stream
US11294936B1 (en) 2019-01-30 2022-04-05 Snap Inc. Adaptive spatial density based clustering
US11301117B2 (en) 2019-03-08 2022-04-12 Snap Inc. Contextual information in chat
US11314776B2 (en) 2020-06-15 2022-04-26 Snap Inc. Location sharing using friend list versions
US11343323B2 (en) 2019-12-31 2022-05-24 Snap Inc. Augmented reality objects registry
US11349796B2 (en) 2017-03-27 2022-05-31 Snap Inc. Generating a stitched data stream
US11361493B2 (en) 2019-04-01 2022-06-14 Snap Inc. Semantic texture mapping system
US11373635B2 (en) * 2018-01-10 2022-06-28 Sony Corporation Information processing apparatus that fades system utterance in response to interruption
US11372608B2 (en) 2014-12-19 2022-06-28 Snap Inc. Gallery of messages from individuals with a shared interest
US11388226B1 (en) 2015-01-13 2022-07-12 Snap Inc. Guided personal identity based actions
US11430091B2 (en) 2020-03-27 2022-08-30 Snap Inc. Location mapping for large scale augmented-reality
US11429618B2 (en) 2019-12-30 2022-08-30 Snap Inc. Surfacing augmented reality objects
US11455082B2 (en) 2018-09-28 2022-09-27 Snap Inc. Collaborative achievement interface
US20220319124A1 (en) * 2021-03-31 2022-10-06 Snap Inc. Auto-filling virtual content
US11475254B1 (en) 2017-09-08 2022-10-18 Snap Inc. Multimodal entity identification
US20220337945A1 (en) * 2019-08-14 2022-10-20 Harman International Industries, Incorporated Selective sound modification for video communication
US11483267B2 (en) 2020-06-15 2022-10-25 Snap Inc. Location sharing using different rate-limited links
US11503432B2 (en) 2020-06-15 2022-11-15 Snap Inc. Scalable real-time location sharing framework
US11500525B2 (en) 2019-02-25 2022-11-15 Snap Inc. Custom media overlay system
US11507614B1 (en) 2018-02-13 2022-11-22 Snap Inc. Icon based tagging
US11516167B2 (en) 2020-03-05 2022-11-29 Snap Inc. Storing data based on device location
US11558709B2 (en) 2018-11-30 2023-01-17 Snap Inc. Position service to determine relative position to map features
US11574431B2 (en) 2019-02-26 2023-02-07 Snap Inc. Avatar based on weather
US11601888B2 (en) 2021-03-29 2023-03-07 Snap Inc. Determining location using multi-source geolocation data
US11601783B2 (en) 2019-06-07 2023-03-07 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
US11606755B2 (en) 2019-05-30 2023-03-14 Snap Inc. Wearable device location systems architecture
US11616745B2 (en) 2017-01-09 2023-03-28 Snap Inc. Contextual generation and selection of customized media content
US11619501B2 (en) 2020-03-11 2023-04-04 Snap Inc. Avatar based on trip
US11625443B2 (en) 2014-06-05 2023-04-11 Snap Inc. Web document enhancement
US11631276B2 (en) 2016-03-31 2023-04-18 Snap Inc. Automated avatar generation
US11645324B2 (en) 2021-03-31 2023-05-09 Snap Inc. Location-based timeline media content system
US11676378B2 (en) 2020-06-29 2023-06-13 Snap Inc. Providing travel-based augmented reality content with a captured image
US11675831B2 (en) 2017-05-31 2023-06-13 Snap Inc. Geolocation based playlists
US11714535B2 (en) 2019-07-11 2023-08-01 Snap Inc. Edge gesture interface with smart interactions
US11734712B2 (en) 2012-02-24 2023-08-22 Foursquare Labs, Inc. Attributing in-store visits to media consumption based on data collected from user devices
US11741520B2 (en) * 2019-08-29 2023-08-29 Capital One Services, Llc Methods and systems for providing crowd information and repeat information related to an establishment using digital image information
US11751015B2 (en) 2019-01-16 2023-09-05 Snap Inc. Location-based context information sharing in a messaging system
US11776256B2 (en) 2020-03-27 2023-10-03 Snap Inc. Shared augmented reality system
US11785161B1 (en) 2016-06-20 2023-10-10 Pipbin, Inc. System for user accessibility of tagged curated augmented reality content
US11799811B2 (en) 2018-10-31 2023-10-24 Snap Inc. Messaging and gaming applications communication platform
US11809624B2 (en) 2019-02-13 2023-11-07 Snap Inc. Sleep detection in a location sharing system
US11816853B2 (en) 2016-08-30 2023-11-14 Snap Inc. Systems and methods for simultaneous localization and mapping
US11821742B2 (en) 2019-09-26 2023-11-21 Snap Inc. Travel based notifications
US11829834B2 (en) 2021-10-29 2023-11-28 Snap Inc. Extended QR code
US11843456B2 (en) 2016-10-24 2023-12-12 Snap Inc. Generating and displaying customized avatars in media overlays
US11842411B2 (en) 2017-04-27 2023-12-12 Snap Inc. Location-based virtual avatars
US11852554B1 (en) 2019-03-21 2023-12-26 Snap Inc. Barometer calibration in a location sharing system
US11860888B2 (en) 2018-05-22 2024-01-02 Snap Inc. Event detection system
US11870743B1 (en) 2017-01-23 2024-01-09 Snap Inc. Customized digital avatar accessories
US11868414B1 (en) 2019-03-14 2024-01-09 Snap Inc. Graph-based prediction for contact suggestion in a location sharing system
US11876941B1 (en) 2016-06-20 2024-01-16 Pipbin, Inc. Clickable augmented reality content manager, system, and network
US11877211B2 (en) 2019-01-14 2024-01-16 Snap Inc. Destination sharing in location sharing system
US11893208B2 (en) 2019-12-31 2024-02-06 Snap Inc. Combined map icon with action indicator
US11925869B2 (en) 2012-05-08 2024-03-12 Snap Inc. System and method for generating and displaying avatars
US11943192B2 (en) 2020-08-31 2024-03-26 Snap Inc. Co-location connection service
US11972529B2 (en) 2019-02-01 2024-04-30 Snap Inc. Augmented reality system
US12001750B2 (en) 2022-04-20 2024-06-04 Snap Inc. Location-based shared augmented reality experience system
US12020384B2 (en) 2022-06-21 2024-06-25 Snap Inc. Integrating augmented reality experiences with other components
US12020386B2 (en) 2022-06-23 2024-06-25 Snap Inc. Applying pregenerated virtual experiences in new location
US12026362B2 (en) 2021-05-19 2024-07-02 Snap Inc. Video editing application for mobile devices
US12063423B1 (en) * 2018-09-24 2024-08-13 Nova Modum Inc Enhanced interactive web features for displaying and editing digital content
US12143884B2 (en) 2012-02-24 2024-11-12 Foursquare Labs, Inc. Inference pipeline system and method
US12160792B2 (en) 2019-05-30 2024-12-03 Snap Inc. Wearable device location accuracy systems
US12164109B2 (en) 2022-04-29 2024-12-10 Snap Inc. AR/VR enabled contact lens
US12166839B2 (en) 2021-10-29 2024-12-10 Snap Inc. Accessing web-based fragments for display
US12216702B1 (en) 2015-12-08 2025-02-04 Snap Inc. Redirection to digital content based on image-search
US12225095B2 (en) 2023-03-27 2025-02-11 Snap Inc. Messaging achievement pictograph display system

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8806592B2 (en) 2011-01-21 2014-08-12 Authentify, Inc. Method for secure user and transaction authentication and risk management
US10581834B2 (en) * 2009-11-02 2020-03-03 Early Warning Services, Llc Enhancing transaction authentication with privacy and security enhanced internet geolocation and proximity
US9614845B2 (en) 2015-04-15 2017-04-04 Early Warning Services, Llc Anonymous authentication and remote wireless token access
US10084782B2 (en) 2015-09-21 2018-09-25 Early Warning Services, Llc Authenticator centralization and protection
US11589185B2 (en) * 2019-10-11 2023-02-21 David Hynds Method and system tool for playback of content on a mobile device using location data
US20210204116A1 (en) 2019-12-31 2021-07-01 Payfone, Inc. Identity verification platform
US12058528B2 (en) 2020-12-31 2024-08-06 Prove Identity, Inc. Identity network representation of communications device subscriber in a digital domain
US20220316901A1 (en) * 2021-04-02 2022-10-06 The Auto Club Group Artificial intelligence algorithm for implementing a crowd sourcing trip-planning system and method
US11823148B2 (en) 2021-04-15 2023-11-21 Bank Of America Corporation Augmented reality-enabled ATM for secure augmented reality check realization
US20230258459A1 (en) * 2022-02-17 2023-08-17 Bueller Rnds, Inc. Digital Wayfinding

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5850610A (en) * 1996-10-25 1998-12-15 Sonics Associates, Inc. Method and apparatus for providing zoned communications
US20020183072A1 (en) * 2001-04-17 2002-12-05 Galia Steinbach BeyondguideTM method and system
US20050192025A1 (en) * 2002-04-22 2005-09-01 Kaplan Richard D. Method and apparatus for an interactive tour-guide system
US20080174676A1 (en) * 2007-01-24 2008-07-24 Squilla John R Producing enhanced photographic products from images captured at known events
US20100063726A1 (en) * 2007-01-17 2010-03-11 Rob Marjenberg Contextually enhanced multi channel location based tour guidance system
US20110256881A1 (en) * 2010-04-20 2011-10-20 Huang Ronald K Context-based reverse geocoding
US20120113144A1 (en) * 2010-11-08 2012-05-10 Suranjit Adhikari Augmented reality virtual guide system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6157841A (en) 1997-09-18 2000-12-05 At&T Corp. Cellular phone network that provides location-based information
CA2308957A1 (en) 2000-05-12 2001-11-12 Extenso Tech Inc. Multimedia tourist guide
US7146179B2 (en) 2002-03-26 2006-12-05 Parulski Kenneth A Portable imaging device employing geographic information to facilitate image access and viewing
US20080129528A1 (en) 2004-11-16 2008-06-05 Michael Phipps Guthrie Apparatus and method for guided tour
US20070008321A1 (en) 2005-07-11 2007-01-11 Eastman Kodak Company Identifying collection images with special events
US7463977B2 (en) 2006-02-24 2008-12-09 Barz Adventures Lp Location-relevant real-time multimedia delivery and control and editing systems and methods
US9066199B2 (en) * 2007-06-28 2015-06-23 Apple Inc. Location-aware mobile device
US8295855B2 (en) * 2008-11-04 2012-10-23 International Business Machines Corporation GPS driven architecture for delivery of location based multimedia and method of use
US8566348B2 (en) * 2010-05-24 2013-10-22 Intersect Ptp, Inc. Systems and methods for collaborative storytelling in a virtual space
US8396485B2 (en) * 2010-11-09 2013-03-12 Apple Inc. Beacon-based geofencing
US8405740B2 (en) 2011-06-24 2013-03-26 Eastman Kodak Company Guidance for image capture at different locations
US20130191211A1 (en) 2012-01-24 2013-07-25 Timothy L. Nichols Customizing printed products based on travel paths
US9247306B2 (en) 2012-05-21 2016-01-26 Intellectual Ventures Fund 83 Llc Forming a multimedia product using video chat

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5850610A (en) * 1996-10-25 1998-12-15 Sonics Associates, Inc. Method and apparatus for providing zoned communications
US20020183072A1 (en) * 2001-04-17 2002-12-05 Galia Steinbach BeyondguideTM method and system
US20050192025A1 (en) * 2002-04-22 2005-09-01 Kaplan Richard D. Method and apparatus for an interactive tour-guide system
US20100063726A1 (en) * 2007-01-17 2010-03-11 Rob Marjenberg Contextually enhanced multi channel location based tour guidance system
US20080174676A1 (en) * 2007-01-24 2008-07-24 Squilla John R Producing enhanced photographic products from images captured at known events
US20110256881A1 (en) * 2010-04-20 2011-10-20 Huang Ronald K Context-based reverse geocoding
US20120113144A1 (en) * 2010-11-08 2012-05-10 Suranjit Adhikari Augmented reality virtual guide system

Cited By (401)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11588770B2 (en) 2007-01-05 2023-02-21 Snap Inc. Real-time display of multiple images
US10862951B1 (en) 2007-01-05 2020-12-08 Snap Inc. Real-time display of multiple images
US12212804B2 (en) 2011-07-12 2025-01-28 Snap Inc. Providing visual content editing functions
US10334307B2 (en) 2011-07-12 2019-06-25 Snap Inc. Methods and systems of providing visual content editing functions
US11451856B2 (en) 2011-07-12 2022-09-20 Snap Inc. Providing visual content editing functions
US10999623B2 (en) 2011-07-12 2021-05-04 Snap Inc. Providing visual content editing functions
US11750875B2 (en) 2011-07-12 2023-09-05 Snap Inc. Providing visual content editing functions
US11734712B2 (en) 2012-02-24 2023-08-22 Foursquare Labs, Inc. Attributing in-store visits to media consumption based on data collected from user devices
US12143884B2 (en) 2012-02-24 2024-11-12 Foursquare Labs, Inc. Inference pipeline system and method
US11182383B1 (en) 2012-02-24 2021-11-23 Placed, Llc System and method for data collection to validate location data
US9195994B1 (en) 2012-04-25 2015-11-24 Wells Fargo Bank, N.A. System and method for a mobile wallet
US12086790B1 (en) 2012-04-25 2024-09-10 Wells Fargo Bank, N.A. System and method for a mobile wallet
US10062076B1 (en) 2012-04-25 2018-08-28 Wells Fargo Bank, N.A. System and method for a mobile wallet
US11113686B1 (en) 2012-04-25 2021-09-07 Wells Fargo Bank, N.A. System and method for a mobile wallet
US9037509B1 (en) * 2012-04-25 2015-05-19 Wells Fargo Bank, N.A. System and method for a mobile wallet
US9311654B1 (en) 2012-04-25 2016-04-12 Wells Fargo Bank, N.A. System and method for a mobile wallet
US11925869B2 (en) 2012-05-08 2024-03-12 Snap Inc. System and method for generating and displaying avatars
US20150192988A1 (en) * 2014-01-06 2015-07-09 Hristo Aleksiev Augmented Reality System Incorporating Transforming Avatars
US9696796B2 (en) * 2014-01-06 2017-07-04 Playground Energy Ltd Augmented reality system incorporating transforming avatars
US20160202754A1 (en) * 2014-01-06 2016-07-14 Playground Energy Ltd Augmented Reality System Incorporating Transforming Avatars
US9323323B2 (en) * 2014-01-06 2016-04-26 Playground Energy Ltd Augmented reality system for playground equipment incorporating transforming avatars
US12127068B2 (en) 2014-01-12 2024-10-22 Investment Asset Holdings Llc Map interface with icon for location-based messages
US10349209B1 (en) 2014-01-12 2019-07-09 Investment Asset Holdings Llc Location-based messaging
US12200563B2 (en) 2014-01-12 2025-01-14 Investment Asset Holdings, Llc Map interface with message marker for location-based messages
US10080102B1 (en) 2014-01-12 2018-09-18 Investment Asset Holdings Llc Location-based messaging
US12041508B1 (en) 2014-01-12 2024-07-16 Investment Asset Holdings Llc Location-based messaging
US9866999B1 (en) 2014-01-12 2018-01-09 Investment Asset Holdings Llc Location-based messaging
US10594929B2 (en) * 2014-02-21 2020-03-17 Colorvision International, Inc. Portable electronic device with a creative artworks picture application operating in response to beacons
US20180146134A1 (en) * 2014-02-21 2018-05-24 Colorvision International, Inc. Portable electronic device with a creative artworks picture application operating in response to beacons
US10834312B2 (en) * 2014-02-21 2020-11-10 Colorvision International, Inc. Portable electronic device with a creative artworks picture application operating in response to beacons
US10402867B2 (en) * 2014-03-25 2019-09-03 Nanyang Technological University Computerized method and system for personalized storytelling
US20170111542A1 (en) * 2014-04-02 2017-04-20 Canon Europa N.V. Print system, management server, client, method of operating a print system, method of operating a management server and method of operating a client
US9843698B2 (en) * 2014-04-02 2017-12-12 Canon Europa N.V. Method and system for performing a print process using a map
US10572681B1 (en) 2014-05-28 2020-02-25 Snap Inc. Apparatus and method for automated privacy protection in distributed images
US11972014B2 (en) 2014-05-28 2024-04-30 Snap Inc. Apparatus and method for automated privacy protection in distributed images
US10990697B2 (en) 2014-05-28 2021-04-27 Snap Inc. Apparatus and method for automated privacy protection in distributed images
US9752883B1 (en) * 2014-06-04 2017-09-05 Google Inc. Using current user context to determine mapping characteristics
US11921805B2 (en) 2014-06-05 2024-03-05 Snap Inc. Web document enhancement
US11625443B2 (en) 2014-06-05 2023-04-11 Snap Inc. Web document enhancement
US9825898B2 (en) 2014-06-13 2017-11-21 Snap Inc. Prioritization of messages within a message collection
US10448201B1 (en) 2014-06-13 2019-10-15 Snap Inc. Prioritization of messages within a message collection
US10623891B2 (en) 2014-06-13 2020-04-14 Snap Inc. Prioritization of messages within a message collection
US10659914B1 (en) 2014-06-13 2020-05-19 Snap Inc. Geo-location based event gallery
US11166121B2 (en) 2014-06-13 2021-11-02 Snap Inc. Prioritization of messages within a message collection
US11317240B2 (en) 2014-06-13 2022-04-26 Snap Inc. Geo-location based event gallery
US10524087B1 (en) 2014-06-13 2019-12-31 Snap Inc. Message destination list mechanism
US10200813B1 (en) 2014-06-13 2019-02-05 Snap Inc. Geo-location based event gallery
US10779113B2 (en) 2014-06-13 2020-09-15 Snap Inc. Prioritization of messages within a message collection
US10182311B2 (en) 2014-06-13 2019-01-15 Snap Inc. Prioritization of messages within a message collection
US10602057B1 (en) 2014-07-07 2020-03-24 Snap Inc. Supplying content aware photo filters
US11595569B2 (en) 2014-07-07 2023-02-28 Snap Inc. Supplying content aware photo filters
US10432850B1 (en) 2014-07-07 2019-10-01 Snap Inc. Apparatus and method for supplying content aware photo filters
US11122200B2 (en) 2014-07-07 2021-09-14 Snap Inc. Supplying content aware photo filters
US10154192B1 (en) 2014-07-07 2018-12-11 Snap Inc. Apparatus and method for supplying content aware photo filters
US11849214B2 (en) 2014-07-07 2023-12-19 Snap Inc. Apparatus and method for supplying content aware photo filters
US9293132B2 (en) * 2014-08-06 2016-03-22 Honda Motor Co., Ltd. Dynamic geo-fencing for voice recognition dictionary
US11625755B1 (en) 2014-09-16 2023-04-11 Foursquare Labs, Inc. Determining targeting information based on a predictive targeting model
US10423983B2 (en) 2014-09-16 2019-09-24 Snap Inc. Determining targeting information based on a predictive targeting model
US11741136B2 (en) * 2014-09-18 2023-08-29 Snap Inc. Geolocation-based pictographs
US11281701B2 (en) * 2014-09-18 2022-03-22 Snap Inc. Geolocation-based pictographs
US20220318281A1 (en) * 2014-09-18 2022-10-06 Snap Inc. Geolocation-based pictographs
US10824654B2 (en) * 2014-09-18 2020-11-03 Snap Inc. Geolocation-based pictographs
US20160088546A1 (en) * 2014-09-19 2016-03-24 Intel Corporation Regulation via geofence boundary segment crossings
CN106575425A (en) * 2014-09-19 2017-04-19 英特尔公司 Regulation via geofence boundary segment crossings
US11216869B2 (en) 2014-09-23 2022-01-04 Snap Inc. User interface to augment an image using geolocation
US10476830B2 (en) 2014-10-02 2019-11-12 Snap Inc. Ephemeral gallery of ephemeral messages
US12155617B1 (en) 2014-10-02 2024-11-26 Snap Inc. Automated chronological display of ephemeral message gallery
US20170374003A1 (en) 2014-10-02 2017-12-28 Snapchat, Inc. Ephemeral gallery of ephemeral messages
US12155618B2 (en) 2014-10-02 2024-11-26 Snap Inc. Ephemeral message collection UI indicia
US11038829B1 (en) 2014-10-02 2021-06-15 Snap Inc. Ephemeral gallery of ephemeral messages with opt-in permanence
US11522822B1 (en) 2014-10-02 2022-12-06 Snap Inc. Ephemeral gallery elimination based on gallery and message timers
US12113764B2 (en) 2014-10-02 2024-10-08 Snap Inc. Automated management of ephemeral message collections
US11411908B1 (en) 2014-10-02 2022-08-09 Snap Inc. Ephemeral message gallery user interface with online viewing history indicia
US9465788B2 (en) 2014-10-09 2016-10-11 Wrap Media, LLC Authoring tool for the authoring of wrap packages of cards
US9600449B2 (en) 2014-10-09 2017-03-21 Wrap Media, LLC Authoring tool for the authoring of wrap packages of cards
US9448988B2 (en) * 2014-10-09 2016-09-20 Wrap Media Llc Authoring tool for the authoring of wrap packages of cards
US9442906B2 (en) * 2014-10-09 2016-09-13 Wrap Media, LLC Wrap descriptor for defining a wrap package of cards including a global component
US9418056B2 (en) * 2014-10-09 2016-08-16 Wrap Media, LLC Authoring tool for the authoring of wrap packages of cards
US9600464B2 (en) * 2014-10-09 2017-03-21 Wrap Media, LLC Authoring tool for the authoring of wrap packages of cards
US9652124B2 (en) 2014-10-31 2017-05-16 Microsoft Technology Licensing, Llc Use of beacons for assistance to users in interacting with their environments
WO2016069668A1 (en) * 2014-10-31 2016-05-06 Microsoft Technology Licensing, Llc User interface functionality for facilitating interaction between users and their environments
US9977573B2 (en) 2014-10-31 2018-05-22 Microsoft Technology Licensing, Llc Facilitating interaction between users and their environments using a headset having input mechanisms
US10048835B2 (en) 2014-10-31 2018-08-14 Microsoft Technology Licensing, Llc User interface functionality for facilitating interaction between users and their environments
US9612722B2 (en) 2014-10-31 2017-04-04 Microsoft Technology Licensing, Llc Facilitating interaction between users and their environments using sounds
US11190679B2 (en) 2014-11-12 2021-11-30 Snap Inc. Accessing media at a geographic location
US9843720B1 (en) 2014-11-12 2017-12-12 Snap Inc. User interface for accessing media at a geographic location
US11956533B2 (en) 2014-11-12 2024-04-09 Snap Inc. Accessing media at a geographic location
US10616476B1 (en) 2014-11-12 2020-04-07 Snap Inc. User interface for accessing media at a geographic location
US10811053B2 (en) 2014-12-19 2020-10-20 Snap Inc. Routing messages by message parameter
US11803345B2 (en) 2014-12-19 2023-10-31 Snap Inc. Gallery of messages from individuals with a shared interest
US11783862B2 (en) 2014-12-19 2023-10-10 Snap Inc. Routing messages by message parameter
US10580458B2 (en) 2014-12-19 2020-03-03 Snap Inc. Gallery of videos set to an audio time line
US11250887B2 (en) 2014-12-19 2022-02-15 Snap Inc. Routing messages by message parameter
US11372608B2 (en) 2014-12-19 2022-06-28 Snap Inc. Gallery of messages from individuals with a shared interest
US11301960B2 (en) 2015-01-09 2022-04-12 Snap Inc. Object recognition based image filters
US12056182B2 (en) 2015-01-09 2024-08-06 Snap Inc. Object recognition based image overlays
US10157449B1 (en) 2015-01-09 2018-12-18 Snap Inc. Geo-location-based image filters
US10380720B1 (en) 2015-01-09 2019-08-13 Snap Inc. Location-based image filters
US11734342B2 (en) 2015-01-09 2023-08-22 Snap Inc. Object recognition based image overlays
US11388226B1 (en) 2015-01-13 2022-07-12 Snap Inc. Guided personal identity based actions
US11962645B2 (en) 2015-01-13 2024-04-16 Snap Inc. Guided personal identity based actions
US11249617B1 (en) 2015-01-19 2022-02-15 Snap Inc. Multichannel system
US11528579B2 (en) 2015-01-26 2022-12-13 Snap Inc. Content request by location
US11910267B2 (en) 2015-01-26 2024-02-20 Snap Inc. Content request by location
US10123166B2 (en) 2015-01-26 2018-11-06 Snap Inc. Content request by location
US10536800B1 (en) 2015-01-26 2020-01-14 Snap Inc. Content request by location
US10932085B1 (en) 2015-01-26 2021-02-23 Snap Inc. Content request by location
US10223397B1 (en) 2015-03-13 2019-03-05 Snap Inc. Social graph based co-location of network users
US10893055B2 (en) 2015-03-18 2021-01-12 Snap Inc. Geo-fence authorization provisioning
US10616239B2 (en) 2015-03-18 2020-04-07 Snap Inc. Geo-fence authorization provisioning
US11902287B2 (en) 2015-03-18 2024-02-13 Snap Inc. Geo-fence authorization provisioning
US10948717B1 (en) 2015-03-23 2021-03-16 Snap Inc. Reducing boot time and power consumption in wearable display systems
US11662576B2 (en) 2015-03-23 2023-05-30 Snap Inc. Reducing boot time and power consumption in displaying data content
US11320651B2 (en) 2015-03-23 2022-05-03 Snap Inc. Reducing boot time and power consumption in displaying data content
US12164105B2 (en) 2015-03-23 2024-12-10 Snap Inc. Reducing boot time and power consumption in displaying data content
US20160284112A1 (en) * 2015-03-26 2016-09-29 Wrap Media, LLC Authoring tool for the mixing of cards of wrap packages
US9582917B2 (en) * 2015-03-26 2017-02-28 Wrap Media, LLC Authoring tool for the mixing of cards of wrap packages
US9600803B2 (en) 2015-03-26 2017-03-21 Wrap Media, LLC Mobile-first authoring tool for the authoring of wrap packages
CN113282771A (en) * 2015-05-05 2021-08-20 Snap Inc. Automated local story generation and curation
US9881094B2 (en) 2015-05-05 2018-01-30 Snap Inc. Systems and methods for automated local story generation and curation
EP4361845A1 (en) * 2015-05-05 2024-05-01 Snap Inc. Automated local story generation and curation
US11449539B2 (en) 2015-05-05 2022-09-20 Snap Inc. Automated local story generation and curation
US10911575B1 (en) 2015-05-05 2021-02-02 Snap Inc. Systems and methods for story and sub-story navigation
US11496544B2 (en) 2015-05-05 2022-11-08 Snap Inc. Story and sub-story navigation
EP3292477A4 (en) * 2015-05-05 2018-04-11 Snap Inc. Automated local story generation and curation
US11392633B2 (en) 2015-05-05 2022-07-19 Snap Inc. Systems and methods for automated local story generation and curation
US10592574B2 (en) 2015-05-05 2020-03-17 Snap Inc. Systems and methods for automated local story generation and curation
US10993069B2 (en) 2015-07-16 2021-04-27 Snap Inc. Dynamically adaptive media content delivery
US20170041765A1 (en) * 2015-08-03 2017-02-09 Fujitsu Limited Information delivery method, information delivery apparatus, and storage medium
US10123180B2 (en) * 2015-08-03 2018-11-06 Fujitsu Limited Information delivery method, information delivery apparatus, and storage medium
JP2017033340A (en) * 2015-08-03 2017-02-09 Fujitsu Limited Information distribution method, information distribution program, and information distribution device
US11961116B2 (en) 2015-08-13 2024-04-16 Foursquare Labs, Inc. Determining exposures to content presented by physical objects
US10817898B2 (en) 2015-08-13 2020-10-27 Placed, Llc Determining exposures to content presented by physical objects
US10228893B2 (en) * 2015-10-28 2019-03-12 Paypal, Inc. Private virtual object handling
US11669292B2 (en) 2015-10-28 2023-06-06 Paypal, Inc. Private virtual object handling
US20170123750A1 (en) * 2015-10-28 2017-05-04 Paypal, Inc. Private virtual object handling
US11269581B2 (en) 2015-10-28 2022-03-08 Paypal, Inc. Private virtual object handling
US10733802B2 (en) 2015-10-30 2020-08-04 Snap Inc. Image based tracking in augmented reality systems
US11769307B2 (en) 2015-10-30 2023-09-26 Snap Inc. Image based tracking in augmented reality systems
US10366543B1 (en) 2015-10-30 2019-07-30 Snap Inc. Image based tracking in augmented reality systems
US11315331B2 (en) 2015-10-30 2022-04-26 Snap Inc. Image based tracking in augmented reality systems
US10102680B2 (en) 2015-10-30 2018-10-16 Snap Inc. Image based tracking in augmented reality systems
US11380051B2 (en) 2015-11-30 2022-07-05 Snap Inc. Image and point cloud based tracking and in augmented reality systems
US11599241B2 (en) 2015-11-30 2023-03-07 Snap Inc. Network resource location linking and visual content sharing
US12079931B2 (en) 2015-11-30 2024-09-03 Snap Inc. Image and point cloud based tracking and in augmented reality systems
US10657708B1 (en) 2015-11-30 2020-05-19 Snap Inc. Image and point cloud based tracking and in augmented reality systems
US10997783B2 (en) 2015-11-30 2021-05-04 Snap Inc. Image and point cloud based tracking and in augmented reality systems
US10474321B2 (en) 2015-11-30 2019-11-12 Snap Inc. Network resource location linking and visual content sharing
US12216702B1 (en) 2015-12-08 2025-02-04 Snap Inc. Redirection to digital content based on image-search
US11468615B2 (en) 2015-12-18 2022-10-11 Snap Inc. Media overlay publication system
US11830117B2 (en) 2015-12-18 2023-11-28 Snap Inc. Media overlay publication system
US10354425B2 (en) 2015-12-18 2019-07-16 Snap Inc. Method and system for providing context relevant media augmentation
US10928915B2 (en) * 2016-02-10 2021-02-23 Disney Enterprises, Inc. Distributed storytelling environment
US11611846B2 (en) 2016-02-26 2023-03-21 Snap Inc. Generation, curation, and presentation of media collections
US11023514B2 (en) 2016-02-26 2021-06-01 Snap Inc. Methods and systems for generation, curation, and presentation of media collections
US11197123B2 (en) 2016-02-26 2021-12-07 Snap Inc. Generation, curation, and presentation of media collections
US10679389B2 (en) 2016-02-26 2020-06-09 Snap Inc. Methods and systems for generation, curation, and presentation of media collections
US11889381B2 (en) 2016-02-26 2024-01-30 Snap Inc. Generation, curation, and presentation of media collections
US10834525B2 (en) 2016-02-26 2020-11-10 Snap Inc. Generation, curation, and presentation of media collections
US11631276B2 (en) 2016-03-31 2023-04-18 Snap Inc. Automated avatar generation
US11627194B2 (en) 2016-04-06 2023-04-11 Snap Inc. Messaging achievement pictograph display system
US10686899B2 (en) * 2016-04-06 2020-06-16 Snap Inc. Messaging achievement pictograph display system
US20170295250A1 (en) * 2016-04-06 2017-10-12 Snapchat, Inc. Messaging achievement pictograph display system
US10805696B1 (en) 2016-06-20 2020-10-13 Pipbin, Inc. System for recording and targeting tagged content of user interest
US11785161B1 (en) 2016-06-20 2023-10-10 Pipbin, Inc. System for user accessibility of tagged curated augmented reality content
US10638256B1 (en) 2016-06-20 2020-04-28 Pipbin, Inc. System for distribution and display of mobile targeted augmented reality content
US11044393B1 (en) 2016-06-20 2021-06-22 Pipbin, Inc. System for curation and display of location-dependent augmented reality content in an augmented estate system
US11876941B1 (en) 2016-06-20 2024-01-16 Pipbin, Inc. Clickable augmented reality content manager, system, and network
US10839219B1 (en) 2016-06-20 2020-11-17 Pipbin, Inc. System for curation, distribution and display of location-dependent augmented reality content
US10992836B2 (en) 2016-06-20 2021-04-27 Pipbin, Inc. Augmented property system of curated augmented reality media elements
US12192426B2 (en) 2016-06-20 2025-01-07 Pipbin, Inc. Device and system for recording and reading augmented reality content
US11201981B1 (en) 2016-06-20 2021-12-14 Pipbin, Inc. System for notification of user accessibility of curated location-dependent content in an augmented estate
US11640625B2 (en) 2016-06-28 2023-05-02 Snap Inc. Generation, curation, and presentation of media collections with automated advertising
US10219110B2 (en) 2016-06-28 2019-02-26 Snap Inc. System to track engagement of media items
US10885559B1 (en) 2016-06-28 2021-01-05 Snap Inc. Generation, curation, and presentation of media collections with automated advertising
US10506371B2 (en) 2016-06-28 2019-12-10 Snap Inc. System to track engagement of media items
US10430838B1 (en) 2016-06-28 2019-10-01 Snap Inc. Methods and systems for generation, curation, and presentation of media collections with automated advertising
US12033191B2 (en) 2016-06-28 2024-07-09 Snap Inc. Generation, curation, and presentation of media collections with automated advertising
US10165402B1 (en) 2016-06-28 2018-12-25 Snap Inc. System to track engagement of media items
US10735892B2 (en) 2016-06-28 2020-08-04 Snap Inc. System to track engagement of media items
US10785597B2 (en) 2016-06-28 2020-09-22 Snap Inc. System to track engagement of media items
US11445326B2 (en) 2016-06-28 2022-09-13 Snap Inc. Track engagement of media items
US10327100B1 (en) 2016-06-28 2019-06-18 Snap Inc. System to track engagement of media items
US10387514B1 (en) 2016-06-30 2019-08-20 Snap Inc. Automated content curation and communication
US11080351B1 (en) 2016-06-30 2021-08-03 Snap Inc. Automated content curation and communication
US11895068B2 (en) 2016-06-30 2024-02-06 Snap Inc. Automated content curation and communication
US10348662B2 (en) 2016-07-19 2019-07-09 Snap Inc. Generating customized electronic messaging graphics
US11509615B2 (en) 2016-07-19 2022-11-22 Snap Inc. Generating customized electronic messaging graphics
US12002232B2 (en) 2016-08-30 2024-06-04 Snap Inc. Systems and methods for simultaneous localization and mapping
US11816853B2 (en) 2016-08-30 2023-11-14 Snap Inc. Systems and methods for simultaneous localization and mapping
US12113760B2 (en) 2016-10-24 2024-10-08 Snap Inc. Generating and displaying customized avatars in media overlays
US11843456B2 (en) 2016-10-24 2023-12-12 Snap Inc. Generating and displaying customized avatars in media overlays
US12206635B2 (en) 2016-10-24 2025-01-21 Snap Inc. Generating and displaying customized avatars in electronic messages
US11876762B1 (en) 2016-10-24 2024-01-16 Snap Inc. Generating and displaying customized avatars in media overlays
US11233952B2 (en) 2016-11-07 2022-01-25 Snap Inc. Selective identification and order of image modifiers
US10623666B2 (en) 2016-11-07 2020-04-14 Snap Inc. Selective identification and order of image modifiers
US11750767B2 (en) 2016-11-07 2023-09-05 Snap Inc. Selective identification and order of image modifiers
US12099707B2 (en) 2016-12-09 2024-09-24 Snap Inc. Customized media overlays
US10203855B2 (en) 2016-12-09 2019-02-12 Snap Inc. Customized user-controlled media overlays
US11397517B2 (en) 2016-12-09 2022-07-26 Snap Inc. Customized media overlays
US10754525B1 (en) 2016-12-09 2020-08-25 Snap Inc. Customized media overlays
US12028301B2 (en) 2017-01-09 2024-07-02 Snap Inc. Contextual generation and selection of customized media content
US11616745B2 (en) 2017-01-09 2023-03-28 Snap Inc. Contextual generation and selection of customized media content
US11870743B1 (en) 2017-01-23 2024-01-09 Snap Inc. Customized digital avatar accessories
US10915911B2 (en) 2017-02-03 2021-02-09 Snap Inc. System to determine a price-schedule to distribute media content
US10319149B1 (en) 2017-02-17 2019-06-11 Snap Inc. Augmented reality anamorphosis system
US11250075B1 (en) 2017-02-17 2022-02-15 Snap Inc. Searching social media content
US12050654B2 (en) 2017-02-17 2024-07-30 Snap Inc. Searching social media content
US11861795B1 (en) 2017-02-17 2024-01-02 Snap Inc. Augmented reality anamorphosis system
US11720640B2 (en) 2017-02-17 2023-08-08 Snap Inc. Searching social media content
US12197884B2 (en) 2017-02-20 2025-01-14 Snap Inc. Augmented reality speech balloon system
US11748579B2 (en) 2017-02-20 2023-09-05 Snap Inc. Augmented reality speech balloon system
US10614828B1 (en) 2017-02-20 2020-04-07 Snap Inc. Augmented reality speech balloon system
US11189299B1 (en) 2017-02-20 2021-11-30 Snap Inc. Augmented reality speech balloon system
US11961196B2 (en) 2017-03-06 2024-04-16 Snap Inc. Virtual vision system
US11670057B2 (en) 2017-03-06 2023-06-06 Snap Inc. Virtual vision system
US11037372B2 (en) 2017-03-06 2021-06-15 Snap Inc. Virtual vision system
US11258749B2 (en) 2017-03-09 2022-02-22 Snap Inc. Restricted group content collection
US12047344B2 (en) 2017-03-09 2024-07-23 Snap Inc. Restricted group content collection
US10523625B1 (en) 2017-03-09 2019-12-31 Snap Inc. Restricted group content collection
US10887269B1 (en) 2017-03-09 2021-01-05 Snap Inc. Restricted group content collection
US11349796B2 (en) 2017-03-27 2022-05-31 Snap Inc. Generating a stitched data stream
US11558678B2 (en) 2017-03-27 2023-01-17 Snap Inc. Generating a stitched data stream
US11297399B1 (en) 2017-03-27 2022-04-05 Snap Inc. Generating a stitched data stream
US11170393B1 (en) 2017-04-11 2021-11-09 Snap Inc. System to calculate an engagement score of location based media content
US10387730B1 (en) 2017-04-20 2019-08-20 Snap Inc. Augmented reality typography personalization system
US11195018B1 (en) 2017-04-20 2021-12-07 Snap Inc. Augmented reality typography personalization system
US12033253B2 (en) 2017-04-20 2024-07-09 Snap Inc. Augmented reality typography personalization system
US11782574B2 (en) 2017-04-27 2023-10-10 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US11385763B2 (en) 2017-04-27 2022-07-12 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US12131003B2 (en) 2017-04-27 2024-10-29 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US11409407B2 (en) 2017-04-27 2022-08-09 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US11451956B1 (en) 2017-04-27 2022-09-20 Snap Inc. Location privacy management on map-based social media platforms
US12112013B2 (en) 2017-04-27 2024-10-08 Snap Inc. Location privacy management on map-based social media platforms
US10952013B1 (en) 2017-04-27 2021-03-16 Snap Inc. Selective location-based identity communication
US11893647B2 (en) 2017-04-27 2024-02-06 Snap Inc. Location-based virtual avatars
US11474663B2 (en) 2017-04-27 2022-10-18 Snap Inc. Location-based search mechanism in a graphical user interface
US11392264B1 (en) 2017-04-27 2022-07-19 Snap Inc. Map-based graphical user interface for multi-type social media galleries
US10963529B1 (en) 2017-04-27 2021-03-30 Snap Inc. Location-based search mechanism in a graphical user interface
US12086381B2 (en) 2017-04-27 2024-09-10 Snap Inc. Map-based graphical user interface for multi-type social media galleries
US11418906B2 (en) 2017-04-27 2022-08-16 Snap Inc. Selective location-based identity communication
US11556221B2 (en) 2017-04-27 2023-01-17 Snap Inc. Friend location sharing mechanism for social media platforms
US11842411B2 (en) 2017-04-27 2023-12-12 Snap Inc. Location-based virtual avatars
US11995288B2 (en) 2017-04-27 2024-05-28 Snap Inc. Location-based search mechanism in a graphical user interface
US12058583B2 (en) 2017-04-27 2024-08-06 Snap Inc. Selective location-based identity communication
US11232040B1 (en) 2017-04-28 2022-01-25 Snap Inc. Precaching unlockable data elements
US9858943B1 (en) 2017-05-09 2018-01-02 Sony Corporation Accessibility for the hearing impaired using measurement and object based audio
US20200045300A1 (en) * 2017-05-22 2020-02-06 Fyusion, Inc. Inertial measurement unit progress estimation
US10645371B2 (en) * 2017-05-22 2020-05-05 Fyusion, Inc. Inertial measurement unit progress estimation
US12189685B2 (en) 2017-05-31 2025-01-07 Snap Inc. Geolocation based playlists
US11675831B2 (en) 2017-05-31 2023-06-13 Snap Inc. Geolocation based playlists
US10650702B2 (en) 2017-07-10 2020-05-12 Sony Corporation Modifying display region for people with loss of peripheral vision
US10805676B2 (en) 2017-07-10 2020-10-13 Sony Corporation Modifying display region for people with macular degeneration
US10051331B1 (en) * 2017-07-11 2018-08-14 Sony Corporation Quick accessibility profiles
US10845954B2 (en) 2017-07-11 2020-11-24 Sony Corporation Presenting audio video display options as list or matrix
US10303427B2 (en) 2017-07-11 2019-05-28 Sony Corporation Moving audio from center speaker to peripheral speaker of display device for macular degeneration accessibility
CN110574090A (en) * 2017-07-14 2019-12-13 Komatsu Ltd. Vehicle management device, vehicle management method, and program
JPWO2019013008A1 (en) * 2017-07-14 2019-11-07 Komatsu Ltd. Vehicle management apparatus, vehicle management method, and program
WO2019013008A1 (en) * 2017-07-14 2019-01-17 Komatsu Ltd. Vehicle management device, vehicle management method, and program
US11109183B2 (en) * 2017-07-14 2021-08-31 Komatsu Ltd. Vehicle management device, vehicle management method, and program
US11475254B1 (en) 2017-09-08 2022-10-18 Snap Inc. Multimodal entity identification
US12164603B2 (en) 2017-09-08 2024-12-10 Snap Inc. Multimodal entity identification
US11721080B2 (en) 2017-09-15 2023-08-08 Snap Inc. Augmented reality system
US10740974B1 (en) 2017-09-15 2020-08-11 Snap Inc. Augmented reality system
US11335067B2 (en) 2017-09-15 2022-05-17 Snap Inc. Augmented reality system
US11617056B2 (en) 2017-10-09 2023-03-28 Snap Inc. Context sensitive presentation of content
US10499191B1 (en) 2017-10-09 2019-12-03 Snap Inc. Context sensitive presentation of content
US12010582B2 (en) 2017-10-09 2024-06-11 Snap Inc. Context sensitive presentation of content
US11006242B1 (en) 2017-10-09 2021-05-11 Snap Inc. Context sensitive presentation of content
US11030787B2 (en) 2017-10-30 2021-06-08 Snap Inc. Mobile-based cartographic control of display content
US11670025B2 (en) 2017-10-30 2023-06-06 Snap Inc. Mobile-based cartographic control of display content
US11943185B2 (en) 2017-12-01 2024-03-26 Snap Inc. Dynamic media overlay with smart widget
US11265273B1 (en) 2017-12-01 2022-03-01 Snap, Inc. Dynamic media overlay with smart widget
US11558327B2 (en) 2017-12-01 2023-01-17 Snap Inc. Dynamic media overlay with smart widget
US11687720B2 (en) 2017-12-22 2023-06-27 Snap Inc. Named entity recognition visual context and caption data
US12056454B2 (en) 2017-12-22 2024-08-06 Snap Inc. Named entity recognition visual context and caption data
US11017173B1 (en) 2017-12-22 2021-05-25 Snap Inc. Named entity recognition visual context and caption data
US11487794B2 (en) 2018-01-03 2022-11-01 Snap Inc. Tag distribution visualization system
US10678818B2 (en) 2018-01-03 2020-06-09 Snap Inc. Tag distribution visualization system
US11983215B2 (en) 2018-01-03 2024-05-14 Snap Inc. Tag distribution visualization system
US11373635B2 (en) * 2018-01-10 2022-06-28 Sony Corporation Information processing apparatus that fades system utterance in response to interruption
US11841896B2 (en) 2018-02-13 2023-12-12 Snap Inc. Icon based tagging
US11507614B1 (en) 2018-02-13 2022-11-22 Snap Inc. Icon based tagging
US10885136B1 (en) 2018-02-28 2021-01-05 Snap Inc. Audience filtering system
US10979752B1 (en) 2018-02-28 2021-04-13 Snap Inc. Generating media content items based on location information
US11523159B2 (en) 2018-02-28 2022-12-06 Snap Inc. Generating media content items based on location information
US10524088B2 (en) 2018-03-06 2019-12-31 Snap Inc. Geo-fence selection system
US11044574B2 (en) 2018-03-06 2021-06-22 Snap Inc. Geo-fence selection system
US11570572B2 (en) 2018-03-06 2023-01-31 Snap Inc. Geo-fence selection system
US10327096B1 (en) 2018-03-06 2019-06-18 Snap Inc. Geo-fence selection system
US11722837B2 (en) 2018-03-06 2023-08-08 Snap Inc. Geo-fence selection system
US11491393B2 (en) 2018-03-14 2022-11-08 Snap Inc. Generating collectible items based on location information
US11998833B2 (en) 2018-03-14 2024-06-04 Snap Inc. Generating collectible items based on location information
US10933311B2 (en) 2018-03-14 2021-03-02 Snap Inc. Generating collectible items based on location information
US12056441B2 (en) 2018-03-30 2024-08-06 Snap Inc. Annotating a collection of media content items
US11163941B1 (en) 2018-03-30 2021-11-02 Snap Inc. Annotating a collection of media content items
US10681491B1 (en) 2018-04-18 2020-06-09 Snap Inc. Visitation tracking system
US11683657B2 (en) 2018-04-18 2023-06-20 Snap Inc. Visitation tracking system
US10779114B2 (en) 2018-04-18 2020-09-15 Snap Inc. Visitation tracking system
US10219111B1 (en) 2018-04-18 2019-02-26 Snap Inc. Visitation tracking system
US10924886B2 (en) 2018-04-18 2021-02-16 Snap Inc. Visitation tracking system
US11297463B2 (en) 2018-04-18 2022-04-05 Snap Inc. Visitation tracking system
US10448199B1 (en) 2018-04-18 2019-10-15 Snap Inc. Visitation tracking system
US12035198B2 (en) 2018-04-18 2024-07-09 Snap Inc. Visitation tracking system
WO2019210172A1 (en) * 2018-04-27 2019-10-31 Sweny Tim System and method for evaluating and reserving rooms
US11860888B2 (en) 2018-05-22 2024-01-02 Snap Inc. Event detection system
US11616833B2 (en) * 2018-05-31 2023-03-28 Fujifilm Business Innovation Corp. Information processing apparatus and non-transitory computer readable medium storing program for service invitation
US20190373055A1 (en) * 2018-05-31 2019-12-05 Fuji Xerox Co., Ltd. Information processing apparatus and non-transitory computer readable medium storing program
US12039649B2 (en) 2018-07-24 2024-07-16 Snap Inc. Conditional modification of augmented reality object
US11670026B2 (en) 2018-07-24 2023-06-06 Snap Inc. Conditional modification of augmented reality object
US11367234B2 (en) 2018-07-24 2022-06-21 Snap Inc. Conditional modification of augmented reality object
US10789749B2 (en) 2018-07-24 2020-09-29 Snap Inc. Conditional modification of augmented reality object
US10943381B2 (en) 2018-07-24 2021-03-09 Snap Inc. Conditional modification of augmented reality object
US10679393B2 (en) 2018-07-24 2020-06-09 Snap Inc. Conditional modification of augmented reality object
US10997760B2 (en) 2018-08-31 2021-05-04 Snap Inc. Augmented reality anthropomorphization system
US11676319B2 (en) 2018-08-31 2023-06-13 Snap Inc. Augmented reality anthropomorphization system
US11450050B2 (en) 2018-08-31 2022-09-20 Snap Inc. Augmented reality anthropomorphization system
US12063423B1 (en) * 2018-09-24 2024-08-13 Nova Modum Inc Enhanced interactive web features for displaying and editing digital content
US11455082B2 (en) 2018-09-28 2022-09-27 Snap Inc. Collaborative achievement interface
US11704005B2 (en) 2018-09-28 2023-07-18 Snap Inc. Collaborative achievement interface
US12105938B2 (en) 2018-09-28 2024-10-01 Snap Inc. Collaborative achievement interface
WO2020070655A1 (en) * 2018-10-05 2020-04-09 Qua Qua Experiences Pvt. Ltd. System and method for creating personalized story
US11799811B2 (en) 2018-10-31 2023-10-24 Snap Inc. Messaging and gaming applications communication platform
US10567906B1 (en) * 2018-11-28 2020-02-18 International Business Machines Corporation User adapted location based services
US11159911B2 (en) 2018-11-28 2021-10-26 International Business Machines Corporation User adapted location based services
US11812335B2 (en) 2018-11-30 2023-11-07 Snap Inc. Position service to determine relative position to map features
US11558709B2 (en) 2018-11-30 2023-01-17 Snap Inc. Position service to determine relative position to map features
US11698722B2 (en) 2018-11-30 2023-07-11 Snap Inc. Generating customized avatars based on location information
US11199957B1 (en) 2018-11-30 2021-12-14 Snap Inc. Generating customized avatars based on location information
US12153788B2 (en) 2018-11-30 2024-11-26 Snap Inc. Generating customized avatars based on location information
US12213028B2 (en) 2019-01-14 2025-01-28 Snap Inc. Destination sharing in location sharing system
US11877211B2 (en) 2019-01-14 2024-01-16 Snap Inc. Destination sharing in location sharing system
US12192854B2 (en) 2019-01-16 2025-01-07 Snap Inc. Location-based context information sharing in a messaging system
US11751015B2 (en) 2019-01-16 2023-09-05 Snap Inc. Location-based context information sharing in a messaging system
US11693887B2 (en) 2019-01-30 2023-07-04 Snap Inc. Adaptive spatial density based clustering
US11294936B1 (en) 2019-01-30 2022-04-05 Snap Inc. Adaptive spatial density based clustering
US11972529B2 (en) 2019-02-01 2024-04-30 Snap Inc. Augmented reality system
US11809624B2 (en) 2019-02-13 2023-11-07 Snap Inc. Sleep detection in a location sharing system
US11954314B2 (en) 2019-02-25 2024-04-09 Snap Inc. Custom media overlay system
US11500525B2 (en) 2019-02-25 2022-11-15 Snap Inc. Custom media overlay system
US11574431B2 (en) 2019-02-26 2023-02-07 Snap Inc. Avatar based on weather
US11301117B2 (en) 2019-03-08 2022-04-12 Snap Inc. Contextual information in chat
US11868414B1 (en) 2019-03-14 2024-01-09 Snap Inc. Graph-based prediction for contact suggestion in a location sharing system
US12141215B2 (en) 2019-03-14 2024-11-12 Snap Inc. Graph-based prediction for contact suggestion in a location sharing system
US11852554B1 (en) 2019-03-21 2023-12-26 Snap Inc. Barometer calibration in a location sharing system
US12210725B2 (en) 2019-03-28 2025-01-28 Snap Inc. Generating personalized map interface with enhanced icons
US11249614B2 (en) 2019-03-28 2022-02-15 Snap Inc. Generating personalized map interface with enhanced icons
US11740760B2 (en) 2019-03-28 2023-08-29 Snap Inc. Generating personalized map interface with enhanced icons
US11361493B2 (en) 2019-04-01 2022-06-14 Snap Inc. Semantic texture mapping system
US12039658B2 (en) 2019-04-01 2024-07-16 Snap Inc. Semantic texture mapping system
US12160792B2 (en) 2019-05-30 2024-12-03 Snap Inc. Wearable device location accuracy systems
US11206615B2 (en) 2019-05-30 2021-12-21 Snap Inc. Wearable device location systems
US11606755B2 (en) 2019-05-30 2023-03-14 Snap Inc. Wearable device location systems architecture
US11785549B2 (en) 2019-05-30 2023-10-10 Snap Inc. Wearable device location systems
US12207199B2 (en) 2019-05-30 2025-01-21 Snap Inc. Wearable device location systems
US11963105B2 (en) 2019-05-30 2024-04-16 Snap Inc. Wearable device location systems architecture
US11601783B2 (en) 2019-06-07 2023-03-07 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
US11917495B2 (en) 2019-06-07 2024-02-27 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
US11714535B2 (en) 2019-07-11 2023-08-01 Snap Inc. Edge gesture interface with smart interactions
US12147654B2 (en) 2019-07-11 2024-11-19 Snap Inc. Edge gesture interface with smart interactions
US20220337945A1 (en) * 2019-08-14 2022-10-20 Harman International Industries, Incorporated Selective sound modification for video communication
US20230360098A1 (en) * 2019-08-29 2023-11-09 Capital One Services, Llc Methods and systems for providing information about a location with image analysis
US11741520B2 (en) * 2019-08-29 2023-08-29 Capital One Services, Llc Methods and systems for providing crowd information and repeat information related to an establishment using digital image information
US11821742B2 (en) 2019-09-26 2023-11-21 Snap Inc. Travel based notifications
US11218838B2 (en) 2019-10-31 2022-01-04 Snap Inc. Focused map-based context information surfacing
US11429618B2 (en) 2019-12-30 2022-08-30 Snap Inc. Surfacing augmented reality objects
US11977553B2 (en) 2019-12-30 2024-05-07 Snap Inc. Surfacing augmented reality objects
US11128715B1 (en) 2019-12-30 2021-09-21 Snap Inc. Physical friend proximity in chat
US11893208B2 (en) 2019-12-31 2024-02-06 Snap Inc. Combined map icon with action indicator
US11343323B2 (en) 2019-12-31 2022-05-24 Snap Inc. Augmented reality objects registry
US11943303B2 (en) 2019-12-31 2024-03-26 Snap Inc. Augmented reality objects registry
US11228551B1 (en) 2020-02-12 2022-01-18 Snap Inc. Multiple gateway message exchange
US11888803B2 (en) 2020-02-12 2024-01-30 Snap Inc. Multiple gateway message exchange
US11516167B2 (en) 2020-03-05 2022-11-29 Snap Inc. Storing data based on device location
US11765117B2 (en) 2020-03-05 2023-09-19 Snap Inc. Storing data based on device location
US11619501B2 (en) 2020-03-11 2023-04-04 Snap Inc. Avatar based on trip
US11915400B2 (en) 2020-03-27 2024-02-27 Snap Inc. Location mapping for large scale augmented-reality
US11430091B2 (en) 2020-03-27 2022-08-30 Snap Inc. Location mapping for large scale augmented-reality
US11776256B2 (en) 2020-03-27 2023-10-03 Snap Inc. Shared augmented reality system
US11483267B2 (en) 2020-06-15 2022-10-25 Snap Inc. Location sharing using different rate-limited links
US11314776B2 (en) 2020-06-15 2022-04-26 Snap Inc. Location sharing using friend list versions
US11503432B2 (en) 2020-06-15 2022-11-15 Snap Inc. Scalable real-time location sharing framework
US11290851B2 (en) 2020-06-15 2022-03-29 Snap Inc. Location sharing using offline and online objects
US11676378B2 (en) 2020-06-29 2023-06-13 Snap Inc. Providing travel-based augmented reality content with a captured image
US12062235B2 (en) 2020-06-29 2024-08-13 Snap Inc. Providing travel-based augmented reality content with a captured image
US11943192B2 (en) 2020-08-31 2024-03-26 Snap Inc. Co-location connection service
US11606756B2 (en) 2021-03-29 2023-03-14 Snap Inc. Scheduling requests for location data
US11601888B2 (en) 2021-03-29 2023-03-07 Snap Inc. Determining location using multi-source geolocation data
US11902902B2 (en) 2021-03-29 2024-02-13 Snap Inc. Scheduling requests for location data
US20220319124A1 (en) * 2021-03-31 2022-10-06 Snap Inc. Auto-filling virtual content
US11645324B2 (en) 2021-03-31 2023-05-09 Snap Inc. Location-based timeline media content system
US11234548B1 (en) * 2021-04-05 2022-02-01 Ray Franklin Interchangeable seasonal holiday decorative method and devices
US12026362B2 (en) 2021-05-19 2024-07-02 Snap Inc. Video editing application for mobile devices
US12166839B2 (en) 2021-10-29 2024-12-10 Snap Inc. Accessing web-based fragments for display
US11829834B2 (en) 2021-10-29 2023-11-28 Snap Inc. Extended QR code
US12001750B2 (en) 2022-04-20 2024-06-04 Snap Inc. Location-based shared augmented reality experience system
US12164109B2 (en) 2022-04-29 2024-12-10 Snap Inc. AR/VR enabled contact lens
US12020384B2 (en) 2022-06-21 2024-06-25 Snap Inc. Integrating augmented reality experiences with other components
US12020386B2 (en) 2022-06-23 2024-06-25 Snap Inc. Applying pregenerated virtual experiences in new location
US12223156B2 (en) 2022-12-09 2025-02-11 Snap Inc. Low-latency delivery mechanism for map-based GUI
US12225095B2 (en) 2023-03-27 2025-02-11 Snap Inc. Messaging achievement pictograph display system

Also Published As

Publication number Publication date
US10136260B2 (en) 2018-11-20
US20160255477A1 (en) 2016-09-01

Similar Documents

Publication Publication Date Title
US10136260B2 (en) 2018-11-20 Selectively providing mobile experiences at multiple locations
US8405740B2 (en) 2013-03-26 Guidance for image capture at different locations
US10142795B2 (en) 2018-11-27 Providing digital content for multiple venues
US8675112B2 (en) 2014-03-18 Imaging device providing capture location guidance
US10929463B1 (en) 2021-02-23 Arranging location based content for mobile devices
US11323605B2 (en) 2022-05-03 Method and apparatus for managing a camera network
US20130191211A1 (en) 2013-07-25 Customizing printed products based on travel paths
US20120327257A1 (en) 2012-12-27 Photo product using images from different locations
US9247306B2 (en) 2016-01-26 Forming a multimedia product using video chat
US8156206B2 (en) 2012-04-10 Contextual data communication platform
US20120311623A1 (en) 2012-12-06 Methods and systems for obtaining still images corresponding to video
US20180089898A1 (en) 2018-03-29 Augmented reality and virtual reality location-based attraction simulation playback and creation system and processes for simulating past attractions and preserving present attractions as location-based augmented reality and virtual reality attractions
US10909474B2 (en) 2021-02-02 Triggering an automatic creation of an event stamp
WO2012033506A1 (en) 2012-03-15 Video communication system and method for using same
KR102166540B1 (en) 2020-11-04 Platform service system for providing MR graphics support contents and Drive method of the same
US11455334B2 (en) 2022-09-27 Method and system for collecting, and globally communicating and evaluating, digital still and video images of sports and event spectators, including augmented reality images from entertainment and venues
US10372752B1 (en) 2019-08-06 Method and system for collecting, and globally communicating and evaluating, digital still and video images of sports and event spectators, including augmented reality images from entertainment and gathering venues
Mishra et al. 2014 A beginner's guide to mobile marketing
US20160021042A1 (en) 2016-01-21 Video messaging
US20200364621A1 (en) 2020-11-19 Grid card (or geo tag)
AU2012301503B2 (en) 2016-12-08 Sentient environment
US20110076993A1 (en) 2011-03-31 Video communication system and method for using same
JP2018163650A (en) 2018-10-18 Apparatus for using copyrighted work, method for using copyrighted work, and data structure therefor
O'Farrell et al. 2008 Mobile internet for dummies
Kohanchuk 2018 Mobile sales technologies of tour operator Coral Travel based on LBS-systems

Legal Events

Date Code Title Description
2014-03-20 AS Assignment

Owner name: ADESIGNEDPATH FOR USABILITYSOLUTIONS, LLC, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:O'KEEFE, BRIAN JOSEPH;PARULSKI, KENNETH ALAN;MOORE, LESLIE GERALD, JR.;REEL/FRAME:032481/0154

Effective date: 20140319

2016-08-17 STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

2019-03-23 AS Assignment

Owner name: TOURBLEND INNOVATIONS, LLC, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ADESIGNEDPATH FOR USABILITYSOLUTIONS, LLC;REEL/FRAME:050454/0117

Effective date: 20160206