Patent Search
Patents:
- Systems and methods for image recognition using mobile devices.
Types, Trends, Scope: This patent resembles many of the patents that follow, since it relies on a premade reference library: certain chunks of a captured image are searched for in this library and matched to the entry that brings up the relevant information. We could do something similar in our application. Instead of detecting subway signs via color, we could detect distinctive chunks of the signs that vary from one sign to another (a template-matching sketch follows this entry).
Publication #: US20100260426A1
Patent Citation: US8520979B2
Int’l Classification: G06K9/20
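As a rough sketch of this chunk-matching idea, OpenCV's template matching can search a captured frame for a stored sign chunk. Python is used here only for illustration (our app itself is being built in Xcode); the file names and the 0.8 confidence threshold are assumptions, not values from the patent.

```python
import cv2

# Reference "chunk" cropped from a subway sign (e.g. the circled 4),
# and a frame captured by the camera. File names are placeholders.
template = cv2.imread("four_train_chunk.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("captured_frame.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the frame and score every position.
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

# 0.8 is an assumed confidence threshold; it would need tuning.
if best_score > 0.8:
    print(f"Sign chunk found at {best_loc} (score {best_score:.2f})")
else:
    print("No matching chunk in this frame")
```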
- Methods and arrangements for identifying objects.
Types, Trends, Scope: This patent processes images by detecting watermarks that stand in for a particular object. For instance, on a checkout box that may hold a lamp, rather than the app’s camera trying to recognize the lamp as an object, it recognizes a watermark on the box that stands in for the lamp. We could use such a technique in our app by marking the subway signs with watermarks that are easier for our app’s camera to identify.
Publication #: US 20140052555 A1
Patent Citation: 2014/0002643
Int’l Classification: G06K 15/00 (20060101)
- Augmented reality panorama supporting visually impaired individuals.
Types, Trends, Scope: This application operates by identifying objects through a content library (similar to how our project uses OpenCV). Essentially, the application identifies objects in the area where the device’s camera is pointed, through object recognition backed by an image library. The main difference between this invention and what our group hopes to create is that ours is specific to subway navigation, whereas this application is general-purpose without a particular specialization (a feature-matching sketch follows this entry).
Publication #: US8605141B2
Patent Citation: WO2011053732A2
Int’l Classification: A61F9/08
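One plausible way to implement library-backed recognition like this is OpenCV feature matching; below is a minimal ORB sketch. The library contents, the descriptor-distance cutoff, and the match-count threshold are all assumptions.

```python
import cv2

# One library image per sign; a real library would have many entries.
library = {"4 train": cv2.imread("lib_4_train.png", cv2.IMREAD_GRAYSCALE)}
frame = cv2.imread("captured_frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

_, frame_desc = orb.detectAndCompute(frame, None)

best_name, best_count = None, 0
for name, ref in library.items():
    _, ref_desc = orb.detectAndCompute(ref, None)
    if ref_desc is None or frame_desc is None:
        continue
    matches = matcher.match(ref_desc, frame_desc)
    # Keep only strong matches; 40 is an assumed distance cutoff.
    good = [m for m in matches if m.distance < 40]
    if len(good) > best_count:
        best_name, best_count = name, len(good)

# 15 good matches is an assumed decision threshold.
print(best_name if best_count > 15 else "no confident match")
```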
- Smartphone-based methods and systems.
Types, Trends, Scope: Though this app is not aimed specifically at the visually impaired, it revolves around object detection and processing. For instance, the app lets users scan a barcode with their device’s camera. Once the app determines, from a premade library, what the barcode stands for, users are brought to a new screen where they can make use of that information, taking actions such as finding the object’s price, finding where else the item is sold, and so forth. We could use a similar concept in our app. Perhaps our app could coordinate with the MTA’s website so that times and train delays pertaining to the detected train are read aloud once the camera recognizes its symbol on the sign (a code-scanning sketch follows this entry).
Publication #: US20120116559A1
Patent Citation: US20090204640A1
Int’l Classification: G06K9/00671
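The scan-then-look-up flow can be illustrated with OpenCV's built-in QR-code detector (the patent covers barcodes generally; QR is used here only because the detector ships with OpenCV). The payload-to-text table is invented for the example.

```python
import cv2

frame = cv2.imread("captured_frame.png")  # placeholder file name

detector = cv2.QRCodeDetector()
payload, corners, _ = detector.detectAndDecode(frame)

# A tiny stand-in for the "pre-made library" the patent describes.
library = {"MTA-4-UPTOWN": "4 train, uptown platform: downstairs"}

if payload:
    print(library.get(payload, f"Unknown code: {payload}"))
else:
    print("No code found in frame")
```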
- Method for traffic sign detection.
Types, Trends, Scope: This patent identifies traffic signs from video (which consists of multiple image frames). Once the video has been recorded, the app goes back through the recorded frames and merges what has been detected, taking parts from each frame to piece together the object. This way, the user spends less time trying to aim the camera at the object of interest; the device effectively focuses for them. This relates to our app, since a problem we’ve encountered is how the user will get the camera to focus on their object of interest. The user cannot just wave the device around hoping the camera detects what they want (how will it know to detect the subway sign rather than an arrow or a wall tile?). We could therefore solve this problem the way this patent does (a multi-frame sketch follows this entry).
Publication #: US7466841 B2
Patent Citation: US20080069400
Int’l Classification: G06K9/00
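A minimal sketch of combining evidence across frames in the spirit of this patent: a per-frame color mask is accumulated, and only pixels detected in most frames are kept. The HSV range (roughly the green 4/5/6 bullet) and the persistence fraction are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("subway_clip.mp4")  # placeholder clip
acc, n = None, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for a green route bullet; needs tuning.
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))
    acc = mask.astype(np.float32) if acc is None else acc + mask
    n += 1
cap.release()

if n:
    # Keep pixels detected in more than half the frames (assumed fraction).
    stable = (acc / (n * 255)) > 0.5
    print(f"{stable.sum()} pixels persist across {n} frames")
```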
- Video recognition system.
Types, Trends, Scope: Though this patent gives little detail on how objects are recorded, the processing itself is described in depth. The application stores a fixed set of images in a library; once an image is captured, the reference image that best resembles it brings up the information the user originally wanted about the object. This is similar to our app, since we are using OpenCV as our image-processing library (a histogram-matching sketch follows this entry).
Publication #: US4611347 A
Patent Citation: US4288782
Int’l Classification: G06T7/00
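One simple way to pick the best-resembling reference image, as this patent describes, is histogram comparison; a hedged sketch follows, with placeholder file names.

```python
import cv2

def hist(img):
    # Normalized hue-saturation histogram, so images are comparable.
    h = cv2.calcHist([cv2.cvtColor(img, cv2.COLOR_BGR2HSV)],
                     [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(h, h).flatten()

library = {"4 train": cv2.imread("lib_4_train.png"),
           "N train": cv2.imread("lib_n_train.png")}
frame = cv2.imread("captured_frame.png")

target = hist(frame)
best = max(library, key=lambda k: cv2.compareHist(
    hist(library[k]), target, cv2.HISTCMP_CORREL))
print("Best library match:", best)
```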
- Use of association of an object detected in an image to obtain information to display to a user.
Types, Trends, Scope: This application captures an image and separates it into different sections. A section that also appears in the app’s library is the section of the captured image that gets recorded. We could use this in our app, since so far we hope to identify objects based on the color of the symbols on the sign. However, we’ve come across the question of what happens if many objects within range of the camera appear the same color as the particular sign we want to detect (the connected-components sketch after this entry is one way to filter such false matches by size and shape).
Publication #: US20130044912A1
Patent Citation: US20130285894A1
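To separate several same-colored regions and keep only plausible sign candidates, a color mask can be split into connected components and filtered by size and shape. The HSV range and the size/aspect limits below are assumptions.

```python
import cv2

frame = cv2.imread("captured_frame.png")  # placeholder
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Assumed HSV range for the sign color we care about.
mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))

# Label each connected blob of the mask separately.
count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

for i in range(1, count):  # label 0 is the background
    x, y, w, h, area = stats[i]
    # Subway bullets are roughly square and not tiny; assumed limits.
    if area > 500 and 0.7 < w / h < 1.3:
        print(f"Candidate sign region at ({x},{y}), {w}x{h}px")
```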
- System and method for video recognition based on visual image matching.
Types, Trends, Scope: In this app, one or more images are taken and sent to a server, where the object in the picture is identified and analyzed, and the result is sent back to the user. In our app we hope to detect objects (though not by recording images or video, since we won’t need any actual recording, just live detection with the camera) and relay information back through VoiceOver. This app appears to do something similar, but through text rather than VoiceOver. We could possibly add a similar server to our project (one that is fast and works well for visually impaired users); a sketch of the round trip follows this entry.
Publication #: US8805123 B2
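A hedged sketch of the image-to-server round trip. The endpoint URL and the JSON response shape are entirely hypothetical, since our project has no such server yet.

```python
import requests

# Hypothetical endpoint; invented for this example.
URL = "https://example.com/recognize"

with open("captured_frame.png", "rb") as f:  # placeholder file
    resp = requests.post(URL, files={"image": f}, timeout=5)

resp.raise_for_status()
result = resp.json()  # assumed response shape: {"label": "..."}
print("Server says:", result.get("label"))
```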
- Real-time 3D computer vision processing engine for object recognition, reconstruction, and analysis.
Types, Trends, Scope: This invention relates to the computer vision aspect of our application. It takes fragmented images and assembles a three-dimensional prototype of the object using reference models stored on the device. It can do this by extracting features of the captured images (such as measurements, color, etc.). We could possibly detect objects in our app through measurements rather than color, if color ends up failing (a contour-measurement sketch follows this entry).
Publication #: WO2015006224 A1
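Detecting by measurements could mean measuring contour shape: subway route bullets are circles, so circularity is one measurable cue. The edge and circularity thresholds below are assumptions.

```python
import cv2
import numpy as np

frame = cv2.imread("captured_frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(frame, 50, 150)  # assumed edge thresholds

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    perim = cv2.arcLength(c, True)
    if area < 500 or perim == 0:
        continue
    # A perfect circle has circularity 1; 0.8 is an assumed cutoff.
    circularity = 4 * np.pi * area / (perim * perim)
    if circularity > 0.8:
        print(f"Circle-like region, area {area:.0f}px")
```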
- Detecting and processing corrupted video recordings.
Types, Trends, Scope: The invention can be used to determine whether recorded video content is corrupted. Though the means by which the device does so is unclear, such a check could be useful in our app. How will our app detect an object if the user is being bumped around in the subway crowd and can’t hold the camera still? Will they have to stand there while the camera fails to detect the symbol on the sign? Possibly the app could relay such an error through VoiceOver and tell the user to try again (a simple blur check is sketched after this entry).
Publication #: US8917646 B2
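A common unusable-frame check is the variance of the Laplacian, which drops when a frame is blurred by motion. Below is a sketch with an assumed threshold, with the VoiceOver prompt reduced to a print statement.

```python
import cv2

def is_too_blurry(frame_gray, threshold=100.0):
    # Variance of the Laplacian drops when edges are smeared by motion.
    return cv2.Laplacian(frame_gray, cv2.CV_64F).var() < threshold

frame = cv2.imread("captured_frame.png", cv2.IMREAD_GRAYSCALE)
if is_too_blurry(frame):
    # In the real app this message would be spoken via VoiceOver.
    print("Image too blurry; please hold the camera still and try again.")
```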
- Device for detecting/processing data for video/audio signals.
Types, Trends, Scope:
Publication #: WO2005071559 A3
Patent Citation: Not listed.
- RFID-based object tracking for the visually impaired.
Types, Trends, Scope: This invention relates to Team 1’s idea of using markers to navigate their drones around a room. It tracks objects via UHF RFID tags placed on objects throughout a household room, detecting fixed, semi-static, and dynamic objects. A similar tactic could be used in our project so that the app’s camera knows which items to detect.
Publication #: WO2009063114 A1
- Sound signal system for a blind person using GPS and a smartphone application.
Types, Trends, Scope: Similar to the button pedestrians can push on a light pole at a crosswalk, this invention lets users take out their phone and open a GPS app that coordinates with a signal inside the light pole. The app signals the pole that a pedestrian needs to cross, and the pole then performs the same operations as if a person who was not visually impaired had pressed the button.
Publication #: WO2013172516 A1
- Systems and methods for a voice- and gesture-controlled mobile application development and deployment platform.
Types, Trends, Scope: This patent covers software that allows users to create apps through “voice or gesture interactions.” It can be compared to a development tool like Xcode, the software we are working in. Though we plan to continue using Xcode, we should consider transferring our work to this platform if we have trouble configuring VoiceOver in Xcode.
Publication #: US8898630 B2
- Platform for recognizing text using mobile devices with a built-in device video camera and automatically retrieving associated content based on the recognized text.
Types, Trends, Scope: Although this patent specializes in text recognition, it does so through a video camera implemented in the app, without actually recording a video (which is what we hope our app will do). As the app sees text, it recognizes words, and items such as discounts and reviews pop up for the user to scroll through. Instead of recognizing symbols via color, as we originally intended for our app, it may be better to have it recognize text instead (e.g. “the 4 train”, “uptown/downtown”); an OCR sketch follows this entry.
Publication #: US20140111542A1
Patent Citation: US20120092329A1
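Text recognition could be prototyped with pytesseract, a Python wrapper around the Tesseract OCR engine (Tesseract itself must be installed separately). The keyword list below is an assumption about what would be worth announcing.

```python
import cv2
import pytesseract

frame = cv2.imread("captured_frame.png")  # placeholder
text = pytesseract.image_to_string(frame).lower()

# Assumed keywords worth announcing via VoiceOver.
for keyword in ("uptown", "downtown", "4 train"):
    if keyword in text:
        print("Recognized on sign:", keyword)
```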
- Mobile web map service system and terminal providing mobile web map service.
Types, Trends, Scope: Though this app doesn’t necessarily meet the needs of the visually impaired, it deals with navigation on a mobile terminal connected to a map server, which relays the user’s location back to the user. Our app could attempt a similar navigation system, although we would need to add VoiceOver so that the location could be read aloud, meeting visually impaired users’ needs.
Publication #: US20120159357 A1
- Real time object scanning using a mobile phone and cloud-based visual search engine.
Types, Trends, Scope: The app takes a picture, which is then cut down to a key frame. The key frame is sent to a cloud server, where its meaning is searched for in an image search engine (similar to a Google search, except the query is an image instead of text). This relates to our app, since we may need to tag one key element of the subway signs (a color, for instance) that can be searched and matched to a train’s name. This is probably beyond what we can build by June, though (a key-frame selection sketch follows this entry).
Publication #: WO2014126735 A1
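A sketch of just the key-frame step: pick the sharpest frame of a short capture (reusing the Laplacian sharpness measure from the blur check above) before handing it to the cloud search. The clip name is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("subway_clip.mp4")  # placeholder clip
best_frame, best_sharpness = None, -1.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if sharpness > best_sharpness:
        best_frame, best_sharpness = frame, sharpness
cap.release()

if best_frame is not None:
    # best_frame would then be uploaded to the cloud search engine,
    # e.g. with the requests.post sketch shown earlier.
    cv2.imwrite("key_frame.png", best_frame)
```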
- Color identification device.
Types, Trends, Scope: This app acts, in a sense, as the building block of ours. By examining pixels, the app recognizes the colors of objects and reads the detected colors back to the user via sound. We want to narrow this concept down to using color to differentiate one subway sign (or a symbol on a sign) from another (a hue-classification sketch follows this entry).
Publication #: WO2005085781 A1
Patent Citation: JP2002022537A
Int’l Classification: G01J3/46
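A sketch of naming a sign region's dominant color from its average hue. The hue bands mapped to MTA line colors are rough guesses that would need calibration against real signage.

```python
import cv2
import numpy as np

region = cv2.imread("sign_region.png")  # placeholder crop of a bullet
hsv = cv2.cvtColor(region, cv2.COLOR_BGR2HSV)
hue = float(np.mean(hsv[:, :, 0]))  # OpenCV hue range is 0-180

# Rough hue bands; real MTA colors would need calibration.
if hue < 15 or hue > 165:
    name = "red (1/2/3)"
elif 40 <= hue <= 80:
    name = "green (4/5/6)"
elif 100 <= hue <= 130:
    name = "blue (A/C/E)"
else:
    name = "unrecognized color"

# In the app this string would be read aloud via VoiceOver.
print("Detected:", name)
```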
- User interface for software applications.
Types, Trends, Scope: This app allows users to operate apps that are not specially made for the visually impaired via tactile feedback, with certain vibrations standing for certain operations. We do not plan to use tactile feedback. However, if we do decide to add additional user interface screens to our app (possibly a screen providing options for the detected train sign: delays, construction, etc.), tactile feedback alongside VoiceOver would be the way to do so.
Publication #: US8201090 B2
Patent Citation: US20060256090
Int’l Classification: G06F3/00
- Aid for visually impaired people.
Types, Trends, Scope: This app’s foundation is the same one we hope to build on, though it goes about it in a manner a bit different from ours. The app contains a library of navigational features that can be identified in pre-captured images. When the user takes a picture with the camera implemented in the app, the photo’s features are matched against the library’s, and the location and a set of directions are brought up for the user. We want to do something similar, although we are not sure whether directions will be read aloud to the user. Thus far, we plan to have our app just identify the subway sign and read back which train is where (i.e. the 4 train is downstairs). Additional navigation of the type this app has will probably be necessary, though.
Publication #: WO2014122420 A1
Patent Citation: US20120327203
Int’l Classification: G06K9/00