Monday, February 29, 2016

Progress Report Due: 2/28/16

Progress:
Focus/Problem:
This week Mr. Lin introduced us to an online pamphlet that contains an algorithm for circle detection. After talking with Mr. Lin, we decided that since we have had difficulty with text recognition, we will start with circle detection. Doing so will mask everything the camera sees except the circles it detects. With the app able to detect both color and circles, the region in which the camera has to search for text becomes much smaller, which should make the text easier to detect.
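The pamphlet's exact algorithm is not reproduced here, but OpenCV ships a standard circle detector (the Hough Circle Transform) that fits this kind of masking. Below is a rough sketch of how it might be wired up, assuming OpenCV 3's C++ constant names (in 2.4 the flag is CV_HOUGH_GRADIENT); the function name and all of the parameter values are placeholders that would need tuning against real subway signs.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Sketch: detect circles in a camera frame and black out everything else.
// `frame` is assumed to be a BGR image coming from the video capture.
cv::Mat maskToCircles(const cv::Mat& frame)
{
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2, 2);   // smooth to cut down on false circles

    std::vector<cv::Vec3f> circles;                        // each entry: (center x, center y, radius)
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                     1,                // accumulator resolution
                     gray.rows / 8,    // minimum distance between circle centers
                     100, 30,          // Canny / accumulator thresholds (placeholder values)
                     10, 150);         // min / max radius in pixels (placeholder values)

    // Build a mask that keeps only the detected circles.
    cv::Mat mask = cv::Mat::zeros(frame.size(), CV_8UC1);
    for (size_t i = 0; i < circles.size(); ++i)
        cv::circle(mask, cv::Point(cvRound(circles[i][0]), cvRound(circles[i][1])),
                   cvRound(circles[i][2]), cv::Scalar(255), -1);   // thickness -1 = filled

    cv::Mat result = cv::Mat::zeros(frame.size(), frame.type());
    frame.copyTo(result, mask);        // everything outside the circles stays black
    return result;
}
```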
Target Items for the Upcoming Week:
Complete circle detection so that we can return our focus to text recognition. Circle detection will be beneficial to us if it succeeds as we intend, but once we implement text recognition we can start working with VoiceOver (to relay the detected signs back to the app user).

Equipment Required: 
-Mac Laptops
-Mr. Lin's iPad & Cord
-Large numbers/circles to verify the app recognizes what we want it to (a variety of circles and numbers, some of them not red, so we can make sure it is the text recognition detecting the numbers and not just the red color recognition).

Progress Report Due: 2/14/16

Progress:
Point of focus: While we have masked the camera's view to show only the color red, we are trying to get the camera to recognize/track numbers and characters in order to distinguish between subway sign circles that share the same color yet stand for separate trains (e.g., the 4, 5, and 6 lines are all green yet the trains are different).
Problem:
Difficulties Encountered/Plans for the Coming Week
While searching for optical character recognition software that we can call from our own code, we found that many of the libraries are built around engines such as Tesseract and are not directly compatible with Xcode. We will have to dig into the algorithms and alter them ourselves so that they suit our Xcode platform.
-Decide on the approach we want to use to recognize text (altering an existing algorithm, finding contours, etc.).
Equipment Required: 
-Mac Laptops
-Mr. Lin's iPad & Cord
-Numbers/text (some red, some not, so we can verify it is the text recognition identifying the numbers and not solely the red color detection).

Monday, February 8, 2016

Progress Report Due: 2/7/16

Progress:
Tasks Accomplished:
Now that our app's camera can mask what it sees so that only particular colors show, we are focused on finding features that distinguish similarly colored items from one another: numbers. This past week Leslie successfully outlined objects in our app's camera feed, which is the first step in many number recognition algorithms. Over the weekend, Kate began a tutorial (http://docs.opencv.org/2.4.2/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html) for finding contours in OpenCV. Finding contours is the first step in an algorithm she has chosen to follow for detecting numbers (http://stackoverflow.com/questions/10776398/extracting-numbers-from-an-image-using-opencv is one of the places such an algorithm is described). 
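For our own reference, the heart of that tutorial boils down to a couple of OpenCV calls. A rough sketch (not our app's actual code) is below, assuming we already have a grayscale image; the threshold value and the function name are placeholders.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Sketch of the "finding contours" step: turn the image into black and white,
// then extract the outline of every blob in it.
std::vector<std::vector<cv::Point> > findOutlines(const cv::Mat& gray)
{
    cv::Mat binary;
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);   // 128 is a placeholder cutoff

    std::vector<std::vector<cv::Point> > contours;   // each contour = list of boundary points
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(binary, contours, hierarchy,
                     cv::RETR_EXTERNAL,              // outer outlines only
                     cv::CHAIN_APPROX_SIMPLE);       // compress straight runs of points
    return contours;
}
```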
Problem:
Difficulties Encountered/Plans for the Coming Week
As stated above, this past week Leslie successfully outlined objects in our app's camera feed, which is the first step in many number recognition algorithms. However, this was done by removing the mask that hides everything the camera sees aside from the color red. In the coming week it is important that we get both pieces of code running together in our app (a rough sketch of the combined pipeline follows the list below).
-Complete number recognition before break.
-Do more research on how our app will internally process the numbers it detects and how it will use that information.
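A rough sketch of what running both pieces together could look like: the red mask that inRange produces is already a binary image, so it can feed straight into findContours. The function name and HSV bounds below are placeholders, not values from our code.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Sketch: run contour detection on the red mask instead of the raw frame,
// so the color masking and the outlining work together.
std::vector<std::vector<cv::Point> > outlinesInsideRed(const cv::Mat& frame)
{
    cv::Mat hsv, redMask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), redMask);  // placeholder red range

    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(redMask, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}
```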
Equipment Required: 
-Mac Laptops
-Mr. Lin's iPad & Cord
-Red Objects (Solo Cup, Subway Sign, etc.)

Monday, February 1, 2016

Progress Report Due: 1/31/16

Progress:
Tasks Accomplished:
By the end of the STEM Hackathon, our app's camera successfully masked what it saw, singling out only the tracked color red (since red is the color implemented in our code right now).
We tested the camera out on a red solo cup and the "3" Train's subway sign and got the following results:



Problem:
Difficulties Encountered:
-It has taken a few weeks to resolve this issue. Originally, we had tried to convert what the camera saw from BGR to HSV. What we realized at the Hackathon, though, is that we could have kept Mr. Lin's original code, which allowed the camera to work with a binary image.
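We are not reproducing Mr. Lin's code here, but "seeing through a binary image" just means the tracked color gets turned into a black-and-white mask. A rough sketch of that idea, using OpenCV's inRange directly on the BGR frame; the bounds are made-up placeholders, not the values in his code.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Sketch: produce a binary image (white = "red enough", black = everything else)
// straight from the BGR frame, with no HSV conversion.
cv::Mat redBinaryImage(const cv::Mat& frameBGR)
{
    cv::Mat binary;
    // Channel order is B, G, R: blue and green must stay low, red must be high.
    cv::inRange(frameBGR,
                cv::Scalar(0,  0,  150),   // lower bound (placeholder)
                cv::Scalar(80, 80, 255),   // upper bound (placeholder)
                binary);
    return binary;                         // single channel, each pixel either 0 or 255
}
```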

Equipment Required: 
-Mac Laptops
-Mr. Lin's iPad & Cord
-Red Objects (Solo Cup, Subway Sign, etc.)
  • Plan:
Proposals/Steps to Attack the Problems/Action Items for the Coming Week: 
-Get our application to track more colors (green, blue) and start to focus on how we can distinguish between subway symbols that share the same color yet stand for different trains (e.g., the 1, 2, and 3 lines all share the color red); this could mean focusing on text recognition.
Experiments to Conduct, Ideas to Try, Vendors to Contact, Updated Schedule, etc.
-Hopefully we will have implemented all colors we want our application to be able to identify and have started text recognition in the coming week.

Thursday, January 28, 2016

Progress Report Due: 1/24/16

Progress:

*No progress report was posted the week of 1/17 since the week was a presentation week for Progress Reports.
Tasks Accomplished:
- Last week we began implementing code in our app that converts what the camera sees from BGR to HSV. The code is supposed to mask what the camera sees, singling out one specific tracked color and blacking out all surrounding objects. We plan to work with Mr. Lin at the upcoming STEM Seminar during Regents Week to complete this task. 
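As a reference going into the seminar, here is a rough sketch of the kind of masking we are aiming for, assuming an OpenCV pipeline; the function name and the HSV bounds are placeholders, not our app's actual code.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Sketch: keep only the tracked color and black out everything around it.
// `frame` is a BGR camera image; `lower`/`upper` are HSV bounds for the
// color being tracked (to be tuned per color).
cv::Mat maskToColor(const cv::Mat& frame,
                    const cv::Scalar& lower, const cv::Scalar& upper)
{
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);   // BGR -> HSV
    cv::inRange(hsv, lower, upper, mask);          // white wherever the color matches

    cv::Mat result = cv::Mat::zeros(frame.size(), frame.type());
    frame.copyTo(result, mask);                    // copy only the masked pixels; the rest stays black
    return result;
}

// Example call with an illustrative red-ish range:
// cv::Mat out = maskToColor(frame, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255));
```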
Problem:
Difficulties Encountered:
-Our Xcode project has been able to build, but it is still not accomplishing what we want it to do (stated above).
Equipment Required: 
-Mac Laptops
-Mr. Lin's iPad & Cord
  • Plan:
Proposals/Steps to Attack the Problems/Action Items for the Coming Week: 
-Get our application to mask what it sees, singling out the color of our choice (whichever color(s) we place in the code).
- Alter the app so that it will detect the color of our choice so we can begin testing it out on subway signs.
Experiments to Conduct, Ideas to Try, Vendors to Contact, Updated Schedule, etc.
-Have the code successfully mask what the camera sees by the conclusion of our STEM Seminar. Hopefully we will have implemented all colors we want our application to be able to identify by the end of tomorrow's seminar as well.

Monday, January 18, 2016

Progress Meeting II Materials (1/12/16)

Slides:

https://docs.google.com/presentation/d/1HoOAw7tc3PbIurVUCWAieI4PzeHyTBee_94TQlBtrPo/edit#slide=id.p

Gantt Chart:

https://drive.google.com/drive/my-drive

Sunday, January 10, 2016

Progress Report Due: 1/10/16

Progress:
Tasks Accomplished:
-We have successfully implemented code in our app that converts what the camera sees from BGR to HSV, aside from a few minor errors that can be solved within the coming week. Essentially, we changed the default "BGR to Grayscale" conversion in Mr. Lin's code so that the camera takes footage and converts it from BGR to HSV instead.
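In OpenCV terms the change comes down to swapping one conversion flag; a rough sketch is below (Mr. Lin's actual code is not reproduced here, and the function name is made up for illustration).

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Sketch of the one-line change: the camera frame used to be converted to
// grayscale; now it is converted to HSV so individual colors can be tracked.
cv::Mat prepareFrame(const cv::Mat& frameBGR)
{
    cv::Mat converted;
    // Old default: cv::cvtColor(frameBGR, converted, cv::COLOR_BGR2GRAY);
    cv::cvtColor(frameBGR, converted, cv::COLOR_BGR2HSV);
    return converted;
}
```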
Questions Answered:
-How we will have our app single out certain objects: converting from BGR to HSV allows the app's camera to track objects by extracting whichever colors we implement in our code.
Lessons Learned:
-We won't be able to find ready-made code for each capability we want our app to have; rather, we can take bits and pieces, learn from them, and create what we need to accomplish.
Problem:
Difficulties Encountered:
-Part of an online code sample that seemed to suit our app well (http://docs.opencv.org/master/df/d9d/tutorial_py_colorspaces.html#gsc.tab=0) turned out not to be compatible with our project. This caused us to have to go in and alter the code to meet our needs (which we had expected, and expect to do again in the future).
Equipment Required: 
-Mac Laptops
-Mr. Lin's iPad & Cord
Show Stopper/Open Issues/New Risks: 
- The code we have been working from is configured to identify red objects. We have to look more into ranges of color features (such as hue, saturation, etc.), which is where HSV comes in.
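One detail worth noting as we look into those ranges: in OpenCV's HSV representation, red sits at both ends of the hue scale (roughly 0-10 and 170-180), so a red detector usually needs two ranges combined. A rough sketch with placeholder saturation/value bounds:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Sketch: red wraps around the hue axis, so threshold two hue ranges
// and OR the resulting masks together.
cv::Mat redMask(const cv::Mat& frameBGR)
{
    cv::Mat hsv, low, high, mask;
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0,   100, 100), cv::Scalar(10,  255, 255), low);   // reds near hue 0
    cv::inRange(hsv, cv::Scalar(170, 100, 100), cv::Scalar(180, 255, 255), high);  // reds near hue 180
    cv::bitwise_or(low, high, mask);
    return mask;   // white where the pixel looks red, black everywhere else
}
```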
  • Plan:
Proposals/Steps to Attack the Problems/Action Items for the Coming Week: 
- Fix any minor malfunctions that occurred while switching the app from "BGR to Grayscale" to "BGR to HSV".
- Alter the app so that it will detect the color of our choice so we can begin testing it out on subway signs.
Experiments to Conduct, Ideas to Try, Vendors to Contact, Updated Schedule, etc.
-Continue to make sure our app works off of Mr. Lin's iPad as we add new code.
-Detect different colored objects as we alter our code to make sure the camera is tracking the correct objects.