Monday, February 1, 2016

Progress Report Due: 1/31/16

Progress:
Tasks Accomplished:
By the end of the STEM Hackathon, our app's camera successfully masked what it saw, singling out only the tracked color, red (since red is the only color implemented in our code right now).
We tested the camera on a red Solo cup and the "3" train's subway sign and got the following results:
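The masking step can be sketched in plain Python (a toy stand-in for our actual OpenCV pipeline, not the app's code; the hue/saturation/value cutoffs are assumed values we would tune against real footage):

```python
import colorsys

def mask_red(pixels):
    """Return a binary mask: True where a BGR pixel looks red."""
    mask = []
    for b, g, r in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        # Red sits at both ends of the hue circle; also require decent
        # saturation and brightness so dark/washed-out pixels are dropped.
        is_red = (h < 0.03 or h > 0.97) and s > 0.5 and v > 0.3
        mask.append(is_red)
    return mask

# A red Solo-cup pixel, a white wall pixel, a dark shadow pixel (BGR order).
print(mask_red([(40, 30, 220), (250, 250, 250), (20, 20, 30)]))
```

The real pipeline does the same thing per pixel over the whole frame, then blacks out everything where the mask is False.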



Problem:
Difficulties Encountered:
-It took a few weeks to resolve this issue. Originally, we had tried to convert what the camera saw from BGR to HSV. What we realized at the Hackathon, though, is that we could have kept Mr. Lin's original code, which allowed the camera to see through a binary image.

Equipment Required:
-Mac Laptops
-Mr. Lin's iPad & Cord
-Red Objects (Solo Cup, Subway Sign, etc.)
Plan:
Proposals/Steps to Attack the Problems/Action Items for the Coming Week: 
-Get our application to track more colors (green, blue) and start to focus on how we can distinguish between subway symbols that share the same color yet stand for different trains (e.g., the 1, 2, and 3 lines all share the same color, red). This could mean focusing on text recognition.
Experiments to Conduct, Ideas to Try, Vendors to Contact, Updated Schedule, etc.
-Hopefully, by the end of the coming week, we will have implemented all the colors we want our application to identify and will have started on text recognition.
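Since color alone cannot separate the 1, 2, and 3 trains, one direction for the text-recognition step is template matching against the digits inside the colored circle. A toy sketch (the 3x5 digit bitmaps and the scoring rule are hand-made illustrations, not real templates or our app's code):

```python
# Tiny hand-made 3x5 bitmaps standing in for real digit templates.
TEMPLATES = {
    "1": ["010", "110", "010", "010", "111"],
    "2": ["111", "001", "111", "100", "111"],
    "3": ["111", "001", "111", "001", "111"],
}

def match_digit(bitmap):
    """Return the template with the most matching pixels."""
    def score(t):
        return sum(a == b for ra, rb in zip(bitmap, t) for a, b in zip(ra, rb))
    return max(TEMPLATES, key=lambda k: score(TEMPLATES[k]))

print(match_digit(["111", "001", "111", "100", "111"]))
```

A real implementation would first binarize the masked sign region and rescale it to the template size, but the idea is the same: the best-scoring template names the train.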

Thursday, January 28, 2016

Progress Report Due: 1/24/16

Progress:

*No progress report was posted the week of 1/17 since the week was a presentation week for Progress Reports.
Tasks Accomplished:
- Last week we began implementing code in our app that converts what the camera sees from BGR to HSV. The code is supposed to mask what the camera sees, singling out one specific tracked color and blacking out everything around it. We plan to work with Mr. Lin in the upcoming STEM Seminar during Regents Week to complete this task.
Problem:
Difficulties Encountered:
-Our Xcode project builds successfully, but it still isn't accomplishing what we want it to do (stated above).
Equipment Required:
-Mac Laptops
-Mr. Lin's iPad & Cord
Plan:
Proposals/Steps to Attack the Problems/Action Items for the Coming Week: 
-Get our application to mask what it sees, singling out the color of our choice (whichever color(s) we place in the code).
- Alter the app so that it will detect the color of our choice so we can begin testing it out on subway signs.
Experiments to Conduct, Ideas to Try, Vendors to Contact, Updated Schedule, etc.
-Have the code successfully mask what the camera sees by the conclusion of our STEM Seminar. Hopefully we will also have implemented all the colors we want our application to identify by the end of tomorrow's seminar.

Monday, January 18, 2016

Progress Meeting II Materials (1/12/16)

Slides:

https://docs.google.com/presentation/d/1HoOAw7tc3PbIurVUCWAieI4PzeHyTBee_94TQlBtrPo/edit#slide=id.p

Gantt Chart:

https://drive.google.com/drive/my-drive

Sunday, January 10, 2016

Progress Report Due: 1/10/16

Progress:
Tasks Accomplished:
-We have successfully implemented code in our app that converts what the camera sees from BGR to HSV, aside from a few minor errors that can be solved within the coming week. Essentially, we changed the default "BGR to Grayscale" conversion in Mr. Lin's code so that the camera takes footage and converts it from "BGR to HSV" instead.
Questions Answered:
-How our app will single out certain objects: converting from BGR to HSV allows the app's camera to track objects by extracting whichever colors we specify in our code.
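The per-pixel conversion our code performs can be sketched like this (a plain-Python illustration of the standard BGR-to-HSV formula, using OpenCV's conventions of hue in 0-180 and saturation/value in 0-255; not our actual app code):

```python
def bgr_to_hsv(b, g, r):
    """Convert one 8-bit BGR pixel to OpenCV-style HSV (H in 0-180)."""
    b, g, r = b / 255, g / 255, r / 255
    mx, mn = max(b, g, r), min(b, g, r)
    diff = mx - mn
    if diff == 0:              # gray pixel: hue is undefined, use 0
        h = 0
    elif mx == r:
        h = (60 * ((g - b) / diff)) % 360
    elif mx == g:
        h = 60 * ((b - r) / diff) + 120
    else:
        h = 60 * ((r - g) / diff) + 240
    s = 0 if mx == 0 else diff / mx
    # Halve hue so it fits in a byte, scale S and V to 0-255.
    return int(h / 2), int(s * 255), int(mx * 255)

print(bgr_to_hsv(0, 0, 255))  # pure red
```

The point of the conversion is that a "red" object keeps roughly the same hue even as lighting changes its brightness, which is what makes color tracking workable.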
Lessons Learned:
-We won't be able to find an exact code for each capability we want our app to have, but rather we can take bits and pieces and learn from them to create what we need to accomplish.
Problem:
Difficulties Encountered:
-Part of an online code sample that otherwise worked well with our app (http://docs.opencv.org/master/df/d9d/tutorial_py_colorspaces.html#gsc.tab=0) was not compatible with it. This meant we had to go in and alter the code to meet our needs (which we had expected, and expect to do again in the future).
Equipment Required:
-Mac Laptops
-Mr. Lin's iPad & Cord
Show Stopper/Open Issues/New Risks: 
- The code we have been working off of is configured to identify red objects. We have to look more into ranges of color features (such as hue and saturation), which is where HSV comes in.
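One wrinkle with red in particular: in OpenCV's 0-180 hue scale, red sits at both ends of the hue circle, so a single range won't catch it and we need the union of two ranges. A minimal sketch (the 10/170 cutoffs are assumed values to tune, not from the tutorial):

```python
def in_range(h, lo, hi):
    return lo <= h <= hi

def is_red_hue(h):
    """Red wraps past 180 back to 0, so test two hue intervals."""
    return in_range(h, 0, 10) or in_range(h, 170, 180)

print([h for h in range(0, 181, 5) if is_red_hue(h)])
```

Colors like green or blue sit in the middle of the scale, so each of those will only need one range.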
Plan:
Proposals/Steps to Attack the Problems/Action Items for the Coming Week: 
- Fix any minor malfunctions that occurred while switching the app from "BGR to Grayscale" to "BGR to HSV".
- Alter the app so that it will detect the color of our choice so we can begin testing it out on subway signs.
Experiments to Conduct, Ideas to Try, Vendors to Contact, Updated Schedule, etc.
-Continue to make sure our app works off of Mr. Lin's iPad as we add new code.
-Detect different colored objects as we alter our code to make sure the camera is tracking the correct objects.

Monday, December 21, 2015

Progress Report Due: 12/20/15

Progress:
Tasks Accomplished:
-Identified our scope: recognizing subway signs through numbers rather than other features such as color.
Questions Answered:
-How we'd recognize the subway signs (through color or through number recognition). We've decided to go with number recognition.
Lessons Learned:
-It is important we narrow down exactly how we will go about completing our project. From there we can find tutorials/source codes/etc. offline that specifically pertain to our project. It is too late in the year to still be experimenting.
Problem:
Difficulties Encountered:
-Finding code that recognizes numbers while also including a camera implementation.
Equipment Required:
-Mac Laptops
-Mr. Lin's iPad
Show Stopper/Open Issues/New Risks: 
-Much of the source code we have found so far is for license plate recognition. This could be beneficial, although it is important we identify where and how the code could be adapted to subway signs rather than license plates. There may be no difference in the code if only the numbers are being detected anyway.
Plan:
Proposals/Steps to Attack the Problems/Action Items for the Coming Week: 
- Find source code that recognizes numbers and successfully implement it in our app, including a camera feature, since the physical recognition of numbers cannot be done without the camera.
Experiments to Conduct, Ideas to Try, Vendors to Contact, Updated Schedule, etc.
-Continue to make sure our app works off of Mr. Lin's iPad as we add new code.
-We are working off of a license plate tutorial's source code right now that seems to be beneficial to our app. During Tuesday's class period we will see if it is compatible with the iPad.

Sunday, December 6, 2015

RE: Patent Search Due: 11/30/15

You have come up with a very nice list of patents relevant to your research. Your comments about each patent are also very valuable. If you can analyze the patents by grouping them into meaningful categories and identify the trends across the patents over time, it will help us get a glance at how technology has evolved in this field, and it may also help you identify possible areas of potential innovation.

Progress Report Due: 12/6/15



Progress:
Tasks Accomplished:
-We have found a tutorial that includes source code for an object detection app. The application detects objects using MSER (Maximally Stable Extremal Regions) in iOS. The app is very simplistic and only recognizes Toptal's (the developer company's) logo. It does so without having to take a picture or record a video, just as we wanted our app to do. (http://www.toptal.com/machine-learning/real-time-object-detection-using-mser-in-ios)
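The core MSER idea can be illustrated on a toy 1-D brightness profile: threshold at every level, measure the size of the dark region, and call a region "stable" where its size barely changes between neighboring levels (a conceptual sketch only, not the tutorial's actual code):

```python
# A dark blob in the middle of a bright row of pixels.
profile = [200, 200, 60, 55, 50, 55, 60, 200, 200]

def region_size(threshold):
    """Size of the dark region when binarizing at this threshold."""
    return sum(1 for v in profile if v < threshold)

sizes = {t: region_size(t) for t in range(50, 210, 10)}
# Stability: how much the region grows between neighboring thresholds.
growth = {t: sizes[t + 10] - sizes[t] for t in range(50, 200, 10)}
stable = min(growth, key=growth.get)
print(sizes)
print("most stable threshold:", stable)
```

Real MSER does this in 2-D over connected components, but the payoff is the same: stable regions (like a logo, or a train number on a sign) are found without needing a fixed threshold, which is why it works frame by frame on live camera input.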
Problems Solved:
-We were worried we'd have trouble creating an object recognition app that didn't require a picture to be taken or video to be recorded. Luckily, we found an application that recognizes objects without having to do so.
Questions Answered:
-How to recognize objects with a camera. How to do so without taking a picture or recording a video.
Lessons Learned:
-It is important we understand what each piece of code and each tutorial we follow actually does. We copy and paste so much code to see how and whether it will benefit our app, without actually knowing what it does. Time can be wasted on code that may not benefit our app at all. It may be better from now on to learn and wrap our heads around code before adding it to our Xcode project.
Problem:
Difficulties Encountered:
-We are in the process of implementing the Toptal source code for object recognition into our app. However, we are still trying to figure out where in the code we can change the app so that the camera recognizes the objects we tell it to (the app can only identify the Toptal logo right now, and we need to change the code so that it can recognize numbers on subway signs).
Missing Information: None.
Equipment Required:
-Mac Laptops, Mr. Lin's iPad.
Materials Missed: 
-We are not lacking any materials as of now.
Show Stopper/Open Issues/New Risks: 
-Not being able to adjust the code to fit our needs (recognizing train numbers/letters on subway signs).
Personal Problems: None as of now.
Schedule Conflicts: 
-None as of now. Whenever we cannot meet in person, we communicate via Facebook to let one another know of new discoveries outside of class time.
Plan:
Proposals/Steps to Attack the Problems/Action Items for the Coming Week: 
-Search the code to see where the Toptal logo is implemented and change the app so it can read numbers and letters.
Action Items for the Coming Week:
-In addition to altering the code, we want to understand what the code actually does. What information does it provide once objects are identified? How does the app transform the object to a readable form?

Experiments to Conduct, Ideas to Try, Vendors to Contact, Updated Schedule, etc.
-Continue to make sure our app works off of Mr. Lin's iPad as we add new code.