Tuesday, April 19, 2016

Progress Report Due: 4/17/16

While Leslie finishes normalizing the frame (by dividing the number of edge pixels by either the diameter or the radius), I have started researching how VoiceOver works within applications. We hope to focus on VoiceOver once the frame is normalized. For now, we have decided to set textual recognition aside and return to it in the future (if our app still needs it then), once we implement, or at least gain a better understanding of, VoiceOver.

Thursday, April 14, 2016

Progress Report Due: 4/10/16

Progress:

This week, our project made progress regarding circle detection, as mentioned in the earlier post. The application's camera now masks every portion of the frame except for the detected circle and the area within it. White pixels trace the circumference of the circle, and an additional white pixel marks its center.


Focus/Problem:

Upon the camera detecting the circle, the masked frame flickers in and out rapidly, so the detection needs to be stabilized.


Target Items for the Upcoming Week:

This week we plan to focus on stabilizing the detection. Mr. Lin suggested we approach this by dividing the number of edge pixels by the radius or the diameter of the circle: since an edge is a linear, one-dimensional feature, its pixel count grows in proportion to the circle's size, so the ratio gives a size-independent measure.
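As a rough sketch of that idea (our real implementation may differ), the edge pixels lying on the detected circle can be counted and divided by a radius-dependent total, giving a size-independent confidence score; keeping the circle only while the score stays high should stop the mask from flickering:

    #include <cmath>
    #include <opencv2/opencv.hpp>

    // Minimal sketch: score a detected circle by the fraction of its
    // circumference that lands on edge pixels. Because a circumference
    // grows linearly with the radius, dividing by the radius-dependent
    // sample count normalizes the score across circle sizes.
    double circleEdgeScore(const cv::Mat& edges,  // binary edge map (e.g. Canny output)
                           cv::Point center, int radius)
    {
        if (radius <= 0) return 0.0;
        int onEdge = 0, samples = 0;
        for (double a = 0; a < 2 * CV_PI; a += 1.0 / radius) {
            int x = cvRound(center.x + radius * std::cos(a));
            int y = cvRound(center.y + radius * std::sin(a));
            if (x < 0 || y < 0 || x >= edges.cols || y >= edges.rows) continue;
            ++samples;
            if (edges.at<uchar>(y, x) > 0) ++onEdge;
        }
        return samples ? static_cast<double>(onEdge) / samples : 0.0;
    }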



Thursday, April 7, 2016

Circle Detection Progress

Summary:
Today in class, the circle detection FINALLY started to work to our advantage. When we started working with circle detection, our goal was for our application's camera to detect circles within view and block out all other objects the camera sees. As of today, our application can do so! We still have to fix some minor errors, though (stabilizing the camera's focus on the circle, for instance).

This is the portion of code that was altered in class today:
    cv::Mat dst;
    cv::Mat mask = cv::Mat::zeros( image.rows, image.cols, CV_8UC1 ); // single-channel mask, all zeros
    cv::circle( mask, center, radius, cv::Scalar(255), -1 ); // -1 means filled
    image.copyTo( dst, mask ); // copy values of image to dst only where mask is > 0
    image = dst;


Outcome:

The application's camera now blocks out everything except the detected circle; our next task is to stabilize that detection.


Tuesday, April 5, 2016

Progress Report Due: 4/3/16

*Presentation week.*

Notes from other presentations:

Team 2:
-Halt in compound eye simulation frame.
-Artificial Intelligence that learns from past experiences. (Similar to statistics and data mining).
-Machine Learning method he is using: Regression Basics (using a data set to predict values through a line of best fit; see the sketch after these notes).
 Residual sum of squares (RSS): the sum of squared differences between the actual and predicted values.
-Apply the regression method to object tracking, working on the drone's movement in three-dimensional space (the space is captured in two dimensions).
 Steps after...
 1) Integrate object tracking codes with machine learning algorithms.
 2) Create data set: Position drone around the room on a coordinate plane.
 3) Predict drone's location in a three-dimensional space.
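As a concrete illustration of those regression basics (a generic example, not Team 2's code), the sketch below fits a line of best fit to a toy data set by ordinary least squares and then computes the RSS:

    #include <cstdio>
    #include <vector>

    int main()
    {
        // Toy data set: predict y from x with a line of best fit y = a + b*x.
        std::vector<double> x = {1, 2, 3, 4, 5};
        std::vector<double> y = {2.1, 3.9, 6.2, 8.0, 9.8};
        const int n = static_cast<int>(x.size());

        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; ++i) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        // Ordinary least squares closed form.
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
        double a = (sy - b * sx) / n;                          // intercept

        // RSS: sum of squared differences between actual and predicted values.
        double rss = 0;
        for (int i = 0; i < n; ++i) {
            double residual = y[i] - (a + b * x[i]);
            rss += residual * residual;
        }
        std::printf("y = %.3f + %.3f*x, RSS = %.4f\n", a, b, rss);
        return 0;
    }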

Team 5: Hand-to-Hand Communication
-Using EMG signals from their arms to mimic human body movements.
-Data Acquisition Recap: 1 Muscle, 2 Electrodes at a time. (Muscles completed: Extensor digitorum, extensor pollicis brevis)
-Finding the "peaks of the data": the local maximums (highest values) of each finger movement. This is difficult to do when an entire signal is recorded at once, because the information comes out choppy rather than accurate (see the sketch after these notes).
-Graphed the peaks in Excel and numbered them (A is where the range starts, B is where the range ends).
-An alternate strategy for the project: Xcode. Using Xcode, they would put the data into a column and the software would automatically detect the maximums.
-With all five fingers, it is still hard to differentiate between the fingers, aside from the thumb.
-Possibly use planes to separate the data (an alternative).
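A simple version of that peak picking (a generic sketch, not the team's actual method) scans the signal for local maximums above a noise threshold:

    #include <vector>

    // Return indices of samples that are local maximums above `threshold`.
    // The threshold keeps choppy low-level noise from registering as peaks;
    // its value would have to be tuned to the real EMG data.
    std::vector<int> findPeaks(const std::vector<double>& signal, double threshold)
    {
        std::vector<int> peaks;
        for (int i = 1; i + 1 < static_cast<int>(signal.size()); ++i) {
            if (signal[i] > threshold &&
                signal[i] > signal[i - 1] &&
                signal[i] >= signal[i + 1]) {
                peaks.push_back(i);  // sample i is at least as high as both neighbors
            }
        }
        return peaks;
    }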

Team 8: Educational Mobile Apps for the Visually Impaired:
-Senseg Tablet: Register the Device, Set up the Environment, Open the Stand-alone SDK Manager on an existing project (in Android Studio), launch the SDK manager, download the Senseg SDK from the 'Extras' folder.
-Import Haptic feedback package from the SDK, declare context, attribute set, and content in Java format, create an if/else statement.
-The code allows Gabi & Lola to set the level of intensity and the area that the haptic feedback covers.
-They'd like to add edges in order to enhance the feel of haptic feedback. By defining the edges, the haptic feedback is more obvious to the user.
-Future Goals: VoiceOver, Edges (to enhance the Haptic Feedback), Circles

Team 4: Brainwave Controlled Games/Devices: OpenViBE:
-Stages of running a successful BCI scenario: Brainwave acquisition, signal processing/filtering, feature extraction, classification, online testing.
-Brainwave Acquisition (using the NeuroSky MindWave headset with USB).
-Temporal Filtering (a Plugin box used to filter the input signal (raw data) to extract a specific range of data).
-Raw data is used for the separation of data based on frequency bands.
-Time Based Epoching: generates signal 'slices' or 'blocks' of a specified duration and interval (e.g., if calculations are made every 1 second, 1 value is provided per second).
-Simple DSP: a box used to apply mathematical formulas to a data set (ex. Log(1+x) = logarithmic band power; see the sketch after these notes).
-Classification Algorithms: each coordinate is a point in two-dimensional space (when you think left, the alpha and beta waves behave differently compared to when you think right, so the points cluster differently in two-dimensional space).
-SVM Analysis: Support Vector Machine (finds the line (hyperplane) oriented so that the margin between the support vectors is maximized).
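To make the epoching and Simple DSP boxes concrete (a generic sketch, not OpenViBE's internals), a signal can be sliced into fixed-length blocks, and each block reduced to a logarithmic band power value log(1 + x), where x is the block's mean squared amplitude:

    #include <cmath>
    #include <vector>

    // Slice `signal` into epochs of `epochLen` samples and compute the
    // logarithmic band power log(1 + x) of each, where x is the mean
    // squared amplitude of the (already band-pass filtered) epoch.
    std::vector<double> logBandpowerPerEpoch(const std::vector<double>& signal,
                                             int epochLen)
    {
        std::vector<double> powers;
        for (std::size_t start = 0; start + epochLen <= signal.size(); start += epochLen) {
            double meanSquare = 0;
            for (int i = 0; i < epochLen; ++i)
                meanSquare += signal[start + i] * signal[start + i];
            meanSquare /= epochLen;
            powers.push_back(std::log(1.0 + meanSquare));  // Simple DSP: Log(1+x)
        }
        return powers;  // one value per epoch, e.g. one per second of signal
    }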

Team 6: The Forefront of EEG Research:
-Electroencephalography provides new insight into how humans work.
-Invasive vs. Non-Invasive
-Invasive: electrodes are placed within the scalp; this records specific areas of interest and receives only a small amount of noise, but surgery is required, making it a long and costly process.
-Non-Invasive: electrodes are placed along the outside of the scalp; this picks up a larger amount of noise and is less specific, and depends on the arrangement of the electrodes.
-Hans Berger: created the first non-invasive EEG recording for humans.
-Brainwaves provide context for the data users find and visualize states of mind, and classification of the waves can be utilized for projects, applications, and research.
-Medical use for EEG: provides solutions and methods to overcome mental and physical obstacles (e.g., mental disorders such as ADHD).
-Educational use for EEG: educates both yourself and those around you on the processes of the brain, helps as a learning tool for classrooms and tutoring, and gathers data to build a broad picture of all brains (a similar project: the Human Genome Project).
-Recreational use for EEG: The brain becomes a new kind of controller and provides opportunities for new kinds of gamers.
-Baseline Scan: like a control group; it shows, for instance, whether high beta waves are your natural state, and the algorithms are based off of it.
-Emotiv: founded in 2011, Emotiv is a company that provided information on the EPOC headset, which has contributed to her research thus far.
-The Emotiv Forum has been a place where she has found others working in the same area with questions similar to those of her own.

Team 10: Multi-Pixel Display Design:
-Designing a Printed Circuit Board (PCB: an electronic circuit with small strips of conducting material, i.e. copper).
-Using a PCB as a prototype for finger sensing experiment (they need to find out size & voltage).
-In order to design the layout, they have to calculate the widths/lengths of the pixels and the number of pads in a column, and choose shapes for the experiment. Then they draw the design by creating a 12 by 12 pad grid with 1mm by 1mm pixel widths and erasing unnecessary connections to focus on the shapes.
-Important factors: touch sensitivity (only using a fingertip to feel; control over voltage), minimum width (1mm by 1mm per pixel), standard trace width of 0.5 mm, keeping the same spacing length when widths change, and shape (seeing how many pads one row can fit).
-Issues: users might feel the electrovibration produced by the lines between the pixels (potential solution: eventually use a double-sided PCB so that the pixels are connected through the board instead of between pixels); the shapes may be too similar to each other to be distinctly recognizable; and users will not know where to move their fingers.
-Process: 1) Designing 2) Printing 3) Etching

Video Games for Autistic Children:
-Integrate the functionalities of a video game with specific functions that help autistic children learn certain social skills.
-Acquired knowledge: creating a website, adding music files, importing image files into the game, and using Tiled.
-Importing images into Processing: 1) Format the .PNG files into a grid. 2) Assign values to each 16x16 grid space (for the sprites, each image is given a name). 3) Functions are called to draw each image under certain conditions. 4) The function loads images from each file sequentially. 5) When the images are drawn, their size is doubled.
-Classes: a way to bundle data with functions that can be called to draw or do something. What makes a class special is that it facilitates the whole coding process, since its behavior can change based upon which values you set (see the sketch after these notes).
-Arrays: sequences of objects of the same data type. All the elements of an array are of one type, e.g. all of type int (whole numbers) or all of type char.
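Since those notes describe classes and arrays, here is a tiny analogue in C++ (the team works in Processing, so this is illustrative only): a sprite class whose draw behavior depends on the values you set, with the sprites stored in an array.

    #include <cstdio>

    // Illustrative only: a class bundles data (x, y, symbol) with behavior (draw).
    class Sprite {
    public:
        int x, y;      // values you set, which alter what draw() prints
        char symbol;
        void draw() const { std::printf("%c at (%d, %d)\n", symbol, x, y); }
    };

    int main()
    {
        // An array: a sequence of objects of the same data type.
        Sprite sprites[3] = { {0, 0, 'A'}, {16, 0, 'B'}, {32, 16, 'C'} };
        for (const Sprite& s : sprites)
            s.draw();  // the same function behaves differently per object
        return 0;
    }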

Image Processing to Identify Junctions & Lines:
1) A. Guiding System (Path is marked on the floor, drone uses down-facing camera)
    B. Identify Lines: I. Thresholding. II. Creating blob data. III. Get blob data (center: get the location of each pixel and average the locations) (radial difference: draw radii to the edge pixels, find the greatest and least radii, and calculate the difference between the two) (heading: calculate the angle of the longest radius, arctan(y/x)). IV. Choose the blob with the largest radial difference (turn parallel to the line, then move until directly over it). A sketch of these blob calculations appears after this outline.
    C. Switching Lines
    D. Location= # of junctions passed
    E. User Sets Path at the Beginning (Path: pi/2)
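A rough sketch of those blob calculations (generic, not the presenter's code): the center is the average pixel location, the radial difference is the longest radius minus the shortest, and the heading is the angle of the longest radius.

    #include <cmath>
    #include <vector>
    #include <opencv2/opencv.hpp>

    struct BlobStats {
        double cx = 0, cy = 0;   // center: average of the pixel locations
        double radialDiff = 0;   // greatest radius minus least radius
        double heading = 0;      // angle of the longest radius, atan2(dy, dx)
    };

    // blobPixels: every pixel in one blob; edgePixels: just its boundary pixels.
    BlobStats measureBlob(const std::vector<cv::Point>& blobPixels,
                          const std::vector<cv::Point>& edgePixels)
    {
        BlobStats s;
        if (blobPixels.empty() || edgePixels.empty()) return s;
        for (const cv::Point& p : blobPixels) { s.cx += p.x; s.cy += p.y; }
        s.cx /= blobPixels.size();
        s.cy /= blobPixels.size();

        double rMax = 0, rMin = 1e18;
        for (const cv::Point& p : edgePixels) {
            double dx = p.x - s.cx, dy = p.y - s.cy;
            double r = std::hypot(dx, dy);
            if (r > rMax) { rMax = r; s.heading = std::atan2(dy, dx); }
            if (r < rMin) rMin = r;
        }
        s.radialDiff = rMax - rMin;  // long, thin line blobs score highest
        return s;
    }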

Progress Report Due: 3/27/16



Progress:

As of right now, the application has detected the circle and found the circle's center.


Focus/Problem:

We are still in the process of adding a circle detection feature to our code. Through this, we plan for our app's camera to mask circles within range, so that all objects outside the circle are not shown. By doing this, we can focus the app's camera on a smaller portion of the frame, making it easier to eventually recognize numbers and letters within the circles. Right now, however, the detection is very glitchy: the circle's outline and center will not remain steady.

This upcoming week we will be focusing on our area of study/progress presentation.



Target Items for the Upcoming Week:

While we need to advance our project, this week's focus is our area of study/progress presentation. Through this presentation, we plan to inform our classmates about the circle detection technique our Xcode project uses, OpenCV's "Hough Transform." We will additionally discuss how we used the feature in our code and how it benefits our application.
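For reference, a rough illustration of how the Hough circle transform is called in OpenCV (the parameter values here are placeholders, not our app's tuned settings):

    #include <vector>
    #include <opencv2/opencv.hpp>

    // Detect circles in a color frame with the Hough gradient method.
    // Each returned cv::Vec3f holds (center x, center y, radius).
    std::vector<cv::Vec3f> detectCircles(const cv::Mat& frame)
    {
        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2);  // smoothing reduces false circles

        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                         1,              // dp: accumulator resolution (same as the input)
                         gray.rows / 8,  // minimum distance between detected centers
                         100, 30,        // Canny high threshold, accumulator threshold
                         10, 200);       // minimum and maximum radius in pixels
        return circles;
    }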


Equipment Required:

-Mac Laptops

-Mr. Lin's iPad & Cord

-Large circles to verify the app recognizes what we want it to recognize.

-Presentation practice with the eno board.