I dragged my feet for a couple of weeks, but finally resumed work on the EEG-based REM-detection system a few days ago.
I'm pleased to report that my replacement Muse S is functioning well. Its signal hasn't degraded, and it actually appears crisper than my first unit's was, even at the start.
As for the signal-processing code, it now reliably detects eye movements without false positives (at least none in my testing so far, while fully at rest). I ran these tests with my eyes both open and closed. Naturally, I still won't know for sure whether the sensitivity is high enough until I actually run the system overnight, since my nighttime eye movements may be weaker than those in my daytime tests (even though I tried to mimic the "weak" eye movements one would expect from simple object tracking).
It's worth noting that my current signal-processing code is totally different from what I had been using earlier. Specifically, I'm no longer using machine learning; instead, I wrote a custom eye-movement detection function.
Why did I make this change? Well:
1) Because of the RMA delay, my motivation had dropped quite a bit, so finishing the machine-learning infrastructure spanning my desktop and laptop was getting tiring. (It was designed to let you collect data anywhere, send it through the database to a "machine learning" computer, have that computer process it, and then upload the resulting model for any client to use. That would be nice to have, but it would also take quite a while to build, which made the non-machine-learning approach more appealing.)
2) I wanted the REM-detection algorithm to be highly configurable, to account for people's particular physical states as well as their preferences. Because machine learning is a "black box" to some extent (at least for someone at my experience level), the configurability I was able to achieve was far less than I wanted. My new approach, because it's custom coded, lets the user customize every variable along the way, allowing for maximum "hackability" so individuals can get the best results (and have more fun!).
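To make the "customize every variable" idea concrete, here is a minimal sketch of what a configurable, threshold-based eye-movement detector might look like. This is my own illustration, not the actual code: the class names, parameters, defaults, and the detection logic are all assumptions. The one grounded detail is that the Muse S has frontal channels AF7 (left) and AF8 (right), where eye movements show up as large slow deflections; the sign convention I use for left vs. right is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DetectorConfig:
    # Every step exposes a knob, so users can tune for their own physiology.
    smoothing_window: int = 10          # samples in the moving average
    min_amplitude_uv: float = 50.0      # smallest deflection counted as a movement
    large_amplitude_uv: float = 120.0   # threshold for a "large" movement

def moving_average(samples, window):
    """Simple trailing moving average, to suppress high-frequency noise."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def detect_movement(af7, af8, config):
    """Classify the dominant deflection in a short window of frontal-channel
    samples as an eye movement, or return None if nothing crosses the
    threshold. Horizontal eye movements push AF7 and AF8 in opposite
    directions, so we look at their difference. (Polarity is assumed.)"""
    diff = [l - r for l, r in zip(moving_average(af7, config.smoothing_window),
                                  moving_average(af8, config.smoothing_window))]
    peak = max(diff, key=abs)
    if abs(peak) < config.min_amplitude_uv:
        return None
    size = "large" if abs(peak) >= config.large_amplitude_uv else "slight"
    direction = "left" if peak > 0 else "right"
    return f"{size} {direction}"
```

Because every threshold lives in a plain config object, a user with weaker nighttime eye movements could simply lower `min_amplitude_uv` rather than retraining a model, which is the kind of hackability described above.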
Anyway, here is a screenshot of the configuration page for the new REM-detection system:

And here is a screenshot of the live data from the Muse S (and the processor), showing the eye movements [slight left, slight right, large left, large right, recentering]:

I plan to test out the system tonight, with cameras enabled (for debugging); this will be its first overnight run (I've only done daytime tests of it till now, which have worked fine so far).
Wish me luck!
-Venryx