However, I eventually stopped working on it, mainly because the original Muse headband was not very comfortable (or stable) to wear for extended periods (eg. overnight).
Fast forward to the start of this year, and Muse has released a new device which is much more suitable for overnight use: the Muse S. (Commercial links aren't allowed, but you can find info on it by searching for "Muse S".)
Excited by this new version, I purchased one, and have been working for a couple weeks on producing a system which can reliably detect REM-sleep from the device's EEG data.
Here is my roadmap:
1) I spent a good week or so researching the field of machine learning. This took longer than I expected, because there are tons of different machine-learning algorithms and approaches out there -- and, having no experience in the field, it took me a long time to find the appropriate ones for this use case.
2) I then spent a couple days recording raw EEG data of sessions with different eye-movement amounts (none, micro, and macro), and storing that data with those labels, for training the machine-learning model.
3) I trained the machine-learning model, tweaking its parameters until its predictions reliably matched the labels/"targets" of each raw EEG-data "session". (splitting the data into training and testing sets, to mitigate over-fitting)
4) I then spent some time getting the (outdated) Muse SDK to work with their latest Muse S device.
5) To-do: Finish hooking up the live-streaming EEG data with the machine-learning model.
6) To-do: Experiment with when and how to use the detected eye-movement to trigger the various alarms in my app (sound, wifi light, and bed-shaker alarm).
7) To-do: Once I get things working nicely for me, clean up the app's user-interface, and get it all packaged and released on the Play store so others can also try it.
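To make step 3 above concrete, here's a minimal sketch of the train/test idea. Everything in it is hypothetical: I'm using a toy nearest-centroid classifier and made-up summary features, not the actual model or features from my project, just to show how holding out test sessions catches over-fitting.

```python
import random
import statistics as st

LABELS = ["none", "micro", "macro"]  # the three eye-movement classes

def features(window):
    """Collapse one raw EEG window into a few summary numbers.
    (Hypothetical features, chosen only for illustration.)"""
    diffs = [abs(b - a) for a, b in zip(window, window[1:])]
    return (st.mean(window), st.pstdev(window), st.mean(diffs))

def train_test_split(sessions, test_frac=0.25, seed=0):
    """Hold out a fraction of labeled sessions for testing, to check
    that the model isn't just memorizing the training data."""
    rng = random.Random(seed)
    shuffled = sessions[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def fit_centroids(train):
    """Toy nearest-centroid 'model': mean feature vector per label."""
    by_label = {}
    for window, label in train:
        by_label.setdefault(label, []).append(features(window))
    return {lab: tuple(st.mean(col) for col in zip(*rows))
            for lab, rows in by_label.items()}

def predict(model, window):
    """Pick the label whose centroid is closest in feature space."""
    f = features(window)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return min(model, key=lambda lab: dist(model[lab]))
```

Accuracy on the held-out sessions (never the training ones) is what tells you whether the model generalizes.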
So far, I've only been posting about my work on the French lucid dreaming forum "Attrape Songes", due to their having a more active geek/"hacker" subforum atm: Le coin des Geeks
(early testing: [Test] SmartLucider banner)
(mirror of this thread: here)
However, today I've decided to start posting here on DreamViews as well, since it's getting close to the point where I'll be using it nightly to try to induce lucid dreams.
Speaking of which, what is the general approach I'm going to try for lucid-dream inductions? Basically, my idea is to have eye-movements (for longer than a few seconds) gradually increase the intensity of various alarms. Eventually, the alarms will become intense enough to wake me (especially the bed-shaker alarm -- that one works quite well). However, the alarms are set to instantly stop as soon as the eye-movements stop.
So the idea is that as soon as I awake, I'll stop moving my eyes -- causing an instant alarm shutoff. And because I've just noticed the alarms being disabled, I can know for sure that I was dreaming just seconds earlier -- and at the start of REM -- so I'm then in the prime position to perform DEILD and/or WILD attempts. Because the alarm shutoff was instant and automatic, the hope is that I'll still be close enough to the boundary of sleep to relatively easily slip back into the dream, with my awareness intact.
I've heard of some people who have had success with this sort of approach, but I don't yet know how effective it is in general. Hopefully, by using a commercially-available headset like this, we can have multiple people attempt it -- without having to rely on guesswork as to when the REM segments are occurring, as with some timer-based approaches in the past. (However, I wouldn't recommend buying the Muse S until I've done substantial testing with this approach myself -- the device is fairly expensive, and I don't want you wasting money until I can see if, and how much, it actually helps.)
As this gives a good summary of the project, I'll end this first post here -- filling in more details in the responses to come.
Wish me luck!
-Venryx
P.S. By the way, shout-out to Sebastii for his email to me a few weeks ago! I wouldn't have started this project reboot (this early anyway) if he hadn't messaged me, prompting me to take a closer look at the Muse S, and reigniting my interest in the subject.
Your planned DEILD approach with "auto off alarm" sounds promising.
Does anyone know if LaBerge's REM detection mask also had a way to turn off the light flashes using a pre-arranged set of eye movements you could do while still asleep and dreaming?
I've got step 5 completed, but the annoying "no edits past 24 hours" rule on DreamViews is preventing me from updating my original post.
Anyway, I've got the live EEG data from the Muse headband streaming to my new app, which feeds it into the machine-learning model to pattern-match to the various eye-movement states (none, micro, macro).
However, in doing so I found that, while the ML model correctly distinguishes "none" from "macro", it does not currently do well at detecting "micro" eye movements.
Here's a screenshot:
The UI's pretty rough atm, but basically the white and gray lines are showing the raw EEG values, and the red, green, and blue are showing "how far the pattern is from matching" for each of the three eye-movement states.
So where the red line is at 0, it means the model thinks the red "eye-movement: none" state is most likely, and where the blue line is at 0, it thinks the blue "eye-movement: macro" state is most likely.
Though note:
1) The eye-movement predicted/estimated states are delayed by a couple seconds due to the 3-second "window size" that the ML model/classifier uses.
2) The graph only shows 1/10th of all EEG samples. The raw sample rate is 256 samples per second (per channel), but the UI only shows 25 (10%) of that. This is to keep performance acceptable for the visualization, and does not impact the detail level of data sent into the machine-learning model, since that runs on the Java side. (not sure if the bottleneck is in the rendering code or the Java->Javascript transmission; that said, the graphed detail is plenty enough for my current purposes, so I'm not gonna worry about it right now)
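The split between the full-rate detector stream and the thinned UI stream can be sketched like so (a simplified stand-in for my actual code; the decimation is plain sample-skipping, since only the graph is thinned, not the data fed to the model):

```python
RAW_RATE = 256          # Muse samples per second, per channel
DISPLAY_FACTOR = 10     # UI graphs only every 10th sample (~25/s)

def split_streams(samples):
    """Return (full_rate_for_detector, decimated_for_ui).

    The detector path is untouched; the UI path keeps every Nth
    sample purely for rendering performance."""
    return samples, samples[::DISPLAY_FACTOR]
```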
Anyway, moving forward, this means I need to do additional work to improve the machine-learning model. What's the main way you improve ML models? By feeding them a lot more data, in a wider variety of states.
But how do you efficiently collect tons of EEG data in various states? Well, long-term I would like to enable users to fully train their own models, for cases where the built-in model (trained on me) does not translate well to their particular body or environment.
So, I'm going to build a streamlined system by which you can just set the "labels" for a session within the Android app, then use the "volume up" and "volume down" side-keys to start and stop recording of EEG segments. These segments will then be uploaded to the database, which can then be sent to the machine-learning system to create a new model specifically for your own recorded data. (at first this'll require manual downloading and running of the Python machine-learning scripts, but it would be nice to eventually automate this in the cloud or something)
Anyway, this will take a fair amount of work to set up and streamline, but once it's done I think it'll enable me in the future to "move quickly" whenever I want to improve or change the ML models, as well as making it possible for end-users to create personalized models that work best for their setup. And having segments started and stopped on one's phone, with simple physical-volume-key presses, enables users to record lots of segments without having to be near a particular computer (the one with the machine-learning scripts), and without having to even open their eyes (since you don't need to use the touchscreen).
Anyway, next steps:
1) Set up the UI and volume-key system for the "Training" page in the app.
2) Add code to transmit those segments to the database.
3) Add code to the desktop version of the app to transfer those segments into the machine-learning scripts. (to then be re-executed manually for now)
4) Add more data, and tweak the model, until it can reliably detect the "micro eye movement" state. (it already does well for the current recorded data, but apparently that initial data was not varied enough)
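The volume-key recording flow planned above could be sketched like this (names and structure are hypothetical; the real implementation will live on the Android/Java side, not in Python):

```python
import time

class SegmentRecorder:
    """Sketch of the planned training-data flow: set labels in the app,
    then use volume-key presses to start/stop an EEG segment, which is
    queued for upload with its labels attached."""
    def __init__(self):
        self.labels = {}
        self.current = None
        self.segments = []   # finished segments, ready for upload

    def set_labels(self, **labels):
        # e.g. eye_movement="micro", eyes="closed"
        self.labels = dict(labels)

    def on_key(self, key, now=None):
        now = time.time() if now is None else now
        if key == "volume_up" and self.current is None:
            # start a new segment, snapshotting the current labels
            self.current = {"labels": dict(self.labels),
                            "start": now, "samples": []}
        elif key == "volume_down" and self.current is not None:
            # close the segment and queue it for upload
            self.current["end"] = now
            self.segments.append(self.current)
            self.current = None

    def on_sample(self, value):
        if self.current is not None:
            self.current["samples"].append(value)
```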
I'd guess a few months from now, as I suspect it will take time for me to experiment with (and fine-tune) the system until I find an approach that works well.
By the way, sorry about the lack of updates. There were some distractions a week ago, and then the last few days, my motivation has been sapped due to a hardware issue with my Muse S (the wire to one of the sensors appears to have had a defect which has led to the sensor's signal greatly degrading; I'm in the process of attempting a warranty replacement).
I'm still working on the app a bit, but until I get a fixed Muse S back, the work goes slowly. (it's hard to motivate yourself to work on the app when you can't actually use it at night)
My warranty/RMA request was accepted, and now I'm just waiting for the replacement.
In the meantime, I've been working a bit on getting my infrared camera hooked up to my program for motion-detection recording, so that once I resume my Muse S experiment, I'll have the video data from each night to compare the data to. (for more efficient optimization and debugging)
The motion-detection recording system is not quite complete yet, but I've gotten the camera-connection code working, the rendering of the data into the user-interface, and the processing to determine the pixel-delta/motion-level in each frame.
Screenshot: (white line showing the pixel-delta/motion-level)
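The pixel-delta/motion-level computation is conceptually simple; here's a toy version (frames as flat lists of grayscale values, and an illustrative threshold -- my actual code works on camera frames and has configurable settings):

```python
def motion_level(prev_frame, frame):
    """Mean absolute per-pixel difference between consecutive frames,
    scaled to 0-100. This is the 'pixel-delta' plotted as the white
    line in the screenshot."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame))
    return 100.0 * diff / (255.0 * len(frame))

def motion_trigger(levels, threshold=5.0):
    """Flag frames whose motion level exceeds the recording threshold;
    these are the moments that should trigger video capture."""
    return [lv > threshold for lv in levels]
```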
Still waiting for my Muse S warranty replacement, but I've gotten the first version of my motion-triggered recording system coded. Haven't yet used it for an actual night's sleep, so tonight will be my first attempt.
It's worth noting that I'll probably hit some sort of issue during the night, since I haven't yet confirmed that the code is stable (eg. free of memory leaks) when running for hours, and because I've yet to put in place any sort of system for detecting and correcting disconnects between the ip-camera and my laptop.
That said, it's still fun just to know a motion-recorder system is in place, and ready for my nightly testing, bug-fixing, and improving!
The below is a screenshot of the UI for the first iteration: (well, half of it -- the camera ip, resolution, framerate, etc. settings are in another panel)
As I thought might happen, the program did indeed crash last night -- I'm assuming due to a memory leak somewhere. I've since cleaned up some areas of the code, and will give it another shot tonight.
My motion-recording system survives through the night now. It uses a lot of memory, but I've tamed the memory growth enough that it doesn't crash now at least. ^_^
Note that there were actually three separate memory leaks in the program. Two of them I've fixed, but the last can't really be fixed without ditching the browser APIs used for video recording. (I'll probably do this eventually anyway, but I've sunk enough time into it that I'm content with it just working for now)
Also, all three of the memory leaks were due to flaws in Chrome/Electron! More precisely, all three involved memory accumulating for objects that I had removed all references (and functional connections) to -- meaning they should have been getting garbage-collected. The objects involved were either never getting garbage-collected (eg. MediaRecorder instances), or required an explicit "shutdown" call beyond just removing all references (eg. calling stop() on each track from MediaStream.getTracks()). So basically, the memory leaks were in the lower-level browser APIs, but I managed to find ways to "work around" the leaks enough that at least it no longer crashes.
In other news, I've grown very impatient waiting for my Muse S replacement, so I've sent another email asking for the tracking number on the return. It has been two weeks now since my package to them was received, and not a peep since. :/ (I'm hoping it is just due to coronavirus under-staffing, or a mistake or something, rather than general sluggishness of their return service)
Anyway, their email responses earlier were pretty quick, so hopefully I'll have an answer on Monday.
Indeed, they responded to my follow-up email quickly.
Hi Stephen,
Thank you for sending your old Muse back. Our sincere apologies for the delay on your replacement Muse S.
We have created a shipment for your replacement Muse S. You should receive an automated confirmation email providing the order details, followed by another email with tracking information as soon as your replacement Muse ships from our warehouse. I have ensured this will be shipped using a FedEx 2-day shipment method.
I dragged my feet for a couple weeks, but finally resumed work on the eeg-based rem-detection system a few days ago.
I'm pleased to report that my replacement Muse S is functioning well. Its signal hasn't degraded, and appears to be crisper than my first was even at the start.
For the signal-processing code, I've gotten it to reliably detect eye movements now, without false positives (at least none in my testing so far, when fully at rest). I did these tests with my eyes both open and closed. Naturally, though, I won't know for sure if the sensitivity is high enough until I actually try the system overnight -- since it's possible my nighttime eye-movements are weaker than those of my daytime tests (even though I tried to mimic the "weak" eye-movements one would expect from simple object tracking).
Though it's worth noting that my current signal-processing code is totally different from what I had been using earlier. Specifically, I am no longer using machine-learning, but rather a custom-built eye-movement detection function.
Why did I make this change? Well:
1) Because of the RMA delay, my motivation had gone down quite a bit, so completing the machine-learning infrastructure spanning my desktop and laptop was getting tiring. (It was designed to let you collect data anywhere, send it through the database to a "machine learning" computer, then have that computer process it and upload the resulting model for any client to use. It would be nice to have, but would also take quite a while to build, which made the non-machine-learning approach more appealing.)
2) I wanted the rem-detection algorithm to be very configurable, based on people's particular physical state, as well as preferences. Because machine-learning is a "black box" to some extent (at least for someone of my experience level), the configurability I was able to achieve was a lot less than I was wanting. My new approach, because it's custom coded, lets the user customize every variable along the way, allowing for maximum "hackability" so that individuals can get the best results (and for more fun!).
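To give a feel for the "hackable" approach, here's a toy version of a hand-rolled detector: flag any short window whose peak-to-peak swing exceeds a user-tunable threshold (eye movements show up as large, slow deflections on the Muse's frontal channels). All the numbers here are illustrative knobs, not my actual settings.

```python
def detect_eye_movements(samples, rate=256, window_s=0.25,
                         min_deflection=50.0):
    """Scan fixed-size windows of one EEG channel; report the start
    time (in seconds) of each window whose peak-to-peak amplitude
    exceeds `min_deflection`. Every parameter is user-configurable --
    that's the point of ditching the black-box ML model."""
    win = max(1, int(rate * window_s))
    hits = []
    for start in range(0, len(samples) - win + 1, win):
        chunk = samples[start:start + win]
        if max(chunk) - min(chunk) >= min_deflection:
            hits.append(start / rate)
    return hits
```

A real detector would also want per-channel handling, baseline drift removal, and the accelerometer-based body-motion filtering mentioned later, but the exposed-knobs structure is the idea.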
Anyway, here is a screenshot of the configuration page for the new rem-detection system:
And here is a screenshot of the live data from the Muse S (and the processor), for the eye-movements [slight left, slight right, large left, large right, recentering]:
I plan to test out the system tonight, with cameras enabled (for debugging); will be the first night (I've only done some daytime tests of it till now, which have worked fine so far).
Good news:
1) There were no errors or crashes, and the Muse S battery lasted throughout the night.
2) The eeg processor did indeed seem to pick up all my eye movements! More specifically, I watched the video recordings from the night, and found many cases of eye-movements in the video which were matched by eeg-detections (and no definite misses, from the portions I checked).
Bad news:
1) Either I set the eeg detection to be a bit too sensitive, or I also move my eyes a lot in non-rem sleep, as there were a considerable number of detections in between the regular 90-minute sleep cycles. I began looking into this more, but found it inefficient because...
2) My program does not currently record the raw EEG samples, only the eye-movement detection times.
Because of #2, it was hard for me to see exactly what was going on, and thus to fine-tune the settings. So I realized the next thing I need to do is add recording of all raw EEG samples, for fully-detailed analysis each morning (at least while optimizing things).
Thus, I set to work today on improving the session-data storage procedure (it was basically only storing the video-recording files before today), and adding the recording of eeg samples -- using "append mode", so that if the process ever crashes mid-session (due to bugs, Windows restarting, etc.), the data will remain.
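The append-mode idea can be sketched as follows. The JSON-lines format and function names here are just illustrative (my app doesn't necessarily store it this way); the point is that opening in append mode means everything already written survives a mid-session crash, and reading tolerates a truncated final line.

```python
import json

def append_samples(path, timestamp, samples):
    """Append one batch of raw EEG samples as a JSON line. With append
    mode ('a'), a crash can only lose the batch being written at that
    instant -- all earlier data stays on disk."""
    with open(path, "a") as f:
        f.write(json.dumps({"t": timestamp, "s": samples}) + "\n")

def read_samples(path):
    """Read back whatever made it to disk, skipping a possibly
    truncated final line (the one interrupted by a crash)."""
    out = []
    with open(path) as f:
        for line in f:
            try:
                out.append(json.loads(line))
            except json.JSONDecodeError:
                break
    return out
```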
Anyway, I just got that done, so now tonight I should be able to obtain the data I need to properly fine-tune the system in the future.
However, I only managed to finish adding the eeg-recording today. I don't yet have the user-interface created for efficiently going back and analyzing that recorded data. Thus, tonight will just be for recording the raw data, followed by another day or two of work where I build up the user-interface to then look through that data.
So it may still be a while before I have, eg., a screen capture showing me having real in-dream eye-movements, with the camera feed, the raw eeg samples, and my program's analysis data displayed on top of that...
Will be fun (and informative) once the program gets to that point, though. (should be just a few days, as I work out the UI)
With some tweaking, my next sleep session had much crisper rem-period detections -- enough that it correctly identified both of the dream periods (which I marked with a hand gesture after awakening from them), without any false positives.
Granted, the sleep session was not a full night (it was only about three hours), but I'm still very pleased with the results, and am confident in its long-term usability.
Below is the raw log of all "eye movement detection" moments during my sleep, where the "eeg activity" (ie. clustering of eye-movement detections around that point) is above the minimum to be considered a period of dreaming (5). (the "peak" value shows how much movement there is within that particular eye-movement 4s detection period)
==========
12:52:15 @eegActivity:5 @peak:100
12:52:19 @eegActivity:6 @peak:100
12:52:23 @eegActivity:7 @peak:75
12:52:53 @eegActivity:6 @peak:39
15:56:36 @eegActivity:5 @peak:97
15:56:41 @eegActivity:6 @peak:75
15:56:45 @eegActivity:7 @peak:21
15:56:49 @eegActivity:8 @peak:77
15:56:53 @eegActivity:8 @peak:67
15:56:57 @eegActivity:9 @peak:69
15:57:02 @eegActivity:10 @peak:82
15:57:06 @eegActivity:11 @peak:76
15:57:10 @eegActivity:11 @peak:100
15:57:15 @eegActivity:12 @peak:37
15:57:20 @eegActivity:13 @peak:68
==========
The two lines with " << ..." after them mark the two times where I woke up from a dream, and did a hand motion that was recorded by the camera.
This matches up well with the two sequences of eye-movements, from during the two dreams. (followed by a short while where I shift around)
(Oh, and the two sections at the start and end are of course from moving around at session start and end.)
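For anyone curious how an "eeg activity" score like the one in the log could work, here's a guess at the clustering idea (the actual window length and details aren't something I've spelled out here, so treat the numbers as placeholders): each detection's score counts how many detections fell within some preceding window, and a run of scores at or above the threshold (5) is treated as a dreaming period.

```python
def eeg_activity(detection_times, window_s=120):
    """For each eye-movement detection time (seconds), count how many
    detections (itself included) occurred within the preceding
    `window_s` seconds -- a simple clustering score."""
    return [sum(1 for s in detection_times if 0 <= t - s <= window_s)
            for t in detection_times]

def dream_periods(detection_times, window_s=120, min_activity=5):
    """Detection times whose clustering score reaches the 'considered
    dreaming' threshold."""
    act = eeg_activity(detection_times, window_s)
    return [t for t, a in zip(detection_times, act) if a >= min_activity]
```

Note how this reproduces the log's pattern of the activity score climbing by one with each closely-spaced detection, while an isolated detection stays at 1 and never triggers.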
Anyway, I'm very pleased. I'll tweak it a bit more, but then I move onto phase 2, which is finding the appropriate prompts which are able to wake me -- just enough that I awake, and then stop moving my eyes, which will deactivate the prompts and allow me to attempt a dream-reentry. I already have light and shake prompts set up (which activated fine), but I need to fine-tune the activation points, as well as train myself to be able to wake from them without breaking my dream-like immersion too much.
Regarding display of nights' data, the program will soon have nice graphs and such showing the periods of rem, markers for awakening, etc. The text-log format above is just being used until I finish the user-interface + image/video capturing for that.
While I haven't gotten a lucid dream from the system yet, I did have two vivid dreams in the above session (as signaled at the two marked moments). And more importantly, the rem-detection system seems to finally be functioning, which is great to finally have available.
Onto the prompt-configuring, and mindset-training phase now!
The results again appear realistic/accurate with regard to the sleep cycle -- approximately 90 minutes between each sequence of eye events, with plausible rem durations. (excluding the first and last segments, which were due to motion while situating for bed, and getting up in the morning)
Arguably, the rem-detector missed only the first rem session (somewhere between 3:09 and 5:48). I just checked the full log (eeg detections which didn't reach the "eeg activity/clustering >= 5" threshold) and the camera feed, and it appears there was a very weak rem session around 4:17 (max eeg-activity of 2). But I don't consider this an issue, since the eye-movements were so minor for that "weak rem" segment that triggering prompts then would likely be counterproductive anyway.
Another good point is that, from what I can tell, it didn't have a single (meaningful) false-positive! The detections at 8:01 and 8:19 might look like ones at first, but checking the video recording, those are just from my repositioning in bed -- which will soon be filtered out when I process the accelerometer data from the Muse S (to detect when the whole body moves, which will cause eeg changes during that period to be ignored).
Anyway, still haven't obtained a lucid from the procedure yet, but the above further increases my confidence in the core rem-detection process working well. (and I have had some interesting dreams relating to the prompts, such as a moment in a dream where I was focused on the sensation of something in my hand -- likely caused by the bed-shaker device that I was holding physically starting to trigger)
Technically, I actually had two -- but the first was only a few seconds long, as the bed-shaker prompt woke me up seconds after triggering lucidity. XD
After that disappointing first segment (which lasted only ~12 seconds, as supported by the log), I tried to fall back asleep, as I figured my brain still had rem-sleep queued up for that cycle (which turned out to be true).
Thankfully, that dream-reentry attempt succeeded: twenty minutes later, I had re-entered a dream, and I soon regained awareness/lucidity.
The below is the EEG log from the night, with markers added for the key events:
Unfortunately, the ip-camera recorder had a malfunction this session, so I wasn't able to 100% confirm the event times. That said, I'm pretty confident I pinned them correctly, based on the times and event durations I remember [eg. I know I woke at ~6:30 to check the time, prior to the lucids]. Regarding the malfunction, it seems to have been caused by the 30s camera-reconnect-time being a bit too fast this time for my ip-camera; I've since changed it to 65 seconds, which should prevent the problem in the future.
Anyway, this is my recollection of the lucid segments (written prior to checking the EEG logs and [frozen] camera recordings):
Spoiler for Dream journal entry:
First segment
==========
felt vibrating alarm; realized dreaming; tried to get up out of bed; bed-shaker intensity increased; woke me up, lying in bed again; confirmed awake; disappointed, but tried to go back to sleep
Second segment
==========
realized I was in-dream again; immediately went for the window; window hard to open, so went through house/apartment to front door; manage to get outside
see CW returning from somewhere; fly up; can't break through roof of building, as just increases with my height; go out glass doors; fly off to place in distance; it's some kind of large building, with parking lots nearby; I mess around with the people in the cars, flying around, with the pine trees surrounding
becomes evening; show in nearby building (theater) is starting; parents are there, and a friend (CL); I sit with friend, with old show on (it was kinda like Tom and Jerry, or whatever that cat and mouse one was); it's funny, but not as much as regular lucid dream content; I prompt friend to leave with me; he kinda agrees, though it gets confusing, and he ends up disrupting the show; we leave area, and I fly with friend around a bit; we talk about dream control, and I notice a storm brewing up as we get closer to an area, with about 5 huge tornadoes; I try practicing my dream control by trying to break up the growing tornado set, even while looking closely at them; I greatly reduce the intensity and speed, while visualizing it dispersing and/or reversing for short periods, but can't manage to fully delete it (it springs back after the focusing); I give up, and just leave the area, returning to theater
we see show has ended now, and most have left to upstairs room for "after party"; I follow, and find a group of people in small room; I see another friend there (DY); I talk with them a bit, finding it funny to see them so clearly in fake world; one of the characters seems more "in the know" than the others, seeming to understand my status as a lucid dreamer; they respond quizzingly; we end up moving to another, large room to ourselves as we talk, with the main contents being that below (one other part was that she kinda took on the identity of Cynthia, talking about her daughter a bit I believe):
Final portion (description taken from private-message)
==========
Man, there's something uniquely satisfying about talking with a dream character, which you know as such, as they argue about the afterlife, psi, etc. It's absolutely hilarious, and again, very satisfying for some reason.
Like, she was making an "argument" about how heaven would be a dystopia anyway if we had to live there forever. Because, she said, imagine if someone sent an enderman to heaven which killed everyone, then everyone would respawn, then it killed them again -- wouldn't that be worse than just dying? But I responded: that is a hypothetical which is incompatible with the definition/promise of heaven. That is, heaven is defined/promised as a place which is free from the reach of regular life chaos and negative forces. That is, it would be under God's sovereign control, so such hypotheticals would never come up.
She then acted like I was naive for believing in a spiritual side to the self. I found that hilarious coming from a dream character. I just kind of laughed at the situation, before resetting and beginning a serious response -- though woke up before I could fully go into it. (I was going to bring up the various psi experiments, and ask her how she makes sense of them in a materialistic universe, in a parsimonious and sensible-by-evolution [eg. species' not evolving to rampantly exploit it] way -- but only got as far as bringing up psi in general before the dream started fading)
Anyway, I had a lot of fun in that 11 minute segment, and am glad the prompts did not wake me during that second segment. In the future, I will attempt to resolve the "unwanted prompt wakeup" issue by having a specific eye-movement sequence one can perform, which will disable the prompts for a few minutes. (perhaps the motion of going cross-eyed! -- as this one is easy for EEG to detect, very fast to perform, and doesn't overlap with regular eye movements)
It's only one/two lucids so far, but the solidity of the rem-detector makes me confident that I'll be able to continue building on this base, eventually forming a full-featured "lucid dreaming assistant" which will help keep people on track for persistent learning. In the past, my motivation has dropped over the weeks/months as my induction rate fell off, leading to my ceasing induction efforts. That induction-rate drop-off may still occur, but at least now I have a solid rem-detector to work with, plus detailed recordings from both the camera and the EEG headband, providing the nightly data I crave to keep my motivation up. (that is, even failed nights are now interesting to go over, since they have a full record of my dream-states/eye-movements throughout the night)
Had my second lucid-dream from the procedure. And more importantly, I finally got around to building a basic UI for reviewing past sessions!
Here is the panel where you select which session to review: (the DJ and REM columns aren't coded/populated yet)
And here is the area where you can review the details from a session:
The video at the left is selected by using the chart at the bottom, clicking one of the "cam recording" areas. It has a playback speed setting for rapid replay of long segments.
The chart at the bottom is the more interesting part, though. It shows an overview of:
1) Eye motions, in the white eeg activity-over-time line.
2) Head motions, in blue (for ignoring eye-movements when awake).
3) Camera-recording periods, in orange.
4) The red and green lines show the "eeg-activity activation threshold" for the light (red) and bed-shaker (green) prompts.
I'm very pleased that the night I finally had a long sleep session (8 hours), also resulted in a second lucid dream from the procedure! (in the last rem segment -- a while after the wakeup at 23:45)
As far as lucid-dream adventuring goes, it was actually below average, but I'm glad I was able to try out the "feel for the real-world prompts you know are occurring right now" objective I'd set earlier:
Spoiler for Journal entry:
Do rc; realize dreaming; vibrates; reset to bed (in old house)
Try getting up; vibrates; reset; happens a couple more times, each time getting slightly further; lock eyes forward; roll out of bed; make it to stairs; feel one more vibration, but start engaging with objects, which keeps me grounded; it's night
Observe family; mom had recently come back from shopping; DW and dad in family room; walk to dining; have fun by flipping table on side and calling new style (mom ignored); walk outside
As walking on street, i look for light and feel for vibration from real-world prompts, when i move my eyes; no longer sense them, tho notice dream destabilization occurring; stop, to keep dreaming
We come across rowdy group near night club; we anger them somehow; i run and fly with others; we lose them, and make to playground far off near lake; one of them finds us, and starts climbing the structure (a few feet beneath me); I roll my eyes (knowing dream), and say...
Me: "Okay, pause, pause."; he obeys.
Me: "So how do you explain your knowing we were here? That seems unrealistic."
Him: "I just searched", he says.
Me: "In this huge city? We have a huge lead, yet somehow you find us right away? On your very first try?"
Him: "Well... it wasn't my first try. It was my second!"
Me: "Really? Then where was the first place you searched?"
He then gestured vaguely behind me. I then turn, and try to visualize what location that would be. Dream world is new of course, so I had no memories; my sister CW "reminded" me that that's in the direction of some previous experience, near some large, fancy houses near a long strip of flat land (visual details from new visual scene forming). It's interesting seeing them, but I don't want them to destabilize my dream too much.
[forgotten section here]
A while later, I find myself (and my family) at a party at a public park. They are playing games like volleyball, where they're keeping score, and using that to determine the prizes everyone will get. I look at the scoreboard, and it seems to have been manipulated; some numbers seem artificially made equal, and they didn't update the scores properly from last couple of scores. This is reinforced by comments by others. One lady on scoreboard, confirmed this as she walked up, with a certain movement that left her in exactly the same pose as shown in the scoreboard picture, which I found funny.
I re-remember that I'm dreaming, and that I have my eye-tracker device on (with prompts). I decide to try seeing if I can feel the prompts again. I start moving my eyes left and right, which I know the eye-tracker must be picking up on. I try to feel the vibration in my hand that I know is occurring in the real world. I notice some vague flashes in my visual field. I don't feel full-fledged vibration in my hand, but I feel a few minor sensations (not sure if from actual bed-shaker or just dream emulation). I do this for a few more seconds, before the dream finally destabilizes more, and I find myself in the real world (confirmed about a minute later).
Anyway, glad to be back! I plan to use the system most nights now. (However, due to my unorthodox sleep situation, I often sleep in small segments throughout the day and night. I usually won't post those, since I've found these short sleep sessions tend to have "muted"/muddled EEG/dream activity. A full night's sleep appears to increase the intensity of the late-night dreaming sessions, increasing the frequency and intensity of the prompts, as well as forming a meaningful chunk of data that's worth sharing. So whether I share my next sleep, for example, depends on whether it ends up lasting long enough to be worth doing so. I'll try to align my sleep to times where a full night can occur -- though realistically, I often have trouble maintaining that long-term.)
My next night was only a partial one, but I got an interesting result. I went to bed in the evening and had a long, vivid dream. I remembered quite a lot from it, and even had some "semi-lucidity" (where I realized the rules were somehow different/more bendable than normal, but didn't specifically identify it as being a dream).
However, I decided to stay in bed and fall back asleep. I awoke a few hours later, but was disappointed because I didn't remember any more dreams (at least beyond a few weak fragments). This has happened many times in the past, and I've generally assumed it just meant that I wasn't paying enough attention, and so didn't recognize when the later dreams had occurred.
As suggested by my sleep chart below, however, this might not always be the case:
Rather than me just "not recognizing" a standard set of dreams, I really just did not have any quality REM segments after that initial burst! That is, it's no wonder I didn't remember much from the later dreams: the later dreams (assuming they occurred) were much weaker than the initial one. So the lack of further dream memories (after an awakening) that I've hit so often in the past may sometimes just be due to unusual sleep cycles and the like affecting REM-sleep quality, as opposed to flaws in my mental preparation. (I'm sure there's some degree of that as well; nonetheless, it's encouraging.)
It's worth noting that after the section above, I did fall back asleep. I didn't expect to, so I hadn't put my headset back on, and unfortunately I didn't get the chart for it. I do remember the dreams were vivid again in that next section, so the dampened-REM effect seen above seems to have ended by then. In the future, I plan to keep making the session-starting process easier so I miss fewer of these accidental-sleep segments (for example, having the audio status reports tell me whether the camera successfully connected, rather than the current manual confirmation process, as well as an option that automatically starts a session when it detects that the Muse S device has been turned on).
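For what it's worth, the auto-start idea could be as simple as a polling loop. This is just a sketch with placeholder hooks -- `is_connected` and `on_connect` stand in for whatever the Muse SDK/app actually exposes; they are not real SDK calls:

```python
import time

def auto_start(is_connected, on_connect, poll_interval_s=5.0):
    """Poll until the headset is detected, then start a session once.

    is_connected: callable returning True once a Muse S is seen
                  (hypothetical hook, not an actual SDK function).
    on_connect:   callable that begins recording a sleep session.
    """
    while not is_connected():
        time.sleep(poll_interval_s)
    on_connect()
```

In the real app this would presumably run as a background service, and would also need to handle the device dropping off again mid-poll.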
Note that, like the previous session, there is one cycle at the start with a lot of REM/eye-movements, but not much in the cycles after that. (This matches my memory of a semi-lucid dream [noticing malleability, but not identifying it as a dream specifically] at the start of the night, followed by "not much" after that.)
I believe the cause of this (both this session and the previous one) is that I was pretty tired/sleep-deprived when I went to bed. So it seems that, when I'm sleep-deprived, the "REM rebound" that occurs is concentrated at the start of sleep (at least these last few times). This is interesting, as I had previously assumed that REM rebound would merely increase REM intensity a bit throughout the night. Instead, it seems to cause a "super REM cycle" at the start, followed by a series of REM segments that are actually less intense than normal. Of course, I can't confirm this from just a couple nights, but we'll see if the pattern holds up.
It's so cool to be learning things like the above by examining the raw data each night. (I think it would be harder to do this with sleep categorizers that only report the sleep stage, rather than showing the raw eye-movement chart -- the chart lets me see differences in intensity, not just duration. I hadn't realized previously that the intensity of REM varied so much from cycle to cycle.)
The segment at the end was brief because I gained lucidity! Once I did, I started looking around at the beautiful scenery, which made the "EEG activity" counter go up, triggering the second-level bed-shaker prompt -- which I felt very clearly in my dream, and which woke me up.
Spoiler for Journal entry:
Talking to really tall person, in what seems like a basketball court; I suddenly (over the course of a couple seconds) realize just how crazily tall they are.
Me: "Woah, you're like really, really tall."
Him: "You just realized that?"
Me: "How tall *are* you?"
Him: "10 feet"
Me: "But how come we don't know of you? Like, the world record right now is 8 foot something."
I then realize someone this tall is unrealistic, and realize it's a dream. I leave the boring scenery, jumping over the wall at the edge of the court and finding a grassy park area behind it.
The dream continues nicely, and I do some tests regarding limiting body motion, to see if that keeps the dream more stable. It seems to work, until I move my eyes around enough (while looking at beautiful scenery, a dirt-road area with tons of trees around) that my bed-shaker alarm triggers. I was apparently holding it over my chest, so I felt the vibration on both my chest and in my hand (but mostly the former). This woke me up within a couple seconds, as confirmed with a camera gesture.
I'm not really disturbed by that, however, since it's what I planned/expected. For now I'm just trying to increase the rate of lucidity induction. Later, I'll work on adding a system to disable the prompts with a special eye movement (most likely looking at one's nose / going cross-eyed, as that's not a motion one normally performs). I haven't added it yet due to laziness, plus focusing only on induction for now (well, induction plus training to re-enter a dream easily upon waking from one).
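One plausible signature for that cross-eyed gesture: during an ordinary left/right saccade the eyes move together, so the two frontal channels should deflect in roughly opposite directions, whereas converging the eyes (each rotating toward the nose) should deflect them roughly symmetrically. Here's a rough sketch of that heuristic -- the channel behavior, amplitude threshold, and correlation cutoff are all assumptions, not measured Muse S values:

```python
def looks_like_convergence(left_ch, right_ch, thresh_uv=50.0):
    """Heuristic: large, same-direction deflections on the two frontal
    channels -> convergence-like; mirrored deflections -> ordinary saccade.

    left_ch, right_ch: lists of samples (microvolts) from a short window.
    """
    # Mean-center each channel to isolate the deflection.
    m_l = sum(left_ch) / len(left_ch)
    m_r = sum(right_ch) / len(right_ch)
    d_l = [x - m_l for x in left_ch]
    d_r = [x - m_r for x in right_ch]
    # Require a deflection big enough to be a deliberate gesture.
    big_enough = max(max(abs(x) for x in d_l),
                     max(abs(x) for x in d_r)) > thresh_uv
    # Cosine similarity: near +1 means same-direction (convergence-like),
    # near -1 means mirrored (a normal left/right eye movement).
    dot = sum(a * b for a, b in zip(d_l, d_r))
    norm = (sum(a * a for a in d_l) * sum(b * b for b in d_r)) ** 0.5
    same_direction = norm > 0 and dot / norm > 0.8
    return big_enough and same_direction
```

In practice you'd want to validate this against recorded samples of the gesture (and of normal saccades) before trusting it to disable prompts.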
This is the longest recorded session I've had yet -- just over 11 hours! And I actually slept that long too (well other than the last half hour or so), which was made possible due to my having been sleep-deprived (only an hour and a half of sleep the previous night).
There are two things to note about this session:
1) I had a lucid dream at some point! Unfortunately, it was sometime earlier in the night -- and I appear to have lost my lucidity as the dream went on, as I don't remember the point at which I woke up from it (just some short segments where I remember knowing I was dreaming, and acting accordingly).
2) I have a good estimate of how long the Muse S battery lasts now. After 11 hours of being on, it was still running when I got up! I checked the battery, and it said it was at 5%. So the battery seems to indeed be sufficient for any regular-length sleep session.
I don't have many comments on the EEG data, other than to note that we again see the pattern of a muted REM cycle after the initial burst, and that going forward, I'm making a small change to the REM-detection config (changing the max-trigger-rate/min-interval for EEG motions from 3s to 2s, to place more weight on short bursts of frequent movement). I don't know whether this will help or hurt the overall detection scheme, but it's worth trying.
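The min-interval idea can be illustrated with a tiny counting sketch (not the app's actual implementation): a detection is only counted if enough time has passed since the last counted one, so shortening the interval from 3s to 2s lets a rapid burst of movements contribute more counts.

```python
def count_movements(event_times_s, min_interval_s=2.0):
    """Count eye-movement detections, enforcing a minimum interval
    between counted events.

    event_times_s: sorted timestamps (seconds) of raw detections.
    """
    count = 0
    last_counted = float("-inf")  # so the first event always counts
    for t in event_times_s:
        if t - last_counted >= min_interval_s:
            count += 1
            last_counted = t
    return count
```

For example, for detections arriving once per second over seven seconds, a 2s interval counts 4 events while a 3s interval counts only 3.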