POST MORTEM - APRIL 12TH, 2023
For the final entry in this blog I'm just going to drop the formatting and speak my mind. Of course, as you know if you've read all the entries, my group wasn't the most helpful to me. I was reflecting on it while writing the peer feedback, and I honestly don't expect much in return. I'm more than happy to carry a majority of the weight in a group project; I enjoy project managing and having a hand in everything. That said, I think my team members leaned on me a little too much this semester. I spent the most time on the project by far, and while I was working my way through the semester I kept asking myself, "am I stepping on toes?" I know that when I'm in the opposite situation and someone else is doing a lot of work, I feel like I'm falling behind. I'm conscious of this, so multiple times throughout the semester (aided by my busy work schedule) I took a week and contributed far less to the project just to see if anyone would pick up the slack... and they didn't. At that point, I shifted gears and got ready to make this project manageable on my own. This is the main reason we had to scale our scope back and lose the future museum exhibits. It's also the reason our project is somewhat unstable and crashes on lower-end devices: we had much-too-high-resolution assets from the start, and despite my cautioning we kept them. A little teamwork would have gone a long way here!
Anyway, despite all that, I had a lot of fun in this class! I particularly enjoyed going back to web development after a long hiatus. It was also very interesting to throw my passion for XR technologies into the mix and gain some insight into what's possible on mobile VR platforms -- and web technology for that matter. I learned a lot more about how JavaScript functions at a higher level, learned some interesting system administration stuff on the side and made a pretty decent WebXR experience (when played on desktop).
And for the media reflection, of course, the demo video:
ENTRY 09 - APRIL 4TH, 2023
Overview
If this week had a title, it would be "Adding Polish", also one of my favourite git commit messages... Anyway, this week we ironed out a majority of the functional bugs preventing the experience from feeling smooth. This includes things like images not generating in the past museum and glitchy navmesh scripts. These are all fixed! However, we are a little behind in adding content to the museum and my team remains absent. I need help generating all of the content to place in the museum itself. Luckily Jelena is helping me out with the research and exhibit development.
Challenges and Successes
I had a lot of problems trying to debug our navmeshes because we wanted to use them to keep the player contained regardless of which museum they are in. In the end, this is achieved by removing and respawning the correct navmesh and reassigning all of its attributes each time the player teleports across a certain x-coordinate boundary. The result is a clean, smooth navmesh experience that makes it feel like the model has collision! Awesome. Still to be added to the navmesh experience, though, is the capability for the mesh to keep the players inside (or out of) the teleporting apparatus. I also took it upon myself to add a bit of polish to the project this week. Namely, I added some spheres with spherical textures on them (sky maps) to prevent the players from looking right through the windows to the other museums (supposedly in another time). They now see a sunset out the windows of the museums, and I tweaked the environment component a little as well so that the lighting matches. I also added circles-sphere-env-maps to some objects in the scene. The main challenge here was figuring out how to place the spheres so that they don't interfere with one another or with the geometry of the museums. I also added some labels and circles-artefacts! I like using the circles components!
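The heart of that swap can be sketched as a pure decision plus a guard, so the scene graph only gets touched when the player actually crosses the boundary. This is an illustrative sketch, not the real component: the boundary value and navmesh names are assumptions.

```javascript
// Illustrative boundary between the two museums (the real component checks
// a specific x-coordinate; 0 is an assumed value here).
const BOUNDARY_X = 0;

// Decide which navmesh should be live for a given player x position.
function navmeshForX(x) {
  return x < BOUNDARY_X ? 'past-museum-navmesh' : 'present-museum-navmesh';
}

// Wrap the remove-and-respawn step so it only fires when the needed mesh
// actually changes -- i.e. when the player teleports across the boundary.
function makeNavmeshSwapper(respawn) {
  let current = null;
  return function onPlayerMove(x) {
    const needed = navmeshForX(x);
    if (needed !== current) {
      current = needed;
      // In A-Frame terms: remove the old navmesh entity, re-add the new
      // one and reassign all of its attributes.
      respawn(needed);
    }
    return current;
  };
}
```

The guard matters: without it, the remove-and-respawn would thrash the scene on every movement tick instead of only at teleports.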
Hours Worked
- 12 hours programming, updating to new circles versions
- 2 hours on the ol' Trello board and in Discord updating unresponsive group members
- 2 hours research, brainstorming
- Total: 16 hours
Finally, for my multimedia reflection, a screenshot representing the new polished state of the project!

ENTRY 08 - MARCH 28TH, 2023
Overview
This week was a great week for development. We've had some major changes to the project and some really good foundation building that allowed for a couple of bugfixes but also created new bugs. Firstly, I overhauled our networking implementation, moving it from the "teleport.js" script into a new script called "networkManager.js", which facilitates the connection to the Circles socket and then handles all of the variables we synchronize 'server-side'. Alongside the networkManager.js script, I also wrote a networkCube.js script, which implements a simple colour-changing cube in the past museum that synchronizes across the network. I used it while debugging the networking implementation, and it will probably be removed in the final product unless we can find a use for the general structure of the code. After the networkManager was up and running, I set my sights on fixing all of our network syncing issues. After some solid programming hours I ended up with the entire teleport sequence synchronized across all clients. This was a big relief, as I feared it might become a big problem... :) To take a break from networking, I added a couple of components to the world and started work on the navmeshes. Those are now implemented but not yet fully functional.
Challenges and Successes
Some of my biggest challenges included parsing the differences between client and server in this scenario and working out the socket.io logic through a few layers of programming libraries... I'll also note that we are still lacking objects in our museums, and I haven't had much communication from my team again this week. We are really going to have to crunch to get all of the artefacts and components of the museum put together in the end. Finally, I can't seem to get my new 'navMeshHandler.js' component to actually modify the 'gltf-model' attribute, which means I can't dynamically swap out the navmeshes. I also worked on implementing an animation for loading, but the bug with the loading screen still remains; I'll have to fix that soon. I could really use a couple of other developers working on this. I have tasked them with bugs to fix and everything, but I'm still lacking participation and am basically doing this whole project myself with little bits of help.
Hours Worked
- 16 hours of programming, bugfixing
- 2 hours updating server, updating circles to new release
- 2 hours working on the Trello board
- Total: 20 hours
For my media reflection this week, I'd like to share this addon for VS Code. If you aren't aware of it yet, you should be using it. It allows you to preview gltf files in various engines right inside of VS Code to verify that the assets are looking just as you want. It's great!


ENTRY 07 - MARCH 21ST, 2023
Overview
After the beta presentation workload last week I needed a break, so I decided I'd work on some simple bug fixes Sunday, Monday and Tuesday this week. On Sunday I updated our fork to the most recent version of Circles and spent a while getting all the networking items functioning again. I then fixed the button-animation-not-showing bug and called it a day, because it was frustrating developing on the train where I would occasionally go through a patch of bad connection and fail to connect to the server (not ideal when trying to bugfix networked things). Yesterday (Monday) I got some much-needed work done on the Trello board and sent a large update to the group with where I'm at (sick, with the ear infections back) and some specific tasks that should be done by Wednesday morning. I organized all of our bugs into a new column, provided descriptions for the final deadline goals and added our Gantt chart to the references column for a sense of progress and where we should be on Wednesday. I figured if I'm sick and not feeling up to much, the least I can do is provide a clear task list. Finally, today (Tuesday) I got my first responses from some group members. Dylan sent in the final models for the museums Monday night, with compressed textures as I requested. Abdallah started working on some bugfixes this morning and ended up resolving the keyboard-disappears-on-enter bug. I then popped in and added some polish, loaded the museum models that Dylan had sent and added a waiting animation while images are generating.
Challenges and Successes
As mentioned above, I had an unexpected amount of trouble developing this project on the train. Over the weekend I went to a wedding and decided that the 6-hour train ride home would be a perfect time to sort out some bugs. I was wrong. It turns out that this project needs an https connection to debug (via ngrok or server-side), and on a spotty train connection that leads to inescapable, immense latency and unpredictable behaviour (not a good environment for bugfixing). I ended up resolving the button click bug and updating the fork nevertheless, but I wish I had been more productive in those 6 hours. I of course had some additional trouble this week contacting my team members. To be fair, I didn't reach out until Sunday, but it still goes without saying that the tasks on the Trello board should at least be attempted. I was always available to respond to Discord messages. I'm somewhat disappointed in the lack of work this week. Finally, while working on adding the new museum models and loading animations today, I ran into some weird conflicts in A-Frame. I tried to add the loading-circle animation as an .mp4 video at first, but I couldn't set a video as a texture, so I pivoted to assigning an animated .gif as the texture. It turns out you can't do that either; there is a library called 'aframe-gif-shader' that claims to allow it, but I couldn't get it to work with modern A-Frame -- it seems to be very out of date. So I went back to using the video, spawning an a-video element when the enter button is pressed and then removing it on a timer. The animations aren't perfectly timed yet, but I'll include an example below.
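That final workaround boils down to: spawn an a-video element on button press, then remove it on a timer. Here is a sketch with the scene and timer faked out so the flow is testable; the element fields and the src are placeholders, not the real project code.

```javascript
// Spawn the loading video and schedule its removal. `scene` just needs
// add/remove, and `setTimer` is injected for testability (in the real
// project it would simply be setTimeout).
function showLoadingVideo(scene, durationMs, setTimer) {
  const video = { tag: 'a-video', src: 'loading.mp4' }; // src is a placeholder
  scene.add(video);
  // Remove the element once the (roughly) expected generation time elapses.
  setTimer(() => scene.remove(video), durationMs);
  return video;
}
```

The imperfect timing mentioned above comes from the fixed duration: the timer fires after a guess at the generation time rather than when the image actually arrives.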
Hours Worked
- 8 hours bugfixing
- 2 hours organizing Trello, updating goals and creating objectives
- Total: 10 hours :( I'd like to go for 20 next week!
For my media reflection, an example of the loading animations I mentioned above:

ENTRY 06 - MARCH 15TH, 2023
Overview
This week was a lot more productive than last. I got a ton of programming done ahead of the beta presentation. The project is really starting to take shape. We have teleporting (time travelling) between museums working, two museums imported, and we've moved the OpenAI API calls to the server side of the application. Furthermore, users can now type with the virtual keyboard to send in an AI prompt and get back an image of what they sent. Despite spending about 4 days bedridden with a particularly painful ear infection, I ended up sprinting through about 20 hours of programming on Sunday, Monday and Tuesday to get our beta online in time for an (early) Wednesday presentation.
Challenges and Successes
As I worked towards the beta I ran into a number of challenges. I had Dylan write the code for teleporting between museums, but he wrote it in plain socket.io, so I had to rewrite it to work within the Circles framework. I also had Abdallah write code for the keyboard component, and he got it working except for the part where it calls the image generator function. I bridged that gap and then we had our project functional, but then the big challenge came...
I realized that when sending a request to get an image from the API, the request (and API key) lived client-side, so every client would have needed their own OpenAI API key. I decided that the API calls (and the key) needed to move to the server side. This led me down a rabbit hole of POST requests and Express.js documentation until I figured out how to make the server respond with an object from the API when requested. I wrote this code in /node_server/routes/router.js. It's just a simple POST route that sends the data from the client side to the server side so that the server can use its API key to grab the image. Finally, I had to implement a CORS fix using Heroku to bypass an error on the API image link being fetched. I definitely did a majority of the work on this beta, but I am happy to have the others contributing logic and art where they can while I act as the 'glue' for this project.
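The gist of that route can be sketched as a handler factory that makes the key handling visible. This is a sketch of the idea, not the exact code in router.js: the function and parameter names are mine, and `generateImage` stands in for the real OpenAI image call.

```javascript
// Build an Express-style POST handler. The API key lives only on the
// server; the client just POSTs a prompt and gets back an image URL.
function createImageRoute(apiKey, generateImage) {
  return function handler(req, res) {
    const prompt = req.body && req.body.prompt;
    if (!prompt) {
      return res.status(400).json({ error: 'prompt required' });
    }
    // generateImage is a stand-in for the OpenAI image API call; only
    // the server ever sees apiKey.
    res.json({ url: generateImage(prompt, apiKey) });
  };
}
```

In the real app this would be wired up with something like `router.post('/image', createImageRoute(...))`, and the client fetches the returned URL without ever holding a key.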
Hours Worked
- 2 hours in team meetings
- 1 hour organizing tasks, liaising with teammates
- 23 hours programming (between Wednesday, Sunday, Monday and Tuesday)
- 1 hour updating the slideshow for our presentation
- Total: 29 hours
Below is a brief gif demonstration of the teleporting mechanic and view into the current state of the project (the ninja that appears is AI generated):

ENTRY 05 - MARCH 7TH, 2023
Overview
This week I didn't actually do much on the term project. I spent a lot of time working on Assignment 3. Probably too much. That being said, here's what did get done this week: the bugs are fixed in the CirclesXR version of the AI museum, Nadia finished modelling a chandelier for one of the museums, and Jelena finished assembling research for our museum exhibits and grouped it into some proposed exhibits. In my free time I have been researching deeper into AI models and playing around with Stable Diffusion.
Challenges and Successes
Over the last week my main challenge was the bugs we were having with the AI art generator in Circles. After some debugging and back and forth with Abdallah, we discovered that the OpenAI API key we use to access the image generator gets flagged as exposed on GitHub whenever we push it, which deactivates the key. We looked for solutions to eliminate this problem, but without extensive research it seems we'll probably just have to copy the key in and out of our local sessions between pushes.
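For the record, the standard fix here (a sketch of the approach, not what we actually shipped) is to load the key from an environment variable so it never gets committed; the variable name below matches the common convention but is an assumption on my part.

```javascript
// Read the key from the environment instead of hard-coding it in a file
// that gets pushed. `env` is injected for testability; in practice you'd
// pass process.env (and keep any local .env file out of git via .gitignore).
function getApiKey(env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error('OPENAI_API_KEY is not set');
  }
  return key;
}
```

Failing loudly when the variable is missing beats silently sending requests with an undefined key.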
As I write this, I've just finished my A3, which means I can work on the term project tomorrow morning; I hope to try out some of the CirclesXR components then. With that complete, our goals for the week will be met. After that, we'll go into a development sprint over the next week to finish the prototype in time for the Beta Presentation (which might have to be 2 days early due to my schedule).
Hours Worked
- 2 hours in team meetings and working with teammates
- 2 hours of programming and debugging Circles
- 0.5 hour managing Trello, updating tasks and giving feedback
- 2 hours Stable Diffusion research
- Total: 6.5 hours.
Here are some interesting images I made with Stable Diffusion this week with a focus on realistic(ish) faces...



ENTRY 04 - FEBRUARY 28TH, 2023
Overview
This week after the deadlines for the alpha presentation and the project proposal, things are moving along slowly. The programming team has been working on migrating our AI image generation demo into a CirclesXR world and making it work there. We are also ready to import some of the models for the museums into A-Frame / Circles and start building our environments in them. In the days leading up to the final deadline for the project proposal and alpha presentation I have to say I was again a little disappointed with the lack of participation. We had a good group meeting just before the presentation where we put together a nice slideshow but I took on a majority of the responsibility when it came to the document and gathering all the pieces in time for the deadline. I assigned some tasks in class on the Friday for the duration of reading week and from what I've heard they're going well. The modelling team has some more models to show, Jelena put together some frameworks around what exhibits we're going to display in the museum and Abdallah and I are passing the repo back and forth working on Circles integration.
Challenges and Successes
Once we had the world migrated into the CirclesXR fork, we had some issues with the AI script. It seemed to be still executing and properly referenced but the image wasn't showing up. I ran some tests on a simple A-Frame world with some of the models we will be using and they seem to work on my computer but I would like to also test it out on the Meta Quest. Generally things have been pretty smooth, just working through some bug-fixing. According to our schedule it looks like we are on time for these deliverables and I'm looking forward to the final assignment being over with so I can really dig my teeth into the term project.
In addition to working on the term project, I've been really trying hard on A3 and trying to learn how best to make the networking work. I also did some research on AI art by setting up my own Stable Diffusion server and trying out some custom prompts. I would like to look into getting this set up through the API and being able to request images from Stable Diffusion as well as Dall-E and maybe Midjourney eventually. Ultimately, things will start to really take off when A3 is out of the way but I appreciate the extra time to work on it as I'm enjoying the challenge of setting up my own multiplayer web server.
One last thing, I accidentally double-booked myself for the team meeting this week and therefore missed the progress update, so there may be more done than I am letting on here. I will find out tomorrow morning!
Hours Worked
- 9 hours working on the final documentation for Project Proposal
- 2 hours working on the slides for Alpha Presentation
- 4 hours debugging, programming, migrating to circles
- 3 hours of AI generation research on various APIs
- 1 hour of team managing and updating task boards
- Total: 19 hours
For my media reflection piece this week, here's a very interesting YouTube video on the history of AI image generation:
ENTRY 03 - FEBRUARY 14TH, 2023
Overview
After working on it a bit in class last week, I am proud to say we have a working prototype of AI image generation in WebXR. We will be cleaning up the demo for presentation on Friday at the Alpha Presentation and are hard at work on our documentation for the Design Document and User Interaction Specification. I met with the team again yesterday and morale seemed very low. I think the Human-Computer Interaction assignment they had all just rushed to finish had tired them out. I find myself in the fortunate position of only having the one class to worry about, so I have been trying to lay out structures and frameworks to help progress on the documentation. I have to be honest, though: I have done most of the work so far. My next step is to attempt to get the team to do some quality work tomorrow morning in class. I know that we will have a good document by the deadline, but I am stressed about the lack of progress between this week and last.
Challenges and Successes
Over the course of the last week the team has been mostly focused on documentation and getting all of our ideas together for the project proposal. We have big ideas and are trying our best to contain the scope here but I think based on some of the modelling and concept art that I've seen from some teammates we are going to end up with a cool looking final product! Challenges mostly fall under the category of the above-mentioned lack of work ethic. I've been trying to reach out individually to teammates to make sure everything about the group dynamic is good and that they feel like their roles make sense. It seems like everybody is happy with where they are, just drowning in other projects both for this class and others. I work well under tight deadlines and I will do my best to push the team to succeed here.
We are happy to have a working prototype though, and based on what Dylan showed us of his assignment 2 demo, we are looking at the potential for a very cool environment to complement it. Abdallah and I have been discussing ideas back and forth for what custom A-Frame components will be required. I still have Jelena assigned primarily to documentation, which she claims she's happy with, but I haven't seen much beyond some introductory paragraphs from her. I plan to have a chat with her this week in class if I can. She also showed up two hours late to last class and then decided to leave shortly after we wrapped up our team meeting. I think something else might be going on in her life interfering with her participation. Last week I also encouraged Dylan and Nadia to have a meeting about 3D asset creation. This was because Nadia had just joined the group and Dylan has big ideas, so I want to make sure that she gets to share in some of the asset creation. The two of them did end up meeting, and Nadia reported back to me that she was happy with the list of assets she had to create, so that's a small success!
Finally, I wanted to mention that for the last two Fridays our team hasn't been in class by choice (a unanimous group vote), working from home instead. This is great, but next time this comes up I am going to try to get the team to work together during that time rather than using it for other projects. Hopefully this will boost productivity.
Hours Worked
- 6 hours collectively working on documentation
- 3 hours in meetings and communicating with teammates
- 2 hours total organizing Trello & documentation for others
- Total: 11 hours
Below is a short video showcasing the final working demo of our AI image generation. Noting how long it takes to load the images, we will need to get good at giving users an entertaining environment to wait in!
ENTRY 02 - FEBRUARY 7th, 2023
Overview
Week two of this project was a little slow. I'm writing this on Tuesday afternoon, and at last night's team touch-base meeting, not much had been done. A lot of the members of the group were caught up in other assignments as well as A2 for this class and hadn't gotten to their deliverables yet. Abdallah and I both worked on our programming deliverables over the weekend, and we have come to the conclusion that it will be possible to generate images with AI in A-Frame; however, there have been a lot of issues and various problems to solve to get here. We are also anticipating that it might involve adding some additional dependencies. The 3D modellers of the group have a few models done but not too much, and our document had seen no progress as of Monday night, but I was promised some would be done before Wednesday.
Challenges and Successes
In terms of my portion of the project, I have been collaborating back and forth with Abdallah on getting a basic implementation of AI image generation working in A-Frame. I chose to spend all of last Friday working on the term project (instead of attending class), and I hit a series of errors along the way. First, I added some primitives to the scene to represent a basic little AI generation machine (image below). Next, I tried to see whether I could set an image's source from a URL (because Abdallah had made a script that generates AI images and returns a URL). I could indeed get a sample image to load from a URL into A-Frame, so with that knowledge I started to integrate Abdallah's script.
I began by attempting to build a component with the 'openai' Node package. This led me to quite a few errors involving the difference between 'import' and 'require' in JavaScript. When I tried to add a require statement to the component, it was not allowed. I then tried adding import statements to app.js and referencing a function there from the component, but that wasn't working either. Eventually I landed on moving the openai script to the root of the directory and using require() to import the library. This worked. So I started building a script called open-ai.js that just includes a function to request the image, made it an export, and worked on a separate component (open-ai-image-gen.js) that calls the function. That didn't entirely work either, for reasons I can't recall, because halfway through troubleshooting I decided to drop the library entirely and just use a REST API call to get the image instead. This fully worked! I was able to send a call, get a newly generated link, and replace the source of the image with that link...
but then I ran into a CORS error (even with crossorigin="anonymous") that prevented me from loading the link from OpenAI, while the other link I had in there loaded just fine. At this point it was nearing the end of my day, so I let Abdallah know about these issues and passed the baton to him. I spent a total of 9 hours working on the project on Friday.
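Roughly, the component I was aiming for looks something like the sketch below. This is a simplified reconstruction, not our exact code: the component name matches the file I mentioned, but the schema and material handling are illustrative, and the endpoint and response shape are OpenAI's documented images API from early 2023:

```javascript
// open-ai-image-gen.js — hypothetical sketch of the A-Frame component.
// The schema, material handling, and key lookup are illustrative
// assumptions, not our final code.

// OpenAI's images endpoint responds with { data: [{ url: "..." }] }.
function extractImageUrl(response) {
  return (response && response.data && response.data[0] && response.data[0].url) || null;
}

if (typeof AFRAME !== "undefined") {
  AFRAME.registerComponent("open-ai-image-gen", {
    schema: { prompt: { type: "string", default: "a painting of a ninja" } },
    init: function () {
      const el = this.el;
      fetch("https://api.openai.com/v1/images/generations", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          // Assumption: key injected globally for local testing only —
          // never ship a key to the client in production.
          "Authorization": "Bearer " + window.OPENAI_API_KEY
        },
        body: JSON.stringify({ prompt: this.data.prompt, n: 1, size: "512x512" })
      })
        .then((res) => res.json())
        .then((json) => {
          const url = extractImageUrl(json);
          // Swap the entity's texture to the freshly generated image.
          if (url) el.setAttribute("material", "src", url);
        })
        .catch((err) => console.error("image generation failed:", err));
    }
  });
}
```

Fetching the generation link works like this; it's loading the returned image as a texture that trips the CORS error described above.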
In addition to Friday, I also spent some time going back and forth with Abdallah over the next couple of days as he attempted to find a solution. He tried some other things with socket.io and some janky workarounds, but nothing quite worked. After the meeting last night I told him about the generation machine I had been working on and how close it was; he ran some tests on it, and now we know the CORS error persists even if we download the file. The bright side is that if you use a Chrome extension to disable CORS, the app works! But we don't think this is a permanent solution and would like to discuss it further with you. I also intend to polish my code and do some further testing with the open-ai.js script I wrote. Maybe there is a solution there.
Hours Worked
- 9 hours of programming on Friday
- 3 hours of additional programming work throughout
- 1 hour meeting with the team
- 1 hour collectively updating Trello & Github
- Total: 14 hours
Here's a screenshot of the simple AI generation testing rig:

ENTRY 01 - January 30th, 2023
Welcome to the first entry of my development blog for Design Studio 3. Not a whole lot has happened with respect to the project so far, but I have managed to form a group. We're calling ourselves the 'Neural Ninjas' because we've chosen to focus on "The History and Future of Using AI for Art" as a topic for our VR experience. The team consists of myself, Dylan Feim, and Abdallah Abou-Chahine. I think it's a well-balanced group. Though all three of us have some sort of art expertise as well as some decent programming skills, Dylan seems to have the best art skills and experience with 3D modelling, so he will likely be fulfilling that role. Abdallah seems to be a good programmer and mentioned that he has a history working with Three.js, which of course is the foundation of A-Frame, so that should be a good asset to have. As for me, I'll act as a wildcard and the team's Project Manager. These roles are not finalized yet, but that's what I'm thinking so far.
We spent some time in class on Friday looking at GitHub Projects and exchanging information. We ultimately came to the conclusion that GitHub Projects didn't have enough features for task tracking and decided to pivot to Trello. While it would have been nice to use GitHub Projects because of the integration with the repository's issues section, we prefer the flexibility and automation that Trello offers. We also went ahead and created a repository for our term project to live in, and the link is provided via the button above. Also in class, we brainstormed some ideas on what to choose as a topic for our term project, and so far we're thinking something along the lines of the history and future of Stable Diffusion, the AI model. We think it could be super cool to create some generative art with AI and also provide some education around the history of AI image generation. As I'm writing this, we have a meeting shortly to do some further brainstorming. I'd say that after the meeting I will have spent around 5 hours working on the project so far.
As a piece of media to share for the first week, below is the AI generated art that we made based on the phrase 'Neural Ninjas' and are using as our group photo.

Also, it's worth mentioning that the name 'Neural Ninjas' was chosen from a list of AI generated suggestions... We're really sticking to the theme! Haha.
END ENTRY