GenCon (https://www.gencon.com/) is a (normally physical) convention for tabletop gamers (e.g. Dungeons & Dragons) with extensions to all sorts of other role play games. It has been running since 1968, when it started in Lake Geneva, Wisconsin, USA, and is usually held in Indianapolis. Due to COVID-19 the event is being held virtually from July 30th to August 2nd, 2020. See https://www.gencon.com/online.
Mozilla Hubs is a virtual collaboration platform that runs in a web browser on desktops, tablets and mobile devices. It can support wireless standalone VR headsets. Desktop VR setups like the Oculus Rift are supported via a WebVR compatible browser (e.g. Firefox). See list of supported devices.
Simple 3D spaces can be selected from an initial range, or created in a tool called Spoke. Other users can be invited to join in the room using a shared URL. The room can have 3D objects, media and video screen facilities, etc.
Lost Horizon has been created by the team behind Glastonbury’s Shangri-La festival within a festival, and in conjunction with VRJAM and Sansar to be “the world’s largest independent music and arts festival in virtual reality. Lost Horizon is a REAL festival in a virtual world. A fully interactive and multi-stage event to explore… raising money for The Big Issue and Amnesty International”.
ShangriLa – Glasto – Day 1 – July 3, 2020
The Tech: I was using Oculus Rift via Sansar to attend the festival. Access worked quickly and without glitches. After arrival at the ShangriLa entry portal, there were teleport portals to the four stages (Gas Tower, Freedom, Nomad and SHITV) and the Art festival area. The performers looked like they were performing against a green screen so they could be placed into the stage area of each performance space, and appeared as 2D video streams on stage. The stage area was unreachable by users so the illusion was maintained.
On entry through the portals to each stage or experience the user was placed in a sharded version of the experience. Users would normally end up in an arbitrary shard. Each shard had a Sansar official helper who was there to answer questions in chat (nice feature), a different official in each shard as far as I could see. There were perhaps 40 to 60 avatars in each experience, though a lot of NPCs were scattered around to make it look much busier. Voice was active in each shard so people could chat (or shout) over the music. Emotes can be used for dancing animations. The entry area can be bookmarked as a “Favourite” location, and if selected prior to teleport, the “instances” available can be listed along with the number of avatars in each, prior to selecting a specific one to enter. The specific instance URL can be shared with friends to let them join you without ending up in a separate shard.
The experience was very well designed and performed better than the VRChat-based Jean-Michel Jarre VR concert on June 21st, 2020 [See Blog Post].
ShangriLa – Glasto – Day 2 – July 4, 2020
The festival continues to work really well. It may be worth noting that larger performing groups need to be careful not to step outside the camera's field of view in front of their green screen; some marks on the floor indicating the area covered by the camera would help. Otherwise the outermost performers can get clipped by the edge of the frame, breaking the immersive effect. That effect is otherwise good, so long as you don't get close to the stage and view it from one extreme side at an angle. Then the performers can look a bit flat 🙂
It is also nice that the community is small enough that you are likely to bump into old friends from Sansar, Second Life or OpenSim, which enhances the community festival experience.
Jean-Michel Jarre and his Avatar engaged in a Live VR performance using the VRChat platform to celebrate Fête de la Musique 2020, June 21st at 21h15 CET. The event took place in the “VRRoom” area of VRChat.
Some notes on the experience… I was using Oculus Rift on Windows 10 with a Xeon processor, 64GB of memory and an Nvidia GTX1080 GPU. After the advertised time there was a 15 minute delay before the event started. I monitored a YouTube 2D feed to see when it actually started as that was unclear in VRChat itself. Even after the 9pm CET time at which users were asked to relog to get the performance space build, I had to relog four times to get into an instance where the performance was streaming. Other instances had a few avatars also looking for a live performance area. Jean-Michel's “avatar” actually appeared as a 2D video projection onto a flat screen behind the synthesizer props in the shared spaces for attendees… with about 20 to 30 people in each shard. The instance crashed once and required a restart and re-entry via the arrival lobby and entry stairs. The sound was fine when in a proper live instance, though avatar-to-avatar chat was audible over the performance and would need individual avatar muting to suppress it. A way to suppress all “attendee” avatar chat and still have the performance stream audible, perhaps on a separate easily clickable button, would make sense for such performances. There were interesting visual effects like swirls of the field of view, projections, warp effects, etc. These were all clearly displayed in the VR headset.
Week seven is the final week of the EdMOLT course.
I caught up on the various discussion forums which I had not been on for 6 days or so.
I also checked if anyone in my “Team” for the joint exercises had checked in on any of the modalities available, and no one had except the course tutor. I suggested that a way to address lack of involvement in some teams might be to ask other teams, mid-course, if they were willing to invite across individuals who found themselves in inactive groups.
A useful resource in the week seven materials is a web site for “An Edinburgh Online Teaching Toolkit – Resources for teaching online”…
Firestorm 6.4.5.58799 (OpenSimulator version) is a beta test viewer which includes support for EEP (the Environment Enhancement Project) and for the Chromium Embedded Framework (CEF) live video streaming features in Second Life and OpenSim. VR mode in this new version acts in the same way as in Firestorm VR Mod 6.3.9. See https://github.com/humbletim/firestorm-gha/releases/tag/v6.4.5-vr-alpha-0
Peter Kappler maintains the Firestorm VR Mod Viewer and his source code modifications to allow the Firestorm Viewer to work with VR headsets at https://gsgrid.de/firestorm-vr-mod/ – go there to download his latest version and for usage information, source, advice on trouble shooting, etc. For community support use the Discord Discussion Channel: P373R-WORKSHOP by p373r_kappler [ Invite ].
On that channel @humbletim has announced an automated scripted build system, with help from @thoys, using GitHub Actions (GHA), which merges Peter Kappler's VR code additions into stock Firestorm and can autobuild a release executable version. He has done that for Firestorm 6.3.9.58205 and 6.4.5.58799. See https://github.com/humbletim/firestorm-gha/. Look under the Releases tab; the installer is under the “Assets” chevron.
These install into their own folder and use their own user settings directory so that the VR Mod viewer can be installed alongside the standard Firestorm viewer. Note that if you want to import existing Firestorm accounts/settings you have to manually copy them over between the AppData/Roaming/Firestorm_x64 and AppData/Roaming/FirestormVR_x64 folders.
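If you find yourself doing that copy often, it could be scripted. A minimal Python sketch, assuming a one-off whole-folder copy is all that is wanted (the function name is my own; the folder names are the defaults mentioned above):

```python
import shutil
from pathlib import Path

def import_settings(src: Path, dst: Path) -> bool:
    """Copy an existing Firestorm settings folder to the VR Mod's folder.

    Returns True if a copy was made, False if the source is missing or the
    destination already exists (so existing VR Mod settings are never clobbered).
    """
    if src.is_dir() and not dst.exists():
        shutil.copytree(src, dst)
        return True
    return False

# Typical Windows locations, per the note above:
#   %APPDATA%\Firestorm_x64  ->  %APPDATA%\FirestormVR_x64
```

Run something like this once before the first VR Mod launch; after that, let each viewer maintain its own settings.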
As usual, Ctrl+TAB initially sets up SteamVR (and HMD support as needed), TAB is used to toggle VR mode on or off, F5 lets you select and step through the various VR HMD or user specific settings for IPD, texture shift to register the left and right eye images, and focal distance to change depth perception, etc. F3/F4 are used to increment and decrement each setting selection.
If you see a lot of hover tips showing under the mouse it could be that the debug setting “ShowHoverTips” is set to TRUE (the default) which may show something constantly under the mouse even for inert unscripted objects. You can turn that off via Debug Settings or via Preferences – User Interface – 3D World – Show Hover Tips.
On the Discord channel @humbletim on 28-May-2020 wrote:
For anyone wanting to compile from source I was able to get a combined Firestorm stock + VR Mod built using Github Actions (much thanks to @thoys for helping figure it out!).
Ahah… I notice a BIG difference when using 6.4.5.58799 VR Mod… as you tip your head from side to side when in VR mode the UI elements like the HUDs and name/group tags correctly stay level. BUT the REFLECTIONS of in-world items ALSO stay level, which looks very odd. In 6.3.9.58205 they correctly stay as a mirror image of the objects. I wonder if the EEP code has altered the graphics layer on which reflections are placed in some way that needs a change to the VR Mod approach? Or if it's a bug somewhere in the core Firestorm/Linden Lab rendering code.
Here are screenshots from FS 6.3.9.58205 VR Mod (left) and FS 6.4.5.58799 VR Mod (right) in VR mode with the HMD tipped to the side… click on images for full size versions…
It may be that Linden Lab moved the reflections onto a different graphics layer and the VR Mod code folks might be able to fix that… but it might also partially explain why there is a big frame rate drop in the EEP viewers… as we already know that turning off the UI layer improves frame rate a LOT… just speculation at the moment, but still… interesting.
Looks like it's an issue in the Linden Lab (and hence Firestorm) EEP code changes, as it can be observed in the standard non-VR Mod viewers. It is rather obvious in VR mode, but can also be seen in a standard view… if you place your avatar so it's at an angle (e.g. on a pose stand, rotated sideways) and go into Mouselook mode you can see the reflections are wrong… setup as in the following image…
Here are screenshots from FS 6.3.9.58205 (left) and FS 6.4.5.58799 (right) in Mouselook mode with the avatar view tipped to the side… click on images for full size versions…
Checking out the materials for week six on Feedback and Assessment.
Checked out whether the group I am part of had made any contact on all channels: the Teams area, Group Discussion Board and Group Padlet. Looks like I am the only member of the group, besides Michael Gallagher the course instructor, that has tried any of the channels.
Start of a new week, start of a new blog post. TBA.
Engaged Learning Communities (Continued)
Group Work on the task of keeping a community engaged… checked out the Edinburgh Model Group 15 channels: discussion board, group e-mail via Teams, Teams and its shared Files area. I even added a pointer on the group padlet in case anyone sees that. But I see no activity at all except the input of the EdMOLT course leader Michael. Perhaps this is a good example of less than active interest in group work in some online learning communities and the lack of community in the channels available 🙂 Discord works well for something like this.
EdMOLT Week Five Drop-in Session on Blackboard Collaborate
Roth2 v2 Revision 2020-05-24
Based on Blender Mesh from https://github.com/RuthAndRoth/Roth2 (was DRAFT8_4)
Use a viewer which supports Bakes on Mesh, e.g. Firestorm.
Roth2 is a low-poly mesh body specifically designed for OpenSimulator which can also be used in Second Life™. It is built to use standard Second Life UV maps, using a scratch-built open source mesh by Shin Ingen, Ada Radius and other contributors from the RuthAndRoth community. Roth2 v2 is the second version of the mesh avatar, updated to be built and rigged using Blender 2.8, with improved documentation of the workflow to make it reliably repeatable and with credits to all the asset creators involved.
OpenSim:OSGrid RuthAndRoth Region hop://login.osgrid.org/RuthAndRoth/128/128/26
Roth2 v2 is provided as a single mesh that is designed to work well with Bakes on Mesh. It has a simple alpha capability without needing separate mesh parts, and alpha masks can be worn to give more control over hidden areas. Rather than using Bakes on Mesh, skin textures may be applied directly, but you should then add a full body alpha mask to hide the underlying system avatar.
The “Roth2 v2 Mesh Avatar” box contents are designed so that they form a complete initial avatar using Bakes on Mesh. You can switch to your own shape, skin, eyes and hair and/or use the HUD to change your appearance. Some example skins, hair, clothing and a range of alpha masks are provided in the “Roth2 v2 Extras” box.
Roth2 v2 uses a single combination HUD, created by Serie Sumei, for alpha masking, skin and eye texture application and other features. The skins and eyes that are available are set via a notecard (!CONFIG) in the Contents of the HUD, which can be edited to incorporate your own skins (10 slots are available) and/or eye textures (5 slots are available).
The skin Alpha Mode can be changed between Alpha Masking with cutoff=128 (the initial setting) and Alpha Blending. Depending on the alpha mode used on hair, clothing or other attachments that use partial alpha, it may be useful to change the setting used on the mesh body to avoid some parts not displaying correctly.
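For the curious, the difference between the two modes can be sketched in a few lines of Python. This is illustrative only, not the viewer's actual rendering code; the 128 cutoff matches the body's initial setting:

```python
# Alpha Masking decides per pixel: fully opaque or fully transparent,
# based on a cutoff. Alpha Blending keeps the fractional alpha value
# and mixes the texture with whatever is behind it.

def masked_alpha(alpha: int, cutoff: int = 128) -> int:
    """Alpha Masking: binary visibility per pixel (0 or 255)."""
    return 255 if alpha >= cutoff else 0

def blended_alpha(alpha: int) -> float:
    """Alpha Blending: the texture alpha becomes a 0..1 mix factor."""
    return alpha / 255.0
```

So a texel with alpha 100 is invisible under masking (100 < 128) but about 39% visible when blended, which is why a partially transparent attachment can interact badly with one mode but not the other.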
Roth2 v2 – Mesh Avatar – This is the normal distribution box, designed so that once unpacked its contents can be “worn”. It does not contain a basic “classic” avatar shape, skin, eyes or hair, so the user's own existing underlying avatar setup is used. The Extras box contains examples of the basic avatar items to select from if needed.
!README and !LICENSE
Roth2 v2 Full (Body+Feet+Hands+Head)
Roth2 v2 Eyes
Roth2 v2 HUD
Initial skin, shape, basic eyes and basic hair
Dark gray underwear
A special version of the Roth2 v2 Mesh Avatar for OpenSim 0.8.2.1 is being prepared to address script elements which are incompatible with this earlier OpenSim version (from December 2015) which is still in use on grids such as 3rd Rock Grid and Littlefield Grid.
Roth2 v2 – Extras – This is a box of useful extra elements and options.
!README-EXTRAS and !LICENSE
Roth2 v2 Body (only)
Roth2 v2 Feet
Roth2 v2 Hands
Roth2 v2 Head
Roth2 v2 Headless (Body+Feet+Hands)
Roth2 v2 Head+Vneck (section of body)
Roth2 v2 Elf Ears
Dark grey underwear briefs and jacket length top
HUD debug script
Roth2 v2 – Resources – This box is not normally needed. It contains textures and other resources with the original UUIDs as used within the other assets. This can be useful for moving the assets across grids, or to repair elements.
!README-RESOURCES and !LICENSE
All skin and eye textures used in default HUD
Clothing – Underwear
Roth2 v2 – Mesh Uploads – This box is not normally needed. It contains mesh for all Roth2 v2 elements as originally uploaded and before attaching a root prim or any texturing.
!README-MESH-UPLOADS and !LICENSE
Collada (.dae) Mesh for all Roth2 v2 elements as originally uploaded and before part renaming, attaching a root prim or any texturing.
KNOWN ISSUES AND TROUBLESHOOTING
There may be a small gap or seam at the neck joint between the mesh body and the classic avatar or add-on mesh heads.
Not all the appearance sliders will work on the mesh body and parts.
Roth2 v2 with an attached Bento head will work with most shapes. The headless body, for use with the system head or another mesh head, will work well with the sliders except body fat, and extremes of neck length and thickness, because of the neck seam. There are a few head sliders that don't work: Head Shape, Ear Angle, Jowls, Chin Cleft. Things on the list for another release sometime down the road: figure out the neck issue, improve pointy ears.
Foot skin problems? For best results, paint over the system toenails and remove as much detail as you can from a foot skin that was probably designed for the system avatar's duck feet.
HUD issues? The Extras box contains a HUD debug script. Add this to the HUD contents to allow for a long mouse press to bring up menu with diagnostic and further options.
Please contribute via the GitHub Repository and send your feedback by posting to the Discord Channel.
The main Roth2 v2 mesh components have an AGPL license and other components have Creative Commons or other open source licenses. Basically, you can use and distribute the materials as you wish, but any modifications to the AGPL meshes that are distributed or made available in a service must be made publicly available at no cost and released under the same terms granted in the LICENSE.
Various Authors and contributors to the Git Repository in alphabetical order are:
Other contributions and testing by members of the OpenSimulator and RuthAndRoth Communities.
The ‘R2’ logo may be used to indicate projects or products that are either based on or compatible with the RuthAndRoth project mesh bodies.
4.3 Design Brief Action Plan – Edinburgh Model Group 15
A task to explore collaborative development of a course plan to present to a board of studies to show how the team would involve students…
You are part of a new course team developing a course to be delivered online at the start of the next academic year. Due to student feedback and high online attrition at other universities, your Programme Board is particularly keen to support online courses that help students become active participants in the University of Edinburgh community. You have to prepare an action plan for your Programme Board. The action plan will outline how teaching staff will ensure that students on the new online course feel part of the University of Edinburgh community and a part of their course and programme cohort.
Trying Microsoft Teams for coordination between the 4 team members and a course leader. Observation AT: Microsoft Teams is not a great tool for making team members aware of activity or fostering a sense of community or team spirit. I never turn on notifications from any tool as I am involved in so many online communities, so tools that are poor at summarising the status and progress of discussions or work items when you return possibly after some time away don’t work very well for me.
Some Inputs and thoughts… Our Community Orientated Online Course
1. a (brief) summary of the cohort demographics and geographical locations of the cohort as you understand them to be. This needn’t be precise, just a sketch.
Idea AT: Shall we assume international, multi-time zone, multi-cultural, multi-age?
2. a (brief) overview of community specific activity that you will design into this new course, and the virtual spaces where this activity will take place. You can focus on the community (University of Edinburgh), the cohort (the fellow students within this specific course or more broadly on your programme) or both.
Idea AT: How about getting them all to create their own digital artifact related to how the subject matter of the course relates to their own personal interests. Any format, any media. They briefly explain their personal interest and then how the subject matter or readings in the area might apply to or be used in their area of interest. When (or if) ready they can invite classmates to look at their artifact and give constructive inputs and ideas. A weekly virtual get-together that runs over a 48 hour period across all time zones, in a persistent meeting space with poster boards round the walls, will allow anyone willing and ready to put up a poster with a URL to their artifact, indicating they are open to inputs from classmates. Classmates can make asynchronous inputs about posters to help their classmates. Some encouraging inputs and gentle persuasion from class leaders will be made as appropriate to keep up some momentum. Occasional videos can be made to allow those without easy access to the chosen platform to see what is happening. Ask for community volunteers to act as a bridge into the platform and posters for those without suitable access or skills.
3. a (brief) overview of how you might know the activity was working in fostering a sense of community. Again, no need to be elaborate here but rather just a few metrics you might be looking for.
Idea AT: Look at how many people get to the stage of making their artifact accessible via a URL. Then how many take it to the stage of making a poster in the persistent meeting space for weekly sessions. And how many people actually comment and give feedback.
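Those three counts form a simple engagement funnel that could be tracked week by week. A hypothetical Python sketch (the helper name, stage names and numbers are made up for illustration):

```python
def funnel_rates(counts):
    """Given ordered (stage, count) pairs, return the conversion rate
    from the first stage to each later stage."""
    base = counts[0][1]
    return {stage: n / base for stage, n in counts}

# e.g. of 40 students who shared an artifact URL, 25 put up a poster
# and 15 went on to give feedback to classmates:
rates = funnel_rates([("artifact URL", 40),
                      ("poster", 25),
                      ("gave feedback", 15)])
# rates -> {"artifact URL": 1.0, "poster": 0.625, "gave feedback": 0.375}
```

Watching where the rates drop between stages would show which step of the activity needs more encouragement.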
4. an indication of how your plan caters for inclusivity and accessibility.
Multiple time zone support, dip-in access, any level of involvement, no need to go public with the artifact.
4.4 The 3 C’s – community, campus, cohort
Support for Communities of Practice. Notes…
We supported a community of several hundred people (a volunteer sub-group of a world wide community of some 1,300 experts in academic, government and industry) involved in supporting government agencies and multi-national efforts in emergency response situations. We explored and conducted usability and effectiveness evaluations of a set of tools for distributed collaboration suitable for supporting their inputs and activity in both training and live emergency contexts…
Whole of Society Crisis Response Community exercises e.g. with USJFCOM, Army Research Labs, Virginia State, Hampton City and others…
I actually started week three by catching up on the last two modules in week two. The comments on teacher presence in 2.9 were interesting, as I found that we spent a lot of time interacting with students and the weekly guest feature lecturers on the Coursera AI Planning MOOC we ran.
3.2: Time Online in Distance Education – Synchronous and Asynchronous interactions
I am keen on supporting both synchronous and asynchronous interaction in online educational contexts. We have in the past conducted research on distributed collaboration more generally and explored how communities interact, the tasks they perform together and the tool types that may support these tasks (Tate et al., 2014). So I found Watts (2016) interesting.
3.4: Contact time, expectations, and indicators – Padlet
I put a comment into the Padlet for this activity along with the Task/Cognitive Work Analysis figure from Tate et al. (2014) as shown above… “Task Accomplishment versus Time Online – It would be nice to see the tasks the students are tackling (or that the course leaders believe they SHOULD be tackling) and their level of accomplishment of those tasks as a measure of engagement and progress.”
My dissertation for the MSc in e-Learning (Tate, 2012) covered some aspects of activities and tasks and ways to support them for distributed teams working in various areas such as emergency response, and especially the training of such teams. An overall approach can be adopted from the “5E Instructional Model” (NASA, 2012) with a flow of Engage, Explore, Explain, Extend and Evaluate. Within this higher level cycle, a very useful set of learner activities specifically relevant to situated and social activity in a community of practice has been developed by Soller (2001), as shown in figure 6.1.
The educator can add appropriate constraints (limited by the activity which is possible via the affordances offered) and inject relevant events for learners to respond to (as shown in figure 8.1).
Hahaha… I should have read ahead.. this is a topic that interests me.. as can be seen from the above comments.. I tend to write these blog posts as a log of my learning activity and interaction with the course material.. hence my previous annoyance if they time out in the WordPress platform! I am saving blog post changes frequently after that little faux pas.
3.8: Case Study for Dealing with Multiple Time Zones
No single time works across the world, but several clustered sessions can work well if you can get decent and reasonable overlap between the participants. Having a persistent space where the meetings take place, and where artifacts can be “pinned” and seen by all participants, can be helpful. But you know where I am coming from there, with virtual world technology 🙂
NASA (2012) “5Es Overview: The 5E instructional model”, NASA Education Web.
Soller, A.L. (2001) “Supporting Social Interaction in an Intelligent Learning System”, International Journal of Artificial Intelligence in Education (IJAIED), Vol. 12, pp. 40-62.
Tate, A. (2012) ‘Activity in Context’ – Planning to Keep Learners ‘in the Zone’ for Scenario-based Mixed-Initiative Training, MSc in e-Learning Dissertation, Moray House School of Education, University of Edinburgh, 9th August 2012. [PDF Format]
Tate, A., Hansberger, J.T., Potter, S. and Wickler, G. (2014) Virtual Collaboration Spaces: Bringing Presence to Distributed Collaboration, Journal of Virtual Worlds Research, Assembled Issue 2014, Volume 7, Number 2, May 2014. [PDF Format]
Watts, L. (2016). Synchronous and asynchronous communication in distance learning: A review of the literature. Quarterly Review of Distance Education, 17(1), 23.
The blog post said it had timed out when I was updating it after adding three or four paragraphs. That is very bad. I find it's really difficult to find the motivation to repeat a lot of comments after they have been made once.
It is Thursday of week 2 and I have just carved out some time to look at the Week Two course materials. I am engaged in an international collaborative project to produce open source educational resources for collaboration in OpenSimulator and Second Life. I am acting as a coordinator, editor and tester, and this week a lot of the materials have started to come together.
2.2: Into the Unknown: On the “Unknowns” Padlet activity I noted the “unknown”, as I see it, of having little sense of who else is engaged with the study group or class. Maybe I need to dip again into the Discussion Forums.
Discussion Forums: Just dipped back into the discussion forums. The system nicely and clearly marks replies or comments you have had on previous posts, so you can go and read or comment back on the inputs. But I don't at the moment see a simple way to get back to my own previous inputs or comments. I am sure that functionality must be there somewhere.
Claudia commented that Second Life was not up to much… so I pointed her at a blog post written for a Head of School in the University who wanted to know how we used virtual worlds in the University…
2.3 and 2.4: Disengagement Cases: Having to retype and remember what I said, since my previous updates were lost when post editing “timed out” and no history of the changes was in the revisions list. I thought WordPress occasionally saved in-progress versions there. Maybe not.
Essentially my thoughts were that student self help and off course communication channels were good, but some way to ask for friendly students to bridge the gap and raise issues they see in a way that does not name individuals would be good. Drop in sessions timed so that they work for those with home and social responsibilities, or perhaps travelling (in the past!) would be helpful.
2.7: Mitigating Transactional Distance: As you know I am keen on persistent shared social and interaction spaces. So for the “Padlet” activity I suggested a “virtual coffee room” (our American colleagues might call it the “Water Cooler” area) – have a shared persistent space where folks can meet, chat, post notes and gain some sense of community and continuity. Link from this out to the online resources, giving them a “physical” point of reference.
2.8: Teacher Presence: is important. Comments on blog posts are a great way to make students feel they are involved and that the teacher is interested and present.
2.9: I actually did not get to this until the beginning of week three. The comments on teacher presence were interesting, as I found that we spent a lot of time interacting with students and the weekly guest feature lecturers on the Coursera AI Planning MOOC we ran.
Screenshots – To utilize Nvidia ShadowPlay if you have a suitable GPU, just press the keyboard combo Alt + F1 to take a screenshot, or tap Alt + Z while in-game to bring up the ShadowPlay settings for when replay videos are automatically saved. Location is usually C:\Users\name\Videos\Fortnite. It does not work in all modes as ALT brings up a game HUD.
Dillon Francis, Steve Aoki, and deadmau5 are coming to Party Royale with back-to-back-to-back sets LIVE on the big screen at the Main Stage. Hit the dance floor, chill with friends, or jump into activities in Party Royale (8-May-2020). To join the party, select the “Party Royale” playlist in Battle Royale.
Launched today (30-Apr-2020) is HTC Vive Sync (https://sync.vive.com/) – which the web site states is an “all-in-one meeting and collaboration solution for VR. With VIVE Sync, it’s easy to customize your avatar, create a private meeting room, and begin working face-to-face with colleagues around the world. And with our suite of 3D interactive meeting tools, you can review 3D interactive content in ways that have never been possible”.
Currently, Sync only supports the Vive ecosystem of headsets – the HTC Vive, Vive Pro, Vive Focus and Vive Cosmos. HTC says it plans for future upgrades to the tool to include support for Oculus Rift, Oculus Quest, Valve Index and Windows Mixed Reality headsets.
1.8: I am an enthusiastic supporter of avatar-based virtual worlds (such as Second Life and the open-source OpenSim platforms) as shared collaborative spaces, and they work well in a lot of educational contexts… I have used them for class meeting and discussion spaces, seminars, brainstorming sessions, etc.
These spaces can be set up so there are many separate isolated spaces, and can even allow student subgroups, say those speaking a specific language or those from some specific region to have their own space with isolated text chat and voice facilities. These can be themed to be fun spaces for those who like that.. say a coffee area, campfire setting, a beach with a lovely sunset view, or perhaps out on a yacht in the bay.
Students can hang out in these spaces, drop in, etc., and the sense of presence and ways to share information can increase (see Tate et al., 2014). Some may find the abstraction of an avatar odd, especially at first and if they are not used to playing a character in online and computer games. But this level of abstraction and spatial separation can also be of benefit, especially in some cultures that may not support some forms of social interaction.
1.9: Too many folks who have not used online platforms and been involved in collaborative community orientated courses and activities think online is a poor alternative to “face-to-face” teaching. They assume sole learners in isolated contexts working alone with lists of videos, some even simply recorded from the back of lecture theatres! This is FAR from my own experiences. I believe online community orientated learning in a mix of synchronous activities and asynchronous study is a preferable alternative to typical lecture theatre and group tutorial style activities which suit some but not all learners.
1.10: See above… and why not have a virtual worlds “Edinburgh” location to anchor the experience of “being at Edinburgh” for our students… see Virtual University of Edinburgh.
1.11: Blogging… and here we are… the end of week one. I am already a blogger and often use blog posts to bring together resources, links I want to recall later, notes, screenshots of software and tools I try, hints to help use those tools, etc.
My reflection on the week is that I have no sense at all yet of the other people involved… beyond a few scattered entries I have seen on the Padlet activities in week one, many of which give me no idea of who is in the community, what their interests are, or even the size of the community. The discussion forums on which I commented are so busy that I have not (yet) got a sense of how to use them wisely or quickly catch up. I think having a large number of threads and topics is unhelpful in this respect. We have found that also when using Discord for some online communities. Some communities have just a few threads that people can see and catch up with or skip. Other Discord communities have massive numbers of super fine-grained topics and it's too much to get an overview of what is happening.
Blackboard Collaborate Drop In Session
On 1-May-2020, the first of a planned weekly “drop in” teleconference session took place in Blackboard Collaborate. I used Firefox on Windows 10 to run this, as I have previously found that Internet Explorer and Microsoft Edge have issues accessing my microphone.
2.x Peek Ahead
I see that week 2 and 3 materials are already in place. I am not sure that is a good idea and would like to see discussion on the value, or otherwise, of this. It means some students may race ahead and comment on forums, etc., out of sync with others in the community. I appreciate we want a high degree of asynchrony to accommodate individual participants’ time and availability… and it’s good to have catch-up periods to get the community in sync at certain points. When the course description said there would be 4 weeks of study over 7 weeks, that is what I thought might happen.
Tate, A., Hansberger, J.T., Potter, S. and Wickler, G. (2014) Virtual Collaboration Spaces: Bringing Presence to Distributed Collaboration, Journal of Virtual Worlds Research, Assembled Issue 2014, Volume 7, Number 2, May 2014. [PDF Format]
I added a couple of entries to the “Padlet” world map showing where people have been, are, and are going. It’s the first time I have used Padlet. My initial observation: clicking on a pin to see what popped up, with no context other than its geolocation, was not very helpful. A short label on each pin, and a popup of the contents on hover rather than having to click and dismiss the contents of every pin (there are hundreds), would have helped.
I like to get a sense of the community involved right at the outset and this activity did not really help me do that. I turned to the discussion forums to see who was involved and what their inputs were.
It’s been only one day since the course started and there are already far too many separate forums, threads within forums, and entries. It is difficult to get any sense of how to approach this. There is so much material that the status of “unread” posts is already not very helpful. There are many questions there, and it’s unclear how they relate to the course week 1 items and questions, and whether they are supplementary or the same. Being at Activity 1.3, I don’t yet know if those questions are worth looking at or if I will find them in 1.3 to 1.12 for week 1.
The discussion forums were more useful once I had cleared the backlog of activity from the last 24 hours. Volume in these sorts of forums is an issue, as are too many concurrent threads. A focus on a thread or two would work better.
Padlet was also used for the activity in module 1.7, and this made sense to share the ideas participants had about various online teaching spaces they might use or envisage…
Welcome to my blog for the Edinburgh Model for Online Teaching Course (which I will abbreviate to EdMOLT). I am a Professor in the School of Informatics and have an interest in teamwork and remote collaboration. I have been involved in distance education for a while and have run a Massive Open Online Course (MOOC).
Fred Beckhusen of Outworldz and his team have done wonders again with another fine OpenSimulator-based region, released as an OpenSim Archive (OAR) licensed only for use on the DreamGrid distribution. Fred’s post on the MeWe – Outworldz Projects group on 28-Mar-2020 gives more details and the download link (not posted here so people should go to the MeWe post to understand the restrictions on usage).
The Hobbiton region and OAR is based on “The Hobbiton Collection” originally created by David Denny. This is an exclusive sim just for DreamGridders to use. David sells sims, so anyone who wants this to run on their own grid can contact him. The description below is adapted from the region’s notecard and includes credits for elements used.
Fred Beckhusen, Debbie Edwards, Joe Builder, and David Monday worked to make the sim work smoothly, be totally free and also very beautiful. David Denny did a wonderful job on the layout and the plantings, which the team tweaked only a little bit. Fred redid a lot of the physics for smooth riding of horses and carts.
This is a 3×3 region (768m x 768m) with lots of places to go. There are Hobbits, Elves, Orcs, Trolls, Ents, Caves, Dragons and some surprises done in Animesh and NPC format. David Monday re-built the original Satyr Farm so that it mostly runs itself. The team tested most of the plants and there is even a custom-built candle-making shoppe. Fred replaced the Green Dragon Inn and the windmill with custom mesh buildings. The Green Dragon Inn is based on the current restaurant at the original set, and having been there in real life (in New Zealand near Matamata, see image below) I can say it’s a fine replica (the set was extended for the Hobbit filming).
Walk the trails, look for the signs that say Photo Spot, and post some pics on MeWe or Twitter. Try not to get run over by the Ent, or eaten by the wolves, and watch out for Gandalf’s pony cart. There are teleporters for those who want to cheat and not explore on foot. Hint: Take the boat to the cave going North on the second waterway to the west.
Once an hour or so, Smaug will swoop down and toast your cows, which is worth waiting and watching for! If you don’t see him, TP to him via one of the teleporters and click his box to boot him up. Fred left a lot of NPC boxes visibly out as this is a Beta to test how it works. Provide feedback on MeWe Outworldz projects Group.
Hobbit of Hobbiton
The Hobbiton area contains a number of distribution boxes for avatars, one of which is a Hobbit… the Sting sword is from my own resources…
Adjusting the Sea Level
The Hobbiton OAR originally had a non-standard sea level of 24m. It can be adjusted to the usual OpenSim default level of 20m by loading the OAR with a Z displacement of -4m.
load oar --displacement "<0,0,-4>" Hobbiton.oar
This will leave the area flooded, as the region sea level is not automatically adjusted.
But after setting that in World -> Region Details -> Terrain -> Water Height the floods recede…
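Putting the steps together, the whole adjustment can also be sketched as a console sequence (this assumes the region is named “Hobbiton”; the availability and exact form of the `set water height` command varies across OpenSim versions, so check `help` on your region console first — otherwise use the viewer’s Region Details dialog as described above):

```
change region Hobbiton
load oar --displacement "<0,0,-4>" Hobbiton.oar
set water height 20
```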
A couple of “vehicles” (two Viking boats) needed their positions changed by -4m and their scripts reset, as did the “route” notecard in Gandalf’s Pony and Cart. There had been some warnings of missing textures on region startup, which were easily tracked down and corrected too. Fred Beckhusen also advised that the Teleporter script was not checking for more than 12 locations and thus showing script errors. The “Teleporter Prim” script in each teleporter pad was amended to ignore any entries beyond 12 buttons.
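The actual “Teleporter Prim” script is not reproduced here, but the nature of the 12-button fix can be sketched in LSL (hypothetical function and variable names; the key point is that llDialog() accepts at most 12 buttons, so any extra destination entries must be dropped before the dialog is shown):

```lsl
// Hedged sketch of the 12-button guard; names are illustrative,
// not from the real script.
integer MAX_BUTTONS = 12;

list clampButtons(list buttons)
{
    if (llGetListLength(buttons) > MAX_BUTTONS)
        // llList2List is inclusive of both indices, so 0..11 keeps 12
        buttons = llList2List(buttons, 0, MAX_BUTTONS - 1);
    return buttons;
}
```

Passing the clamped list to llDialog() avoids the script error while leaving the first 12 destinations usable.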
Project Leader: Fred Beckhusen, Outworldz, LLC.
The Hobbiton Collection is Copyright 2019 by Outworldz, LLC. and is licensed for free use only in the DreamGrid software. If you wish to use it outside the Outworldz system, please contact David S. Denny, firstname.lastname@example.org, who created the original collection about purchasing it. Many Thanks to Clarice Alelaria, David Monday, and Joe Builder for their contributions.
The farm scripts are licensed by Satyr Farm under the Creative Commons Attribution-NonCommercial (CC-BY-NC) 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. You are free to modify / create your own items for non-commercial reasons.
To the team’s knowledge all the objects and meshes are Freebies and they thank the makers for their work.
“Space Mountain, inaugurated at Disneyland Paris on June 1, 1995, is a ride based on Jules Verne’s novel ‘From the Earth to the Moon’. Passengers board a moontrain and are loaded into the Columbiad Cannon which propels them into space. After avoiding meteorite showers and going through a giant asteroid, they reach the Moon, inspired by a movie by Georges Méliès. They are attracted briefly by it before plunging back into a series of tight turns. Slowed down by a contraption called “Electro de Velocitor”, they smoothly regain Earth.”
Another fine virtual world build from Fred Beckhusen/Ferd Frederix and his team (Debbie Edwards aka Nyira Machabelli, Joe Builder and Avia Bonne) to recreate Ancient Egypt, specifically Alexandria in 30BC. It is located on the OpenSimulator-based Outworldz grid’s Alexandria region (a 4×4 region, so just over 1km square).
A common issue in teleconferencing is testing your headset and microphone to make sure it’s working and not giving feedback to others (something often only YOU cannot hear). As more people engage in teleconferencing and virtual world systems, voice testing is important for being ready for virtual world meetings. The Second Life virtual world platform provides a nice test area where you can set yourself up and check your headset is working without needing a partner to assist… go to
A white dot appears over your avatar when voice is enabled. The microphone icon, usually in the bottom button bar unless you have moved it, can be pressed to speak. In small-group or 1-1 voice situations you may lock it on with the little square checkbox in the top left of the mic button, but it’s best to leave your mic muted when not speaking.
Notice the bars that appear over your avatar’s head as you speak… if they are green and show three or four bars each side, your levels are probably good. Five bars is nearing maximum volume. If red bars show, your input is too loud and your voice is likely to be distorted when heard by others.
Back in 2011, Kirstens Viewer for Second Life added an Anaglyph 3D view capability which could be seen with red/cyan 3D glasses. See this blog post. It was updated to the latest Second Life features in 2017. See this further blog post.
Kirstens Viewer has now been updated (10-April-2020 r1290) by Kirstenlee Cinquetti to use the very latest Linden Lab viewer code and includes experimental or development features such as Legacy Profiles. A fix for some issues (see below) was released on 12-April-2020 (r1300). Kirstens Viewer S23 6.23.1328 with 64-bit with other enhancements was released on 20-April-2020. Check on the latest developments as other updates have since been made, such as adding EEP support.
You can download the latest version of the viewer (32-bit Windows .exe only) using the “Download Now” link on Kirstenlee’s blog page… in the left hand column scroll to the bottom…
Click on any of the images below to see the full resolution version, make it full screen and view with Red/Cyan 3D glasses…
3D anaglyph mode needs to be off before login. If it is left on and you stop and restart the viewer, it will not enter Second Life. You can start the viewer and toggle 3D anaglyph off in Preferences – S22 Features before logging in. A fix at commit r1294 addresses this by always forcing stereo mode off before login. It looks like this if it sticks…
Also, when the Graphics settings for water reflection are set to anything other than “Minimal”, artifacts appear in a small rectangle in the lower left corner of the screen…
Kitely Organizations are a way to create a virtual grid inside the OpenSimulator-based Kitely grid. It allows groups to create and manage their own users, with control over which regions they can visit and what they can do in-world. A Kitely Organization provides administrative capabilities that enable the management of groups of users and worlds under the organization’s control. Kitely’s Organizations are designed for companies, educators, roleplaying groups, etc.
Managed Users – users that are created by the Organization. The Organization Admins have full control over these users. Managed Users can’t login to the main Kitely website.
Independent Users – regular Kitely users who have agreed to join the Organization. The Admins only control what Independent Users do when they’re visiting the Organization’s worlds. However, the Admins can’t control what Independent Users do outside the Organization.
This allows the Firestorm viewer to be installed (if it is not already present), an initial avatar to be selected, and the Organization’s grid details and avatar username to be added, to make entry easier for new users.
The grid LoginURI is of the form grid.organizationname.kitely.net:8002
RGU Neosome Kitely Organization
Kitely Virtual Worlds on Demand™
Kitely uses a mechanism of loading virtual worlds “on demand” so they use less server resources when not in use… if the world or region is not online when the first user arrives, their avatar appears at a “Kitely Transfer Station” for a minute or so until the region is loaded, at which point the avatar is automatically teleported into that world.
RGU Neosome Oil Rig Immersive Training Environment
Login and look round, you usually arrive at an OSGrid Plaza. Follow the arrows on the floor to get an orientation and pick up starter avatars.
Open the Map (Ctrl+M), find the “OpenVCE” region and teleport there.
A virtual world social space for AIAI use is available on the OpenSimulator-based OSGrid platform. This is to help AIAI members maintain contacts and have meeting spaces to share ideas and resources while the University of Edinburgh physical premises are unavailable. AIAI members without an existing OpenSim grid avatar should obtain a free one on OSGrid (the OpenSim community grid) via http://osgrid.org.
The OSGrid OpenVCE region is open to look around, and is accessible from any Hypergrid-enabled OpenSimulator grid using a map tool search for “http://hg.osgrid.org:80 OpenVCE”, or via this “hop” in viewers which support that (e.g. Firestorm):
The facility uses the OpenVCE OAR, a ready to load open source virtual collaboration environment with a range of formal and informal meeting spaces, instrumented meeting rooms, exposition facilities, etc.
An AIAI group has also been established on OSGrid and can, if necessary, be used to restrict availability of some of the facilities or be used for group voice chat.
The original work to create the OpenVCE region was done on the US Army ARL HRED funded Virtual Collaboration Environment project by AIAI using Clever Zebra as a contractor/3D modelling group.
Tate, A., Hansberger, J.T., Potter, S. and Wickler, G. (2014) Virtual Collaboration Spaces: Bringing Presence to Distributed Collaboration, Journal of Virtual Worlds Research, Assembled Issue 2014, Volume 7, Number 2, May 2014 [PDF Format].
A recent check on the 3D models of Gerry Anderson’s Supercar and Black Rock Laboratory in OpenSim, on Black Rock region on OSGrid…
The models used in OpenSim are based on the original Cinema 4D models created in the late 1990s by Mick Imrie (mostly) and Austin Tate, and subsequently ported to Studio 3D Max by Mateen Greenway. See the Supercar 3D Models Page and these Construction Notes by Mick Imrie for more details.
Over 20 years ago Austin Tate worked with Shane Pickering in New Zealand to try to create the interior technical details for Supercar, consistent with the TV shows and annuals, etc. Shane had aerospace engineering knowledge and was a pilot… http://www.aiai.ed.ac.uk/~bat/GA/supercar-cutaway.html
After a few minor adjustments to the nose cone and vanes area, Supercar was out for another spin in Second Life on the Bellisseria Continent. Making use of the boat/vehicle rez zone near the lighthouse at Norse Auk and flying down the East Coast to my Ai Pad houseboat on the Damiano region.
And a shot with the 360 degree Snapshot viewer…
Supercar in OpenSim
I also put in place a small change to the control room in Black Rock Lab on the Black Rock region of both OSGrid and AiLand grid to better match still snapshots from the TV series…
Supercar puppet scale model made by Andrew Grimshaw of Wigan and displayed in the Dimension X Sci-Fi and Fantasy collectables shop in Hoylake in the Wirral near Liverpool, UK (at Unit 9A, The Quadrant, Hoylake, CH47 2EE) from around October 2016 to February 2017 (the shop opened in April 2016 and has since closed). The model was listed on Twitter as “For Sale” in April 2017; its current whereabouts are unknown (unless you know otherwise?). Andrew worked as a model maker for the Thunderbirds 1965 Kickstarter-funded project to create three new episodes of Thunderbirds based on audio stories. He also created a Thunderbirds FAB1 model for a TV advert for the Halifax.
Andrew Grimshaw of Wigan created the Model on behalf of Bruce Skelly (pictured to the right) who owned Dimension X. Some details of the construction of the model are in a blog post from West Kirby Today… By Emma Gunby on 27th September 2016:
Bruce Skelly, who runs the modelling emporium DimensionX, has discovered the long-forgotten plans for the original Supercar and is painstakingly recreating the vehicle, which will go on display at his shop in October.
He added: “No one knows where the original Supercar is, I think it must have been lost.
“One of my contacts found the original plans for the model from the series and so I am rebuilding it in all its glory to go on display in the shop.”
The Supercar model, which is estimated to be worth around £1900, is set to go on display at Dimension X in October alongside an original Fab 1 – the famous pink Rolls Royce driven by Lady Penelope in Thunderbirds.
Bruce, 64, a former publican, opened the shop after he retired to continue a passion, which began in his childhood.