Wednesday 8 June 2011

Video clips of Supreme Kidz that can be viewed from the channels.
Click here >> Submission_C.pdf

Wednesday 11 May 2011

Part of the experiment with creating an experience or evoking an emotion.

This was the failed attempt at exporting it as md2; instead, we glued a video file to the tracking image.
A simple animated object that shows movement.

One possible outcome we considered from this was presenting to our client an idea of a detail within the augmented interior space.
The tracking image we glued our test objects onto branches back to part of our idea: creating the interior space of another building within the existing real space we occupy at a given time.
Our attempt at a fluid md2 model, which we did not manage to save as md2 in time, so we animated the video instead.

We are still in the process of getting the animation to work as an md2 model in Junaio.
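One quick sanity check while we keep trying the md2 export is to verify that the exporter actually wrote a valid MD2 file before uploading it to the channel. The MD2 format begins with the magic bytes "IDP2" followed by a little-endian version number of 8. A short sketch of such a check (this is an illustrative helper, not part of our actual pipeline):

```python
import struct

# MD2 files begin with an 8-byte header: the magic string "IDP2"
# followed by a little-endian int32 version number, which must be 8.
MD2_MAGIC = b"IDP2"
MD2_VERSION = 8

def looks_like_md2(path):
    """Return True if the file at `path` starts with a valid MD2 header."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8:
        return False
    magic = header[:4]
    (version,) = struct.unpack("<i", header[4:8])
    return magic == MD2_MAGIC and version == MD2_VERSION
```

Running this on our failed exports would tell us straight away whether Blender produced an MD2 at all, or wrote something else entirely.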
Initially, David and I had the idea of using augmented reality to transform an interior space from what it is in reality into the interior of a different building, within the real-life space itself. We wanted to augment the reality so we could change the experience of the interior space. We also wanted to push it further than that: to make the augmented reality something we could engage with, to the extent of having game characters that recognise the scene and interact with the real-life environment, like a game of “hide and seek” where the augmented characters hide behind real-life chairs and tables. The augmented reality we wanted to create was a reactive environment that engaged with us as much as we could engage with it, as far as augmented reality would let us.

That was the initial goal. But given our lack of knowledge about the intricacies of this technology, creating an environment that reacted to real-time space was well outside our range of knowledge and expertise, and overall a tad ambitious.
After a consultation with Dermott Mcmeel, he advised our group to focus on one of the ideas we had in mind: concentrating on a space and changing the environment and the experience one has within that space using augmented reality. By changing the environment in augmented reality, we are trying to evoke an emotional response to that space, which is vital to the idea of architecture within the built environment: the feeling of a space relative to its users.

After reviewing our situation and the initial goal, given our lack of knowledge about the technology and the limited time we have to create an output, we have decided to simplify the idea right down to gluing animated objects, and hopefully to develop that further into gluing another built environment into a space.
At this stage we are experimenting with different animations and different evocative animated objects that could evoke different emotional responses, to build up our knowledge base with this new technology.

The main part of our media design that David and I will focus on getting right, and will use as a tool to bring our idea to augmented life, is the process of gluing. We are at the stage of choosing the experiences we want to evoke and trying to translate them into Blender and then into the junaio interface. The experiences we have been experimenting with in Blender are as follows, and the list is still growing.
We have created a few animations in Blender, using both video files and 3d models, that deal with this idea of evoking a response.

Our first attempt was quite a simple animation: a box moving around in space. This worked for David and me, which pushed us to look at things more interesting than a simple box in space.
One animation is of water, where the emotion we aim to evoke is fear: the fear of drowning. The problem we encountered was with saving the animation in the .md2 file format; we could not get the object to save as an .md2 file in Blender, so we simply saved it as an .avi file and glued that onto a channel. This could lead us in a similar direction to what Tsouknidas Nikolaos and Tomimatsu Kiyoshi of Kyushu University (2010) did with video projections relative to the QR code. The idea of giving the video a coordinate, so that the augmented reality camera can recognise the video without having to keep the QR code within the camera's view, is a possible avenue for David and me to look at and possibly pursue.
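For reference, gluing an asset to a tracking image in junaio comes down to the XML the channel server returns. The fragment below is only an illustrative sketch: the tag names are reconstructed from memory of the 2011 junaio GLUE documentation and may not be exact, and all URLs are placeholders, not our actual files.

```xml
<results trackingurl="http://www.example.com/tracking.zip">
  <object id="water">
    <assets3d>
      <!-- either an .md2 model, or (as in our workaround) a movie file -->
      <model>http://www.example.com/water.md2</model>
      <texture>http://www.example.com/water.png</texture>
    </assets3d>
    <transform>
      <translation><x>0</x><y>0</y><z>0</z></translation>
      <rotation type="eulerdeg"><x>0</x><y>0</y><z>0</z></rotation>
      <scale><x>1</x><y>1</y><z>1</z></scale>
    </transform>
  </object>
</results>
```

Swapping the model reference for the .avi was the only change our workaround needed; the tracking image and transform stayed the same.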

Another problem we keep running into is the phase within the junaio QR code creator when it does not recognise or bring up the uploaded model or video. Most times it works, but there are occasions when junaio doesn't let us in; this was a factor that aided our decision to simplify our media design idea.
We also encountered an error with gluing, in the difference between our model input and what we see once it is glued. The junaio glue program does not show the whole animation, i.e. the full movement of our box; it cuts the animation midway through, which could inhibit the outcome we want when we choose our final product.

Our aim for the final submission is to create a space, rather than just animated 3d objects, to change the environment of the space. The feeling we plan to explore is still undecided, but while building these working prototypes we will develop it according to our skill set and ability with the technology.
So David and I are no longer looking for vigorous engagement between augmented reality and reality through gaming, but are just trying to achieve a simple idea: changing the real space by placing another augmented environment within it. Whether that environment is static or not is still undecided while we get to grips with the Blender interface. Our idea is still quite vague, but hopefully we will narrow it down to one.



Tuesday 5 April 2011

Junaio gluing

Junaio Glue experiment

The image we used that Junaio recognises for our 3d model