(This is just to add to Beth’s discussion below about our previous blog.)
I don’t think there is an option to transfer blog posts on this particular account, but for the first few weeks we were using that blog to share initial inspirations and ideas for the project. You can also find our individual prototype and poster designs there!
We also discussed the possibility of using map overlays in the mobile version. We decided it may be best not to include them on mobile because of loading issues (the georeferenced overlays are pulled from the NLS site, so slow loading times may occur), and they may overcomplicate the phone version anyway. A simple map with markers may be preferable for users on the go.
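The decision above amounts to a simple branch at map setup time: plain markers for everyone, the heavier georeferenced overlay only on desktop. A minimal sketch, assuming a user-agent check is how we detect mobile (the names `mapConfigFor` and `useOverlay` are illustrative, not part of any real API):

```javascript
// Decide which map features to enable for a given client.
// Mobile clients skip the NLS overlay tiles, which load slowly;
// simple markers are shown everywhere.
function mapConfigFor(userAgent) {
  const isMobile = /Mobile|Android|iPhone|iPad/i.test(userAgent);
  return {
    useOverlay: !isMobile, // georeferenced NLS overlay: desktop only
    useMarkers: true,      // lightweight markers: all devices
  };
}
```

In the real site this result would drive which layers get added to the Google Map when the page initialises.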
So, lots of great ideas, but little time left! I hope to implement one or two overlays for the presentation and for the April 27th submission, possibly more if time permits. I think the maps will add a great element to the feel of the project and contribute to the user experience.
Starting on Tuesday, we are going to have a DMSP-athon. 10 am until some point in the evening, followed by a second round on Wednesday, also beginning at 10 am.
Our schedule for the upcoming week:
Tuesday: work all day
Wednesday: finish and troubleshoot
We decided that, as there is a bug with CereProc, we’re going to make a few canned recordings, just to be on the safe side.
Everyone needs to send me their three bullet points for discussion by Saturday.
For our presentation:
The website must be completed
The mobile version must be completed, with geolocation and the map working
We need something for the voice, even if the user has to press play when at a location. Either use an actor or Mac voice synthesis.
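The “press play when at a location” fallback still needs the geolocation piece: knowing which stop the user is standing near. A minimal sketch of that logic, using the standard haversine great-circle distance; the marker list shape and the 30-metre threshold are assumptions for illustration:

```javascript
const EARTH_RADIUS_M = 6371000;

// Great-circle distance between two {lat, lng} points in metres (haversine).
function distanceMetres(a, b) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}

// Return the closest marker within `withinMetres` of the user, or null.
function nearestStop(position, markers, withinMetres = 30) {
  let best = null;
  for (const m of markers) {
    const d = distanceMetres(position, m);
    if (d <= withinMetres && (best === null || d < best.distance)) {
      best = { marker: m, distance: d };
    }
  }
  return best;
}
```

In the browser this would be fed by `navigator.geolocation.watchPosition`, with the user pressing play once the nearest stop is shown.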
At our supervisor/reality-check meeting on Thursday, John met with us and gave us a good critique of how the project looks. We are having difficulty getting the project to look right on different screens: it looks best on an iPad, is near impossible to read on an iPhone, and is good enough on a PC.
We are having a few disagreements about how the website should be put together (linked pages versus slides), but these have, hopefully, been resolved.
Bing has put together a really good Flickr album with photographs for our website.
We are running into difficulty with the CereProc cloud API, and are meeting Monday afternoon to discuss.
As team leader, I have my concerns about the project being completed on time. I’m not sure if we’ll be able to get the geolocation and voice-synthesis portions completed in time for our presentation.
As we are presenting on Friday, 6 April, this is what absolutely must be accomplished by then:
-Mobile site up and running
Today we had a group meeting to discuss the progress we have made over the week. Wen has created the database and has successfully linked the markers with coordinates (yay!). Yi has made more progress on the website design and has added an ‘about’ page with information about the project & team. Jessica is continuing work on the map overlay and social media. Beth and Michelle are working on the CereProc cloud API while Esme is working on the animation for the website.
One issue we are still trying to solve is linking the marker points to specific panoramas. We will be exploring AJAX and playing around with PHP to get this working. The map overlay is proving more difficult than expected, and we may need to seek help with these problems during the next week.
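One way the marker-to-panorama link could work: each marker click fires an AJAX request to a PHP script that looks the panorama up in the database by marker id. A rough sketch under those assumptions; the endpoint name `panorama.php` and its JSON response are hypothetical, not our actual backend:

```javascript
// Build the request URL for a marker's panorama.
// Assumes a PHP endpoint like panorama.php?id=N (hypothetical).
function panoramaUrlFor(markerId) {
  return `panorama.php?id=${encodeURIComponent(markerId)}`;
}

// Fetch the panorama data for a marker and hand it to a render callback.
function loadPanorama(markerId, render) {
  // Classic AJAX: the PHP script would return JSON describing the panorama.
  const xhr = new XMLHttpRequest();
  xhr.open('GET', panoramaUrlFor(markerId));
  xhr.onload = () => {
    if (xhr.status === 200) render(JSON.parse(xhr.responseText));
  };
  xhr.send();
}
```

The marker’s click handler would then call `loadPanorama(marker.id, showPanorama)` to swap the panorama and its text into the page.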
We set a date for the presentation and discussed presentation plans. We hope to hold the presentation in the atrium, using the projector to introduce and demonstrate the website. We will then take a group to the Royal Mile to test out the mobile version of the application. Alternatively, users can stay in the atrium and use the web version.
Tomorrow we will be meeting with the group for a reality check with the supervisors.
Had a very successful group meeting today. Much progress has been made on our website (way to go Yi!), databases (good job Wen) and map (yay Jessica and Wen).
We still have to:
1. Fix links on map–click & move to panorama and text
2. Change Tour Time to: Short, Medium, Long
a. Short has texts 1-8
b. Medium has texts 1-16
c. Long has texts 1-24
3. Literary High Street Flickr Account–create
4. CereProc (read documentation)
5. Acknowledgment page
6. Mobile version of website
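The tour-time change in item 2 maps each length directly onto a range of text ids, so it can be captured in one small helper. A sketch, with illustrative names:

```javascript
// Highest text id included in each tour length,
// per the plan: Short = 1-8, Medium = 1-16, Long = 1-24.
const TOUR_RANGES = { short: 8, medium: 16, long: 24 };

// Return the list of text ids included in a given tour length.
function textsForTour(length) {
  const max = TOUR_RANGES[length];
  if (max === undefined) throw new Error(`unknown tour length: ${length}`);
  return Array.from({ length: max }, (_, i) => i + 1); // ids 1..max
}
```

Because each longer tour is a superset of the shorter ones, the map can filter its markers with the same function.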
A week ago Sunday, Jessica, Bing and I took to the Royal Mile. Armed with Bing’s camera and the Zoom video camera I’d borrowed, we photographed several panoramas for use in our app and website.
We also took several still photographs for use in the site and for documentation’s sake. I videotaped some of the photographing, along with some ‘talking shop’ where Bing told Jessica and me about the mechanics of photography. I also filmed some street shots of people walking and of the Royal Mile’s life (bagpipers, tourists, students, etc.) for use in the ‘commercial’ we intend to pull together for our final submission. We will be making a 90-second documentary about the app.
Attended: All members of the group were present including supervisor Mark Wright.
Today we had a meeting with Dr. Matthew Aylett, the chief technical officer of CereProc. CereProc has developed an advanced text-to-speech technology that we hope to implement in our web-based application. Focusing on user experience, we want to offer an audible, hands-free feature for Literary High Street. Matthew gave us a brief introduction to CereProc and how we could best incorporate the technology within our application.
We decided that using the CereProc cloud-based API would be a great opportunity for both CereProc and the DMSP project. The API will allow us to adjust voices for tone, stress and emotion, add background music, and possibly use 3D audio. Because the API is in the final stages of beta testing, our project could help CereProc test it and provide user feedback.
Here you can try out a live demo of the text-to-speech technology. It’s pretty fun to mess around with!
Notes from the meeting:
They are working on embedding sound effects and music, mixing with other audio, and whispers. Voices could be built from audiobooks.
The emotion of the voice and its qualities are important to consider. Examples of voice qualities include stressed vs. relaxed delivery, repetition, and synthesizing multiple voices.
Consider voice as a sense of presence; one could argue that a neutral voice doesn’t sound ‘present’.
Mimic spatial reverberation by using an impulse response recorded in the environment.
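The impulse-response idea above is, mathematically, convolution: each dry sample is smeared through the room’s recorded response. In the browser this is what the Web Audio API’s `ConvolverNode` does; the underlying operation can be sketched as a direct (naive) convolution over sample arrays:

```javascript
// Direct convolution: y[n] = sum over k of x[k] * h[n - k].
// `signal` is the dry audio, `impulse` the room's impulse response.
function convolve(signal, impulse) {
  const out = new Array(signal.length + impulse.length - 1).fill(0);
  for (let i = 0; i < signal.length; i++) {
    for (let j = 0; j < impulse.length; j++) {
      out[i + j] += signal[i] * impulse[j];
    }
  }
  return out;
}
```

Real audio would use FFT-based convolution (or just `ConvolverNode`) for speed, since the direct form is O(n·m), but the result is the same.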
What would be best for the group to use for the project? The cloud service API. Its text-to-speech turnaround time is very fast, and short paragraphs are preferable. The cloud would also allow us to experiment with 3D audio.
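Since short paragraphs give the fastest turnaround, we would want to split each stop’s text into sentence-bounded chunks before sending it to the cloud. A sketch of that pre-processing step; the 250-character limit is our own assumption, not a documented CereProc requirement:

```javascript
// Split text into sentence-bounded chunks of at most `maxChars` characters,
// so each text-to-speech request stays short. A single sentence longer than
// maxChars becomes its own (oversized) chunk rather than being cut mid-word.
function chunkForTts(text, maxChars = 250) {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) || [];
  const chunks = [];
  let current = '';
  for (const s of sentences) {
    if (current && (current + s).length > maxChars) {
      chunks.push(current.trim());
      current = '';
    }
    current += s;
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}
```

Each chunk would then be one request to the API, and the resulting audio clips played back to back.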
Questions: How would using the same voice for different personalities affect the experience? Reading text vs. narration: how does this affect immersion? Imagination? What about tourists? Will there be a version in different languages?
A concern with using CereProc may be noise. Would the Royal Mile be too noisy for the user to hear?
The Mile doesn’t have a lot of traffic, and we can position texts near areas with less noise.
The group has made great progress over the last few weeks. We have decided to move forward with our website prototype and have assigned group roles. The main focus for the next few weeks will be to complete the Literary High Street demo website, the bulk of our project. We will be creating a ‘mini’ version of the application to test user functionality and explore aspects of psychogeography. Interactive panoramas and maps will be central to the interface. Also, a few group members will be creating a short animation and/or video documenting our design process that will serve as documentation for the final submission.
Group Roles This Week:
Michelle & Yi: User Interface (HTML/CSS)
Jessica: Text & Map geolocations, Google Map API, panorama photo shoot