Friday, March 30, 2012

Lab 11

Google:
Screen shot of initial search terms:
http://screencast.com/t/Xl7by6kMy3it

Virtual AND reference; "digital library"; published between 2008 and 2012

This search yielded 8,000+ results. I added "trends" as a search term and narrowed the field to 2,770 results.

Screenshot of results:
http://screencast.com/t/Tz6E5MRvzf

Web of Knowledge:

Final search terms:
"Digital Library" AND "Remote Reference" AND trends OR "virtual reference". Limited to 2008 - 2012. Narrowed to subject area: Information Science & Library Science.

Returned 55 results.

Screenshot of results:
http://screencast.com/t/AG8BjbIXkO

Lab 10

In class. Fast track weekend.

Week 12 Reading Notes

Web Search Engines, Part 1 & 2 [David Hawking]
- The amount of data search engines must index has grown astonishingly over 15 years, as has the quality of their responses to queries that could have so many different meanings.
- I don't think that most people understand what happens on the "inside" when they type something into the Google search bar. The algorithm that mines data is so cool!  I had no idea what happens in terms of excluded and duplicate content.
- For example, I accessed the articles themselves through a search engine. The links provided on courseweb didn't show the full text, so I entered the author name and title of each article into a search bar to find the article text.
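Hawking describes how indexers detect and exclude duplicate content. As a toy illustration (my own sketch, not Hawking's actual algorithm), one common idea is content fingerprinting: normalize each page's text, hash it, and skip anything whose hash has already been seen.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalize whitespace and case, then hash the content."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

pages = [
    "Web Search Engines, Part 1",
    "Web  search engines, part 1",   # same text, different spacing and case
    "The Deep Web: Surfacing Hidden Value",
]

seen = set()
unique = []
for page in pages:
    fp = fingerprint(page)
    if fp not in seen:               # skip exact duplicates
        seen.add(fp)
        unique.append(page)

print(len(unique))  # the two near-identical pages collapse into one
```

Real engines go further and detect *near*-duplicates with shingling or similarity hashing, but the exclude-what-you've-seen principle is the same.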

The Deep Web: Surfacing Hidden Value [Michael Bergman]

- I love the comparison of the web to the deep ocean. So much information is buried deeply and we don't know it exists. I also wonder if it matters that we don't know? Is it such specialized content that we don't need to know about it, or would all of our society benefit from mining the deep web?
- On the statistics - 550 billion documents in the deep web compared with 1 billion on the surface web? What will we do with all of that information?
- "95 percent of the deep web is publicly accessible" - what can we do with this information? If, as the article suggests, it is of a higher quality, more niche, more specific, and not subject to restrictions, shouldn't we be using it more?
- Interesting to see which of the most trafficked deep web sites are freely accessible. Some, like JSTOR, I use frequently.
- This article is 10 years old. I wonder how much the statistics have changed since then?

Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting
- This article was hard for me to understand because I know very little about the Open Archives Initiative.
- Reading about the different initiatives, like OLAC and the Sheet Music Consortium, was interesting. I actually used the Sheet Music Consortium on a different project.
- Again, I am so impressed by and interested in the work these folks are doing. My brain does not easily grasp what they are doing and what they want to achieve - I am much more people-oriented in my pursuit of librarianship. I'm glad there are people who can pay attention to these parts of the bigger picture!

Saturday, March 17, 2012

Week 10 Reading Notes

Introduction to XML
- XML and HTML are so closely connected!
- There's a difference between displaying information in a way the machine can read and describing it in a way the machine can understand.
- Tags, elements, attributes that seem to nest together.
- I had no idea how XML has transformed the way we search and how websites streamline the process of searching for information.
- XML is STRICT! The rules a code writer must follow seem far stricter than HTML's. When I was writing some of our HTML, I found it very easy to skip an end tag but still wind up with what I was after. I'm excited to learn more about writing XML.
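To see that strictness concretely, here's a small sketch using Python's standard-library XML parser: the well-formed document parses fine, while the one with a missing end tag is rejected outright (a browser would quietly tolerate the same omission in HTML).

```python
import xml.etree.ElementTree as ET

good = "<book><title>Lab Notes</title></book>"
bad = "<book><title>Lab Notes</book>"   # missing </title> end tag

root = ET.fromstring(good)
print(root.find("title").text)          # prints "Lab Notes"

try:
    ET.fromstring(bad)
except ET.ParseError as err:
    print("rejected:", err)             # parser refuses the mismatched tag
```

Any XML parser is required to stop on a well-formedness error like this, which is exactly why skipping an end tag "works" in HTML but never in XML.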

A Survey of XML Standards: PART 1
- I feel like - as with the CSS and HTML modules - I will understand what each of these items is for once we come back and begin experimenting with making simple XML pages. Right now, XML Base, XInclude, XPath, etc. are a blur, but this is a great resource page, and I have it bookmarked.

W3 XML Schema Tutorial
- Yay! I love the W3 articles we've used for HTML and CSS.
- The "Why use XML Schema" section was very helpful. It seems like XML Schema is most useful for websites that need to take in user-entered data directly, or that need to link to and use data from databases.
- I'm looking forward to using this XML language and eventually learning more about XHTML as a language.
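Python's standard library can't validate against a real XML Schema (third-party tools like lxml can), but the core idea can be sketched by hand: a schema is essentially a set of rules about which elements a document must contain, checked against the parsed document. This toy checker (my own illustration, using the `note` example from the W3 tutorial) captures the concept.

```python
import xml.etree.ElementTree as ET

# Toy "schema": a note element must contain exactly these children, in order.
REQUIRED_CHILDREN = ["to", "from", "heading", "body"]

def validate_note(xml_text: str) -> bool:
    """Return True if the document matches our toy schema rules."""
    root = ET.fromstring(xml_text)
    if root.tag != "note":
        return False
    children = [child.tag for child in root]
    return children == REQUIRED_CHILDREN

valid = ("<note><to>Tove</to><from>Jani</from>"
         "<heading>Reminder</heading><body>Hi</body></note>")
invalid = "<note><to>Tove</to><body>Hi</body></note>"   # missing elements

print(validate_note(valid))    # True
print(validate_note(invalid))  # False
```

A real XSD validator does the same kind of checking, plus datatypes, attribute rules, and cardinality - all declared in XML itself rather than in code.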

Friday, March 9, 2012

LAB 8

Link to revised .css document:

http://screencast.com/t/3oiOvSVB

Link to updated page:

http://www.pitt.edu/~hmd9/index2.html

Week 9 Reading Notes


 
HTML5 Tutorial

- Very understandable that advances in technology call for a new language. The new features - like <canvas>, <video>, etc. - are part of what makes Web 2.0 possible.
- Good to know that this new language supports video playback natively in browsers rather than relying on a plugin. It's so annoying when plugins crash!
- Not sure that I understand how "drag and drop" really works yet...
- It seems like it's safe to assume, especially given the geolocation function and caches, that HTML5 is behind most "advanced" - read snazzy - websites. 

Wikipedia: HTML5 
- When reading the first tutorial, I wondered if HTML5 was meant to run on smartphones and tablets. Good to know. 
- When I was an undergraduate, tenth graders in my practicum class used Flash to illustrate a short story. I remember how cumbersome it was to teach and to use - I can't imagine what it must be like as a web page development tool. And now...it's gone! No longer the standard!
- What happens when the Recommendation is released? Will there ever be a point when there is no longer interoperation between languages, or when the possibilities of the new language are compromised by attempts to remain interoperable?