Saturday, December 11, 2010

Comments for Week 14

Comment 1

Comment 2

Week 14 Notes

“What Cloud Computing Really Means,” by Galen Gruman and “Explaining Cloud Computing,” YouTube Video
I never really knew what cloud computing was until this Gruman article and the YouTube video. The video defines cloud computing as a model in which “data, software applications, or computer processing power are accessed from a ‘cloud’ of online resources.”  Both highlight that cloud computing is very inexpensive and will save companies and organizations from buying new infrastructure, buying new software, and hiring new employees.  Both also explained the different forms of cloud computing, such as SaaS, MSP, and HaaS.  I really enjoyed finding out that when I use Google applications (i.e. Google Docs) I’m actually using a form of cloud computing.  I think it’s a great trend and I hope it continues, especially if more services are like Google Docs, which lets me create and share documents or spreadsheets with numerous people at once. 

“The Future of Libraries: Beginning the Great Transformation,” by Thomas Frey
I really enjoyed this article, and I feel Frey gave an excellent overview of the history and trends affecting the future development of libraries.  I agree with all ten trends that Frey highlights, but my favorite was trend #1, where he lists the different technologies/communication systems people have used to access information, starting with the telegraph in 1844 and running through podcasting in 2004, to showcase how communication systems are always evolving.  I also liked trend #10, because I agree with Frey that if libraries can transition from centers of information to centers of culture, then they can be more “tapped into the spirit of the community.”  Frey also provides four recommendations for libraries, concentrating on preserving community memories and embracing new technologies. 

Muddiest Point for Week 14

I have no muddiest point for this week. 

Friday, December 3, 2010

Week 13 Notes

No Place to Hide
On the website, the first thing I noticed was the statement right below the heading:

“When you go to work, stop at the store, fly in a plane, or surf the web, you are being watched. They know where you live, the value of your home, the names of your friends and family, in some cases even what you read. Where the data revolution meets the needs of national security, there is no place to hide. ”

The statement kind of freaked me out, probably because I know that it’s true.  I basically have very little privacy, especially where technology and the issue of “national security” are concerned.  Not only can my every move be captured by surveillance cameras in every public store or setting I’m in, but since most records are now online, like bank statements and flight information, my personal information can be known too.  This topic is investigated by No Place to Hide, a multimedia investigation led by Robert O’Harrow, Jr. and the Center for Investigative Reporting.  On the site, one can read the final chapter and reviews of the book, read interviews with significant figures such as John Ashcroft and Viet Dinh, and find links to radio and television clips about the investigation.  I enjoyed reading the last chapter of the book, and it really made me believe that we no longer own the details of our lives, but that they belong to the “companies that collect them, and the government agencies that buy or demand them.”  However, remember it’s OK because it’s all in the name of keeping us safe.    

TIA and Data Mining
On this Electronic Privacy Information Center (EPIC) web page, information was provided on the tracking system called Total Information Awareness (TIA), which was designed by the Defense Advanced Research Projects Agency (DARPA) in November of 2002.  Knowing nothing of the project, I learned that the goal of TIA was to develop data-mining tools that would sort through databases of records (medical, financial, travel, communication, etc.) of individuals, so that the government could track suspicious activity and catch potential terrorists and criminals.  The project was shut down in September 2003, though, when Congress eliminated its funding.  However, projects like TIA still exist.  From the “Latest News” section of the site I learned that there are almost 200 data mining projects, either operating or in planning, across the federal government today.  Many make use of our personal data from private sector databases.  This really makes me wonder: how much of my right to privacy is being violated by the government? 

YouTube Video 

On YouTube it states that the video is no longer available due to a copyright claim by Viacom, therefore I cannot make any comments on the video.  Please let me know if anyone has found it!

Muddiest Point for Week 13

I do not have a muddiest point for this week. 

Thursday, November 25, 2010

Week 12 Notes

Due to the holiday I will not be posting notes for this week. 

Muddiest Point for Week 12

My question is still the one I proposed in my notes from last week.  How can more of the deep Web content get to the surface Web?  Is it that once someone makes a specific request and acquires content from the deep Web, that content automatically becomes part of the surface Web? Does that mean anyone could access that information now? 

Saturday, November 20, 2010

Week 11 Notes

"Web Search Engines: Part 1 and Part 2," by David Hawking
I felt like the information in this article went right over my head.   I just wasn't fully grasping the definitions and concepts of crawling and indexing, and the graphs in the figures were not as helpful as I thought they were going to be.  What I did gain, if I understood it correctly, was that a good "seed" URL, such as Wikipedia, will link to numerous Web sites, and these "seeds" are what initialize the crawler.  After the crawler scans the content of a "seed" URL, it adds any links to other URLs into the queue.  Additionally, it saves the Web page content for indexing.  Part 2 then goes on to explain indexing algorithms.  So, basically, an inverted file, used by search engines, is a concatenation (the operation of joining character strings end-to-end) of the posting lists for each particular word or phrase.  Each list contains the ID numbers of all the Web page documents that the word appears in.  In the end, I enjoyed Part 2 more than Part 1, but I honestly understood the simpler explanation from Wikipedia better than I did this article. 
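If I understood the posting-list idea correctly, it can be sketched in a few lines of Python. This is a toy illustration of the concept, not the article's actual implementation (real engines compress and concatenate these lists on disk):

```python
# Toy inverted index: map each word to the sorted list of document IDs
# that contain it -- the "posting lists" Hawking describes.
docs = {
    1: "web search engines crawl the web",
    2: "crawlers add links to a queue",
    3: "the queue feeds the web crawler",
}

index = {}
for doc_id, text in docs.items():
    for word in set(text.split()):        # set() so each doc is listed once per word
        index.setdefault(word, []).append(doc_id)

for word in index:
    index[word].sort()

print(index["web"])    # -> [1, 3]
print(index["queue"])  # -> [2, 3]
```

Answering a one-word query is then just a lookup of that word's posting list, which is what makes the structure so fast for search.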

"Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting," by Sarah L. Shreeves, et al. 
The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) was released in 2001 and is a "means to federate access to diverse e-print archives through metadata harvesting and aggregation."  Since its release, a wide variety of communities have begun to use the protocol for their own specific needs.  A 2003 study stated that over 300 active data providers from an array of institutions and domains were using OAI-PMH.  The article discusses the use of the protocol within these different communities as well as the challenges and future directions it faces.  My favorite part of the article was the three specific examples of communities using the protocol.  As a piano player, I was really interested in the Sheet Music Consortium, a collection of free digitized sheet music.  I am definitely intrigued to research more about it and see how the search service has progressed. 
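Out of curiosity, the protocol itself turns out to be simple at the wire level: an OAI-PMH request is just an HTTP GET with a "verb" parameter. A small Python sketch of how a harvester would form one (the repository endpoint below is hypothetical, not a real archive):

```python
from urllib.parse import urlencode

# Hypothetical OAI-PMH endpoint -- substitute a real repository's base URL.
base_url = "https://example.org/oai"

# ListRecords is one of the protocol's six verbs; metadataPrefix=oai_dc
# asks for records in unqualified Dublin Core.
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

request_url = base_url + "?" + urlencode(params)
print(request_url)

# A harvester would fetch this URL, parse the XML response, and
# aggregate the returned metadata records into its own index.
```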

"The Deep Web: Surfacing Hidden Value," by Michael K. Bergman
This article was the most fascinating to me this week.  I never knew there was a "deep Web" and that what we mostly view is just the "surface Web."  I was captivated to learn that additional content is stored on the Web but can only be accessed by direct request.  It made me wonder: how can more of the deep Web content get to the surface Web?  I also enjoyed the study performed by BrightPlanet, which used its exclusive search technology to quantify the size and importance of deep Web material.  I was most surprised by the finding that the deep Web is 400 to 550 times larger than the surface Web.  The finding that the "total quality content of the deep Web is 1,000 to 2,000 times greater than that of the surface web" was also surprising. 

Comments for Week 11

Comment 1

Comment 2

Friday, November 19, 2010

Muddiest Point for Week 11

I support the inclusion of an institutional repository within a digital library, but I know it's not required.  Within the field, what seems to be the most popular decision: to include the repository, or to disregard it due to its high costs? 

Saturday, November 13, 2010

Week 10 Notes

“Digital Libraries: Challenges and Influential Work,” by William H. Mischo
This article provided background information on the evolution of digital library technologies, much of which I knew nothing about.  I never knew that most of the research and projects were federally funded and university-led.  There were six university-led projects, all focusing on different aspects of digital library research.  I found the University of Illinois at Urbana-Champaign DLI-1 project the most interesting (and the project I would most have liked to work on), for it researched the “development of document, representation, processing, indexing, search and discovery, and delivery and rendering protocols for full-text journals.”  I also enjoyed that the article highlighted the actual achievements born from these projects.  For example, Google grew from the Stanford DLI-1 project, and the Cornell University & UK ePrint collaboration DLI-2 project contributed to the foundation of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH).  It was nice to see that these government-funded projects led to successful global programs and corporations. 

“Dewey Meets Turing: Librarians, Computer Scientists and the Digital Libraries Initiative,” by Andreas Paepcke et al.
This article discussed the collaboration between librarians and computer scientists on the Digital Library Initiative (DLI).  It was appealing to learn about the effect the World Wide Web had on both disciplines and the DLI.  I learned that it was more difficult for librarians to integrate the Web than it was for computer scientists.  Computer scientists were thrilled to research and incorporate subdisciplines of computer science, such as machine learning and statistical approaches, into their work, while librarians felt the Web was threatening traditional pillars of librarianship, such as the reference interview.   The article also stated that the Web affected both communities by turning the retrieval of information into a more laissez-faire culture.  Another interesting point was the conflict between the two disciplines, for librarians felt computer scientists were “stealing” money that should have been going into collection development, and computer scientists were frustrated with librarians’ emphasis on and wariness of metadata.  In the end, I felt the article was suggesting that the two needed to find common ground on how to work together. 

“Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age,” by Clifford A. Lynch
In this information-packed article, Lynch discusses the definition, importance, cautions, benefits, and future developments of institutional repositories, specifically university-based repositories.  He first makes a point of stating that a university-based institutional repository is a “set of services that a university offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members.”  I support his opinion that a repository must contain both faculty and students’ research and teaching materials, along with documents about the institution’s past and recent events and performances of intellectual life.  He goes on to argue that faculty must take the lead in adopting this new form of scholarly communication and make the shift to using this information network full of new distribution capabilities.  I also agreed with his three cautions about institutional repositories: that the institution should not try to assert control or ownership over works through the repository, should not overload the infrastructure with irrelevant policies, and lastly should not make light of the seriousness and importance of the repository to the community and to the scholarly world. My favorite recommendation of Lynch’s was his call to extend from institutional repositories to community and public repositories.  I believe this is a brilliant idea and, if accomplished, could lead to a wonderful collaboration between societal institutions, government, and members of the local community. 

Comments for Week 10

Comment 1

Comment 2

Friday, November 12, 2010

Muddiest Point for Week 10

I am still confused over child elements.  Can you only assign a single letter to represent the child element or can you assign a single word? Can you assign child elements to all elements in your document?  Is it really necessary to include child elements? 

Saturday, November 6, 2010

Comments for Week 9

Comment 1

Comment 2

Week 9 Notes

“An Introduction to the Extensible Markup Language (XML)” by Martin Bryan
“Extending Your Markup: An XML Tutorial” by Andre Bergholz
After reading these two articles I began to learn what XML is and how it works, though I still feel a little shaky on all its specific and intricate details.  What I do understand is that XML, unlike HTML, is a language that lets you meaningfully annotate text.  It does not have a single standardized way of coding text, and it does not have a predefined set of tags. A Document Type Definition (DTD), the component that defines structure within an XML document (or, as Bergholz puts it, a “context-free grammar”), allows users to choose their own tags, elements, and attributes.  This freedom to define the structural aspects your own way is a wonderful benefit of XML.  Since XML descriptions are structure-oriented rather than layout-oriented like HTML's, I believe XML is easier to write and comprehend. 
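To try out the freedom the articles describe, here is a tiny XML document using tags I made up myself, parsed with Python's standard library. The document and its tag names are a hypothetical example, not one from the readings:

```python
import xml.etree.ElementTree as ET

# A made-up XML document: none of these tags are predefined anywhere,
# which is exactly the point Bryan and Bergholz make about XML vs. HTML.
doc = """
<catalog>
  <book id="b1">
    <title>Data Compression Basics</title>
    <author>Jane Doe</author>
  </book>
</catalog>
"""

root = ET.fromstring(doc)
book = root.find("book")
print(book.get("id"))           # attribute access -> "b1"
print(book.find("title").text)  # element content  -> "Data Compression Basics"
```

A DTD or schema would then constrain which of these invented tags may appear where; the parser above only checks that the markup is well-formed.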

“A Survey of XML Standards: Part 1” by Uche Ogbuji
In the article, Ogbuji discusses the most important XML technologies, or as he puts it, standards.  For technologies to become standards they must be notably adopted by an array of vendors or respected organizations.  The article pointed out that most standards stemmed from W3C recommendations or from the International Organization for Standardization (ISO) and Organization for the Advancement of Structured Information Standards (OASIS).  Ogbuji listed some very interesting standards and the best part was that he included links for tutorials and other resources that would be useful in understanding each standard. I really enjoyed learning about the standards for XML Schema language like RELAX NG, the Schematron Assertion Language 1.5, and W3C XML Schema. 

XML Schema Tutorial
From what was stated in the Bergholz article, an XML Schema is like a DTD, for it defines “a grammar” for the document, but it is more expressive and uses XML syntax.  The tutorial provides a simple list of what an XML Schema does: its main purpose is to define the elements, the attributes, which elements are child elements, the order and number of elements, and the datatypes and values for each element or attribute.  This tutorial's Schema explanations were a lot easier to understand than the first two articles', and the examples were extremely helpful in grasping the basic concepts. 

Friday, October 29, 2010

Comments for Week 8

Comment 1

Comment 2

Week 8 Notes

HTML Tutorial
This website tutorial was an extremely helpful guide to basic HTML construction.  Before this class/reading I had never even thought about trying to learn how to create an HTML document (web page).  I thought it would be complicated and in a language so foreign to me that I would never understand it.  However, with this tutorial, learning the basics wasn't so hard.  I'm no master of it, but the “try it yourself” examples really helped me grasp the beginnings of HTML headings, paragraphs, formatting, styles, etc.  

HTML Cheatsheet
This website didn’t provide as much detail or explanation as the tutorial, but it will be a great handy guide for the construction of an HTML document.  I felt that the cheatsheet wasn't really meant to educate you on how to apply the correct HTML tag or syntax; it's more of a list that would be helpful to print out and keep next to you so you can quickly find the syntax you need. 

CSS Tutorial
Similar to the HTML tutorial, this website tutorial teaches you how to construct cascading style sheets, which is important because you need CSS to style how the HTML elements in your web page are displayed.  It provided the basic information, with the same “try it yourself” examples, on CSS styling of backgrounds, text, fonts, links, tables, etc.  However, what I found most helpful was the CSS Demo, where you can “see how it works.”  In the demo you can view how a web page would look in three different styles, and then they provide the stylesheet for each style you saw.  It was nice to match up the tags and proper syntax to how they would actually look on the document/web page. 

“Beyond HTML: Developing and Re-Imagining Library Web Guides in a Content Management System” by Doug Goans, Guy Leach, and Teri M. Vogel
This article studied and reported on the new content management system that was designed for the Georgia State University Library to manage its numerous web-based research guides.   I feel that the most important point of the article was that the collaboration between the web development personnel and the liaison librarians was the reason for the successful change to and construction of the CMS.  This case study is a great reference for seeing whom to consult and how to go about switching systems in order to provide the most content-rich guides for patrons. 

Thursday, October 28, 2010

Muddiest Point for Week 8

With Google possessing the majority (83.34%) of the search engine market share, do you think it could ever take over the whole market?  Is that a plausible thought? 

Thursday, October 21, 2010

Assignment 4

Here is my link:

http://www.citeulike.org/user/fboretzky

Comments for Week 7

http://elviaarroyo.blogspot.com/2010/10/week-7-internet-and-www-technologies.html?showComment=1287683808688#c3145034976514121313

http://jonas4444.blogspot.com/2010/10/reading-notes-for-week-7.html?showComment=1287684672363#c5240310989678980965

Week 7 Notes

Jeff Tyson “How Internet Infrastructure Works”
“The Internet is simply a network of networks.”  I love this quote and feel that it sums up the internet perfectly.  This article was an excellent guide in educating one about how internet infrastructure works; from basic structure to how information is sent to connection among networks/computers.  I found it interesting to learn about how different companies can become part of one network through Network Access Points (NAP) and how every machine has an IP address, which could then be given a domain name.  The examples provided within the article were also of great help in understanding the specific details. 

Andrew K. Pace “Dismantling Integrated Library Systems”
After looking up the definition and becoming familiar with all that an Integrated Library System (ILS) entails, I found the article very interesting.   I enjoyed the statement near the end of the article that discusses the two different choices a library vendor has: either to maintain the existing large system or to “dismantle” the existing module and integrate the system through web services.  I agree with the latter, for I feel that mixing the new with the old is not only inevitable with the advances in technology, but would also make for a better system for the future. 

Sergey Brin and Larry Page Google Video
The beginning of the video, where they show how much activity on Google is happening around the world, really makes you think about what a powerhouse Google is.  I just think of how much I use and rely on Google.  Not only is it my homepage, but I use Gmail, Gchat, and Google Docs, and it handles my everyday random searches.  Don't get me wrong, I love Google.  Especially their changing logo, which I feel brings to light cultural and social events/people/dates that people might forget about.  For example, today, October 21st, they honor jazz great Dizzy Gillespie with their logo because it is his birthday.  Also, I really enjoyed the insider information about the company shared in the video.  I never knew about all the charities they are involved with, or that they allow their workers to spend 20% of their time working on anything they want.  I think that's an amazing concept to promote creativity and innovation.  Lastly, it was nice to finally see what the founders look like, especially after reading and hearing about them for so long. 

Muddiest Point for Week 6

I have no muddiest point for this week. 

Saturday, October 9, 2010

Comments for Week 6

http://maj66.blogspot.com/2010/10/rfid.html?showComment=1286667355027#c5471502929754964259

http://annebetz-lis2600.blogspot.com/2010/10/week-6-readings.html?showComment=1286668032026#c3513074339742076581

Week 6 Notes

Local Area Network (Wikipedia)
The Wikipedia article was a great general explanation of a local area network (LAN).  It was helpful in describing the use, history, and technical aspects of a LAN.  I learned that a LAN is meant for smaller, limited areas such as your home, school, or office building (one, or a couple in close proximity to each other).  It was interesting to read that the LAN was only really established in the late 1970s, and so has only been around for 30 to 35 years.  The article also explained which cables are used for connection and how Ethernet and Wi-Fi are now the most popular, over twisted pair cable.  Since I understood all the terminology, the article was very easy to follow and a nice read. 

Computer Network (Wikipedia)
A computer network helps facilitate communication among users.  The Wikipedia article explains the numerous components of a computer network.  Firstly, it describes the four purposes of computer networks: communication; sharing hardware; sharing files, data, and information; and sharing software.  Connection methods and types of networks follow.  Learning about the different types was the most enjoyable aspect of the article.  From LAN to GAN, I finally understood the range of each type and learned interesting facts, such as that the internet is an example of a Global Area Network and that a Wide Area Network spans not only a city but can span a country or even intercontinental distances.  Lastly, hardware components such as repeaters, hubs, and routers were described.  The article as a whole was very detailed and helpful in expanding my knowledge of terms I was already familiar with. 
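One concrete way to see the LAN/internet split: the address ranges a home or office LAN typically uses are "private" and not routable on the wider internet. Python's standard ipaddress module can tell them apart (the two addresses below are just illustrative examples):

```python
import ipaddress

# 192.168.x.x is one of the reserved private ranges used on a typical home LAN.
lan_address = ipaddress.ip_address("192.168.1.10")

# 8.8.8.8 is a globally routable address on the public internet.
public_address = ipaddress.ip_address("8.8.8.8")

print(lan_address.is_private)     # True  -- only meaningful inside the LAN
print(public_address.is_private)  # False -- reachable across WAN/GAN scales
```

A router sits at the boundary, translating between the private LAN addresses and its own public address.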

Common Types of Computer Networks (YouTube Video)
The YouTube video by Relativity, narrated by CEO Frank J. Klein, was a wonderful, quick summary of the types of computer networks.  I wouldn't recommend the video if you are trying to learn the in-depth details of the different types, but it was nice to learn by just listening rather than reading. 

“Management of RFID in Libraries” by Karen Coyle (Journal of Academic Librarianship)
This journal article was an educational summary of what RFID is and how the technology can affect and impact libraries.  An RFID tag is a radio frequency identifier consisting of a computer chip that can be attached to or printed on paper.  It is similar to a barcode, but is read using an electro-magnetic field.  It can also contain more information than a barcode, which is a benefit within the library system: by storing more information about a book on its RFID tag, the library can acquire more knowledge about the book's circulation.  Other benefits of using RFID tags are improved inventory tracking, an anti-theft security mechanism, and the ability to check out a stack of books at once.  However, with these benefits also come disadvantages.  Coyle reveals that one can defeat an RFID tag by placing a piece of aluminum over it or by simply removing it, since most tags are attached just inside the book.  Also, with this new technology and the invention of self-checkout, there will be a decreased need for circulation staff.  It is hard to deny the pros of RFID tags, but I would never agree with cutting staff or jobs.  I still feel that person-to-person contact is needed and important at library checkout, if only because when a patron needs to ask a question, a machine cannot verbalize or explain the answer. 

Friday, October 8, 2010

Muddiest Point for Week 5

How much information or data has to be involved before we consider it a database?  Are there requirements or limits on the amount? For example, how much data would my own personal Excel spreadsheet have to contain for it to be considered a database?

Saturday, October 2, 2010

Week 5 Notes and Comments and Muddiest Point for Week 4

I will not be making posts this week and would like this week not to count towards the ten that I need.


Saturday, September 25, 2010

Comments for Week 4

http://cloderlis2600.blogspot.com/2010/09/week-4-reading-notes.html?showComment=1285442846071#c6231448890981081475

http://lis2600racheln.blogspot.com/2010/09/unit-four-reading-notes.html?showComment=1285443390283#c2852691751043557687

Week 4 Notes

Data Compression (Wikipedia & DVD-HQ website)
I never really knew much about data compression.  Yes, I could probably have guessed the overall goal, but I did not know the different applications or methods.  From the Wikipedia article I learned that there are two different methods: lossless and lossy compression.  Lossless seems like the best option because the data is exactly the same after decompression, whereas with lossy compression you lose some data permanently.  I also learned that lossy compression is used more for audio, video, and digital images, while lossless is used more for text, spreadsheets, and programs.  The DVD-HQ website was more helpful in diving deeper into the explanations of data compression.  It explained the basics in more detail and helped me understand all the acronyms, such as RLE and LZ.  In addition, the examples and images on the DVD-HQ website made the explanations easier to follow, especially for quantisation and motion compensation.  I feel that data compression is a great way to save space and valuable resources, but I think the time and extra processing needed for decompression is a major drawback.

Imaging Pittsburgh: Creating a Shared Gateway to Digital Image Collections of the Pittsburgh Region by Edward A. Galloway
This article was the most interesting to me this week.  As a Pittsburgh native I was excited to hear that there is a digital collection of photographs of my hometown's past.  I actually visited the website (http://digital.library.pitt.edu/pittsburgh/) and explored the images.  I really enjoyed that you can search by region, which allowed me to view images of the specific area where my family lived, dating back to when my grandparents settled in Pittsburgh. My favorite feature the website provides is the ability to order reproductions, which allows visitors to have copies of the images for their personal collections.  The article was also helpful in showcasing all the challenges that arise in forming a digital collection with numerous institutions.  It really made me think of all the aspects that have to be considered, such as the selection process, website development, and communication among the different establishments. 

YouTube and Libraries: It Could be a Beautiful Relationship by Paula L. Webb
After reading the article, I think that using one of the most popular and most visited websites would be extremely beneficial in helping library patrons.  I believe that students would really enjoy these videos and learning visually about library services; I know I would.  Also, the examples given of the different YouTube videos provided by universities, such as tutorials, tours through the library, and introductions to resources, highlight just the beginning of what could be accomplished through this partnership. 

Friday, September 24, 2010

Week 3 Muddiest Point

Concerning Unix and Linux, what exactly is a high-level programming language?  What makes it high level?  Is it the information being computed? What jobs or professions would use high-level languages? 

Saturday, September 18, 2010

Week 3 Notes

If you had asked me before this required reading to define Linux, Mac OS X, or Windows Vista, all I would have been able to tell you was that they had something to do with computers.  Now I can at least tell you that each is an operating system that sets up processes to accept, manipulate, store, and retrieve data.  They provide security and firewalls and make systems compatible with other operating systems.  Comparing all three, it seems that Mac OS X is the best for graphical display and presentation and has the most features, like the Aqua GUI and drop shadows.  With Windows Vista, it seems that there have been major advancements in graphical display to compete with Macs.  I'm excited to experience a newer Windows first hand, since my new laptop, which is in the mail, has Windows 7.  I still feel a little lost in fully understanding each, but hopefully through in-class presentation and discussion I'll gain a better understanding. 


I know our posts are due by Saturday evening, but I did not know by what time.  If my post is too late I would like it not to count as one of my ten required posts. 

Comment for Week 3

Here is the link for my comment on Aimee's Week 2 Notes:

http://acovel.blogspot.com/2010/09/week-2-computer-history-museum.html?showComment=1284853901104#c2883999028794833531


Friday, September 10, 2010

Week 2 Notes

Personal Computer Hardware (Wikipedia):  This website was extremely helpful for my understanding of computer hardware terminology. When it comes to computers I'm pretty naive about hardware setup and usually take mine to a store to get fixed, but at least now when they tell me the problems I'll understand them. I enjoyed learning about the storage capacity of removable media devices. I never knew a Blu-ray disc holds seventy times as much as a CD. Also, the mention of the floppy disc brought me back to high school and really made me think about how rapidly technology has advanced.

Moore's Law (Wikipedia & Video):  Moore's Law has truly shown how technology has grown. His prediction, made forty-five years ago, that the number of transistors would double about every two years has been spot on. It makes me wonder what technological gadgets will be like in 2015. Also, I found it interesting how companies have spent a massive amount of effort and money to keep up with Moore's Law and how it became an overall goal for the industry.
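The doubling rule is easy to play with numerically. A rough back-of-envelope sketch in Python (the starting count and the clean two-year period are illustrative assumptions, not real chip figures):

```python
# Back-of-envelope Moore's Law: transistor counts doubling roughly
# every two years means 2**(years / 2) growth.
def projected_transistors(start_count, years, doubling_period=2):
    return start_count * 2 ** (years / doubling_period)

# Over ten years, five doublings give a 32x increase.
growth = projected_transistors(1, 10) / projected_transistors(1, 0)
print(growth)  # -> 32.0

# Over the forty-five years since the prediction, that compounds to
# roughly 2**22.5, on the order of a five-million-fold increase.
print(round(projected_transistors(1, 45)))
```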

The Computer History Museum:  As a museum addict, exploring an unfamiliar museum website was an absolute pleasure. The exhibitions were all fascinating and nicely laid out. I spent most of the time exploring the Visible Storage exhibition, which provided highlights from the collection. It was fun to see the photographs of the different machines and computers from the past; it really shows how far we have come. Additionally, I enjoyed the history timeline and learning about the major companies, devices, and people that affected our culture. 

Friday, September 3, 2010

Week 1 Muddiest Point

I am still confused about when all our postings are due.  So the notes are due the Saturday before class and the muddiest points are due the Friday after class?  When are we supposed to comment on other students' blogs?  Do we comment directly on their blogs? 

Week 1 Notes

OCLC Report: 2004 Information Format Trends: Content, Not Containers
Society is changing in how content/information is created, used, and delivered. The invention of smartphones and other devices has changed how people internationally get their information. The report emphasizes that libraries must pay attention to this change. They must also get involved in blogs and wikis to connect more with the community. In addition, the report shows the decrease in new print book sales from 2002 to 2003, which is just one of the reasons libraries must acknowledge the shift to a more digitized form of information. The primary argument I took from this article was that libraries must find a way to supply content to mobile devices to stay connected to a technologically self-sufficient generation.


Lied Library @ Four Years: Technology Never Stands Still by Jason Vaughan
In the case study, Vaughan highlights the challenges that arise with constantly changing technology in an academic library setting. The most useful information from the study was the detail on how Lied Library reacted to each challenge and how it was ultimately fixed or handled. He showed that one must deal with all technological changes, including space management and security, rather than just be concerned with new system updates. Vaughan's examples illustrated how a library must continuously evolve technologically to provide a smooth and pleasing experience for its students. Also, I found it interesting that wireless networking was just being implemented in 2004, when it is such a norm at universities nowadays.


Information Literacy and Information Technology Literacy: New Components in the Curriculum for a Digital Culture by Clifford Lynch
To be literate in information, one must be literate in information technology. Lynch presents this connection and emphasizes how IT influences and molds how information is accessed and delivered. He also explains why having this IT knowledge is useful for "everyday" civilians: it will not only help one with career opportunities, but also shape one into an informed citizen. In relation to information, Lynch stresses that to be literate you must understand all multimedia genres in the communication world rather than thinking information is simply text.