Sunday, November 16, 2008

week 11 comments

https://www.blogger.com/comment.g?blogID=301150766198525940&postID=5836373957895676096&page=1

https://www.blogger.com/comment.g?blogID=1057727177405306622&postID=1506011019081134724&page=1

Friday, November 14, 2008

week 11 readings

Digital Libraries: Challenges and Influential Work
Effectively searching all digital resources over the internet remains a challenging, problem-filled task. People have been working to turn the vast amount of digital collections into true digital libraries. "Federal programmatic support for digital library research was formulated in a series of community-based planning workshops sponsored by the National Science Foundation (NSF) in 1993-1994." "The first significant federal investment in digital library research came in 1994 with the funding of six projects under the auspices of the Digital Libraries Initiative (now called DLI-1) program." After DLI-1 came DLI-2. "In aggregate, between 1994 and 1999, a total of $68 million in federal research grants were awarded under DLI-1 and DLI-2." "DLI-1 funded six university-led projects to develop and implement computing and networking technologies that could make large-scale electronic test collections accessible and interoperable." Several of the projects examined issues connected with federation. "There has been a surge of interest in metasearch or federated search technologies by vendors, information content providers, and portal developers. These metasearch systems employ aggregated search (collocating content within one search engine) or broadcast searching against remote resources as mechanisms for distributed resource retrieval. Google, Google Scholar and OAI search services typify the aggregated or harvested approach."

Dewey Meets Turing: Librarians, Computer Scientists, and the Digital Libraries Initiative
The Google search engine emerged from work funded by the DLI. An interesting aspect of the DLI was how it united librarians and computer scientists. "For computer scientists NSF's DL Initiative provided a framework for exciting new work that was to be informed by the centuries-old discipline and values of librarianship." "For librarians the new Initiative was promising from two perspectives. They had observed over the years that the natural sciences were beneficiaries of large grants, while library operations were much more difficult to fund and maintain. The Initiative would finally be a conduit for much needed funds." "...the Initiative understood that information technologies were indeed important to ensure libraries' continued impact on scholarly work." "The Web's advent significantly changed many plans. The new phenomenon's rapid spread propelled computer scientists and libraries into unforeseen directions. Both partners suddenly had a somewhat undisciplined teenager on their hands without the benefit of prior toddler-level co-parenting." "The Web not only blurred the distinction between consumers and producers of information, but it dispersed most items that in the aggregate should have been collections across the world and under diverse ownership. This change undermined the common ground that had brought the two disciplines together."

Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age
"The development of institutional repositories emerged as a new strategy that allows universities to apply serious, systematic leverage..." Many things have made this possible such as the price of online storage costs dropping and the Open archives metadata harvesting protocol. "The leadership of the Massachusetts Institute of Technology (MIT) in the development and deployment of the DSpace institutional repository system http://www.dspace.org/, created in collaboration with the Hewlett Packard Corporation, has been a model pointing the way forward for many other universities." "... a university-based institutional repository is a set of services that a university offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members." "At the most basic and fundamental level, an institutional repository is a recognition that the intellectual life and scholarship of our universities will increasingly be represented, documented, and shared in digital form." The author includes another use for IR's and some cautions as well. ". I have argued that research libraries must establish new collection development strategies for the digital world, taking stewardship responsibility for content that will be of future scholarly importance..." The article ends by mentioning the future of IR.

Thursday, November 13, 2008

muddiest point week 11

My muddiest point for this week is: do we only need 10 weeks' worth of comments and muddiest points, or not? I remember hearing that, I think, but I am not 100% sure and want to make sure I do not miss any assignments.

Monday, November 10, 2008

assignment

I had trouble using FTP to post to the Pitt server; I spent hours frustrated trying to get it to work. I used another service to upload the page instead, but still followed all the guidelines. Here is my site: http://www.freewebs.com/karategirl611/

Sunday, November 9, 2008

week 10 comments

Here are my comments for the week: https://www.blogger.com/comment.g?blogID=633484337573796975&postID=4757088958536674311&page=1
https://www.blogger.com/comment.g?blogID=4736393327020365268&postID=8240492140679815932&page=1

Tuesday, November 4, 2008

week 10 reading notes

Web Search Engines Part 1
It was believed in 1995 that indexing the web couldn't be done due to the web's exponential growth. In order to provide the most useful and cost-effective service, search engines must reject as much low-value automated content as possible. Also, web search engines have no access to restricted content. Large search engines operate multiple geographically distributed data centers, and within a data center, services are built up from clusters of commodity PCs. The current amount of web data that search engines crawl and index is about 400 terabytes. A simple crawling algorithm uses a queue of URLs yet to be visited and a fast mechanism for determining whether a URL has already been seen. A crawler works by making an HTTP request to get the page at the first URL in the queue; when it gets that page, it scans the content for links to other URLs and adds each unseen URL to the queue. A simple crawling algorithm must be extended to address the following issues: speed, politeness, excluded content, duplicate content, continuous crawling, and spam rejection. As can be seen, crawlers are highly complex systems.
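To make that loop concrete, here is a rough Python sketch of my own of the simple algorithm the article describes (the regular-expression link extraction and the max_pages cutoff are my simplifications, not from the article):

from collections import deque
from urllib.request import urlopen
import re

def crawl(seed_urls, max_pages=100):
    queue = deque(seed_urls)  # URLs yet to be visited
    seen = set(seed_urls)     # fast check for already-seen URLs
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url).read().decode("utf-8", "ignore")  # HTTP request for the page
        except Exception:
            continue          # skip pages that fail to load
        for link in re.findall(r'href="(http[^"]+)"', html):  # scan the content for links
            if link not in seen:  # add each unseen URL to the queue
                seen.add(link)
                queue.append(link)

A real crawler would then layer on the extensions the article lists: speed, politeness, excluded and duplicate content, continuous crawling, and spam rejection.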

Web Search Engines Part 2
Search engines use an inverted file to rapidly identify the documents that contain a given indexing term. An indexer can create an inverted file in two phases. First is scanning: the indexer scans the text of each input document, and for each indexable term it encounters, it writes a posting consisting of a document number and a term number to a temporary file. Second is inversion: the indexer sorts the temporary file into term-number order and records the starting point and length of the list for each entry in the term dictionary.
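Here is a small Python sketch of my own of those two phases, using an in-memory list in place of the temporary file and split() as a stand-in for real term extraction:

def build_inverted_file(docs):
    # Phase 1: scanning - for each indexable term in each input
    # document, write a (term, document number) posting
    postings = []
    for doc_number, text in enumerate(docs):
        for term in text.split():
            postings.append((term, doc_number))
    # Phase 2: inversion - sort the postings into term order, then
    # record the starting point and length of each term's list
    postings.sort()
    dictionary = {}
    start = 0
    for i, (term, _) in enumerate(postings):
        if i + 1 == len(postings) or postings[i + 1][0] != term:
            dictionary[term] = (start, i + 1 - start)
            start = i + 1
    return postings, dictionary

For example, build_inverted_file(["cat sat", "cat ran"]) records that the posting list for "cat" starts at position 0 and has length 2.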

The Deep Web: Hidden Value
The deep web refers to information that is hidden on the web and won't come up in a search engine. This is because most web information is buried far down on dynamically generated sites, and therefore standard search engines never find it. The deep web contains public information that is 400 to 550 times larger than the commonly defined world wide web. Search engines get their listings two ways: one is when someone submits a site, and the second, mentioned in the other article, is crawling. Crawling can only retrieve so many results. The goal of the study was to
- quantify the size and importance of the deep web
- characterize the deep web's content, quality, and relevance to info seekers
- begin the process of educating the internet searching public about this heretofore hidden and valuable information storehouse
This study did not investigate non-web sources or private intranet information. They then came up with a common denominator for size comparisons. All of the retrieval, aggregation, and characterization in the study used BrightPlanet technology. Analyzing the deep web sites involved a number of discrete tasks, such as qualification as a deep web site. We see from the results that existing deep web sites cover a wide range of topics. The article also mentions the differences between deep web sites. The deep web is 500 times larger than the surface web. It is possible for deep web information to surface and for surface web information to remain hidden.

Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting
The OAI-PMH was initially developed as a means to federate access to diverse e-print archives through metadata harvesting and aggregation. The OAI-PMH has demonstrated its potential usefulness to a broad range of communities, and there are over 300 active data providers using it. One notable use is the Open Language Archives Community, whose mission is to create a "worldwide virtual library of language resources." There are registries of OAI repositories, but they have two shortcomings: one, they maintain very sparse records about individual repositories, and two, they lack completeness. The UIUC research group built the Experimental OAI Registry to address these shortcomings. The registry is now fully operational, but there remain many improvements the group would like to make to increase its usefulness. Today there remain ongoing challenges for the OAI community, such as metadata variation and metadata formats.
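To show how simple harvesting is, here is a minimal Python sketch; verb=ListRecords and metadataPrefix=oai_dc are real OAI-PMH parameters, but the base URL is a made-up placeholder:

from urllib.request import urlopen
from urllib.parse import urlencode

base_url = "http://example.org/oai"  # placeholder for a repository's OAI-PMH base URL
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

# A harvester issues a plain HTTP GET; the repository responds with
# an XML document containing the requested metadata records.
response = urlopen(base_url + "?" + urlencode(params))
print(response.read()[:500])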

muddiest point

My muddiest point has to do with writing XML. One slide of the PowerPoint said that two different ways of writing a tag were the same. I do not understand why, though.

Friday, October 24, 2008

week 9 comments

I commented here:
https://www.blogger.com/comment.g?blogID=7391116961538719622&postID=2330166871317716662&page=1
and here:
https://www.blogger.com/comment.g?blogID=5586031599791302355&postID=5941816636844715535&page=1

Wednesday, October 22, 2008

week 9 readings

All of the readings had mostly the same things to say about XML. Seeing as I knew nothing about XML, these readings were informative.
An Introduction to the Extensible Markup Language
XML is a subset of SGML (defined in ISO standard 8879:1986) that is designed to make it easy to interchange structured documents over the internet. XML can check that each component of a document occurs in a valid place within the interchanged data stream. XML does not require the presence of a DTD. The article goes on to list what XML is and is not. XML is based on the components of documents composed as a series of entities. XML clearly sets out to identify the boundaries of every part of the document. Systems that understand XML can provide users with lists of elements that are valid at each point in the document and will automatically add the required delimiters to the name to produce a markup tag. The article goes on to say how to define sets of tags, define attributes, and incorporate standard and nonstandard text elements. With XML you can also have illustrations and tables. The article had a lot to say and went into detail. I understood some of it, but having no experience with XML left me a bit confused.
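To picture what the article is describing, here is a tiny well-formed XML document of my own invention; the start- and end-tags mark the boundaries of every part of it:

<?xml version="1.0"?>
<memo>
  <to>Students</to>
  <from>Instructor</from>
  <body>The week 9 readings are posted.</body>
</memo>

A DTD could be added to say which elements may appear where, but as the article notes, XML does not require one.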

A Survey of XML Standards, Part 1
The article provides a summary of the most important XML technologies and discusses how they fit into the greater scope of things in the XML world. First the author mentions XML catalogs, which define a format for instructions on how an XML processor resolves XML entity identifiers into actual documents. Next are XML namespaces, which provide mechanisms for universal naming of elements and attributes in XML documents. Third is XML Base, which provides a means of associating XML elements with URLs in order to more precisely specify how relative URLs are resolved in relevant XML processing actions. XInclude provides a system for merging XML documents and is in development. The XML Information Set defines an abstract way to describe an XML document as a series of objects called information items. Canonical XML is a standard method for generating a physical representation of an XML document. The article mentions a few more, such as the XML Path Language (XPath), XPointer, XLink, and RELAX NG. This article was informative, as I did not know these things existed before I read it.
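As a quick illustration of the namespaces idea, each xmlns declaration below ties a prefix to a URI so that two elements that happen to share a name cannot clash (the Dublin Core URI is real; the inventory one is made up):

<book xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:inv="http://example.org/inventory">
  <dc:title>Digital Libraries</dc:title>
  <inv:title>Shelf copy 2</inv:title>
</book>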

Extending Your Markup
According to the article, a well-formed XML document starts with a prolog and contains exactly one element. DTDs are used to define the structure of XML documents. The article gets into a lot of the technical details of each thing, such as what they would look like in XML. Elements can be either nonterminal or terminal. Elements can also have zero or more attributes, and attributes can have different data types. Extensions to XML can include namespaces as well as more powerful addressing and linking capabilities. Namespaces are used to avoid name clashes. XML has three supporting languages that were mentioned in the article before: XLink, XPointer, and XPath. The author also mentions XSL, the Extensible Stylesheet Language.
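To see what a DTD looks like, here is a small made-up one for a memo document like the example above: memo is a nonterminal element (it contains other elements), the other three are terminal (they contain only text), and the priority attribute shows a declared data type with a default value:

<!DOCTYPE memo [
  <!ELEMENT memo (to, from, body)>
  <!ELEMENT to (#PCDATA)>
  <!ELEMENT from (#PCDATA)>
  <!ELEMENT body (#PCDATA)>
  <!ATTLIST memo priority (high|normal) "normal">
]>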

XML Schema Tutorial
XML Schema is an XML-based alternative to DTDs. The schema describes the structure of an XML document. The article mentions that an XML Schema:

defines elements that can appear in a document
defines attributes that can appear in a document
defines which elements are child elements
defines the order of child elements
defines the number of child elements
defines whether an element is empty or can include text
defines data types for elements and attributes
defines default and fixed values for elements and attributes

XML Schema supports data types and uses XML syntax. It also secures data communication. They give lots of examples, and the site is easy to navigate because you can go back and forth easily and click on the section you want to read.
The article also mentions that XML Schema has a lot of built-in data types. The most common types are:

xs:string
xs:decimal
xs:integer
xs:boolean
xs:date
xs:time

The article goes into detail on simple and complex data types. It provides a lot of info for people who need to learn about XML Schemas.
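Here is a small sketch of what such a declaration looks like, using a few of the built-in types; the element names are my own, and the fragment would sit inside an xs:schema element that binds the xs: prefix:

<xs:element name="note">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="to" type="xs:string"/>
      <xs:element name="sent" type="xs:date"/>
      <xs:element name="pages" type="xs:integer"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>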

muddiest point

My muddiest point is HTML. It seems pretty straightforward, and I hope it is. I am just worried that once I start the final assignment things won't go so smoothly.

Tuesday, October 21, 2008

koha

Here is the link to my shelf: http://pitt4.kohawc.liblime.com/cgi-bin/koha/bookshelves/shelves.pl

Friday, October 10, 2008

week 8 comments

Here are my comments: https://www.blogger.com/comment.g?blogID=4181925387762663697&postID=6842639853632380331&page=1

https://courseweb.pitt.edu/webapps/portal/frameset.jsp?tab_id=_2_1&url=%2Fwebapps%2Fblackboard%2Fexecute%2Flauncher%3Ftype%3DCourse%26id%3D_9047_1%26url%3D

muddiest point

I am unsure as to whether we have to be here for fast track weekend or not. It has been mentioned in my other classes that I have to be there, but this is the first I have heard mention of it in this class.

Wednesday, October 8, 2008

week 8 readings

W3Schools HTML Tutorial
This site was very informational to me, as I knew nothing about HTML before reading it. HTML stands for HyperText Markup Language. An HTML file is a text file containing small markup tags, and these markup tags tell the web browser how to display the page. HTML tags come in pairs and are not case sensitive. The article showed me a lot of HTML tags, such as how to make text bold or italic and how to designate certain things as headings. For example, <p>Some text here</p> is how one would make a paragraph in HTML.
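Putting a few of those paired tags together, a minimal page might look like this (the wording is my own):

<html>
<body>
  <h1>My Heading</h1>
  <p>This paragraph has <b>bold</b> and <i>italic</i> text.</p>
</body>
</html>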

HTML Cheat Sheet (Webmonkey)
This website provided mainly the same overview as the first site did. It went into a lot of detail on how to set different body attributes, such as the background and text color. For example, <b> would create bold text and <i> would create italic text.

W3Schools Cascading Style Sheets Tutorial
I had not heard of CSS before reading this article, so this was all new to me. Styles define how to display HTML elements, and styles are normally stored in style sheets. Styles solve a problem. "As the two major browsers - Netscape and Internet Explorer - continued to add new HTML tags and attributes (like the <font> tag and the color attribute) to the original HTML specification, it became more and more difficult to create Web sites where the content of HTML documents was clearly separated from the document's presentation layout.
To solve this problem, the World Wide Web Consortium (W3C) - the non profit, standard setting consortium, responsible for standardizing HTML - created STYLES in addition to HTML 4.0" (W3Schools CSS Tutorial). CSS syntax is made up of three parts: selector, property, and value. The selector is normally the HTML element/tag that one wishes to describe, the property is the attribute you wish to change, and each property takes a value. The property and value are separated by a colon and surrounded by curly braces. You can group selectors. With the class selector you can define different styles for the same type of HTML element. You can also define styles for HTML elements with the id selector.
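For example, here are some made-up rules showing the selector { property: value } pattern, grouped selectors, a class selector, and an id selector:

p { color: blue; }
h1, h2 { text-align: center; }      /* grouped selectors share one rule */
p.note { font-style: italic; }      /* class selector: only <p class="note"> elements */
#header { background-color: gray; } /* id selector: the element with id="header" */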

Beyond HTML
This was a rather long article about the Georgia State University Library. The purpose of the article was to "report on the content management system designed to manage the 30 web-based research guides developed by the subject liaison librarians at the Georgia State University Library" (Goans, Leach, Vogel 29). The article starts out by saying how things used to be: "In 2000 the GSU Library had a FrontPage based web site with minimal login security, site architecture, planning, and administrative and editorial processes in place. A single librarian served in the role of network coordinator, server administrator, and web site manager" (Goans, Leach, Vogel). This library does not seem to have had a very organized system, seeing as each librarian had a different number of guides, some having a few and some having a lot, all in different styles. In 2000 the library's first web development librarian was hired to improve the library system. "Implementing web site security for FrontPage authors and exploring web site infrastructure development were among the first steps he undertook to improve the library's web presence. He also built MySQL applications to manage the list of databases and electronic journals as well as several major photograph collections for the special collections department" (Goans, Leach, and Vogel 30). The article then goes on to describe content management systems and describes in detail the GSU Library CMS technology. Later, GSU did a survey of the CMS. The variety of responses told them that CMS was still an emerging technology that not everyone used.

Friday, October 3, 2008

jing part 2

Here are my Flickr URLs:

jing project

Here is my screencast.com URL: http://www.screencast.com/users/susan611/folders/Jing/media/d2cf94be-f1e4-4b5e-abcb-e8b9c09bedbf

my flickr images will come in a later blog.

week 7 comments

I commented here: http://ab2600.blogspot.com/ but need to wait until she approves the comment.

I also commented here: https://www.blogger.com/comment.g?blogID=5477147704203276697&postID=674748430059711544&page=1

Tuesday, September 30, 2008

muddiest point week 6

My muddiest point for this week is network topology. The definition includes a word, "nodes," that I think might have been used in class before, but I am unsure as to exactly what it means.

week 7 readings

How Internet Infrastructure Works
The article starts out by mentioning that the internet is a global collection of networks both big and small. When you connect to your ISP you become part of their network, and that ISP may connect to a larger network, so the internet is a network of networks. They mention something called a POP that I have never heard of. "The POP is a place for local users to access the company's network, often through a local phone number or dedicated line" (Tyson). "...there are several high-level networks connecting to each other through Network Access Points or NAPs" (Tyson). The router has two jobs: first, it ensures that information doesn't go where it's not needed, which is crucial for keeping large volumes of data from clogging the connections of "innocent bystanders"; second, it makes sure that information does make it to the intended destination. Next the article describes the network backbone as "fiber optic trunk lines" (Tyson). Another term in the article is IP address; IP stands for Internet Protocol, and an IP address is a unique number identifying every machine on the internet. In 1983 the Domain Name System was created because of the overabundance of IP addresses. Whenever you use the web or send an e-mail you are using a domain name, and a URL (uniform resource locator) contains a domain name. Another important part of the internet is the server and the client; internet servers make the internet possible. The article defines both: "The machines that provide services to other machines are servers. And the machines that are used to connect to those services are clients" (Tyson). Lastly the article mentions ports and HTTP. "Any server machine makes its services available using numbered ports -- one for each service that is available on the server" (Tyson). "Every Web server on the Internet conforms to the hypertext transfer protocol (HTTP)" (Tyson).
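To tie ports and HTTP together, this is roughly what a browser (the client) sends to a web server listening on port 80 when it asks for a page; the host name is a placeholder:

GET /index.html HTTP/1.1
Host: www.example.com

The server then answers with a status line such as HTTP/1.1 200 OK, followed by the page itself.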

Inside the Google Machine
This video took the viewer inside the Google machine. In the beginning it showed a globe of the world with dots on it; the dots represent people and their search queries. It showed how people in Africa did not use computers as much. It then showed a second's worth of searches, and it seemed so crazy that there were so many people searching at one time. They also mentioned Google Grants, which serves charities throughout the world, and how they were setting up the Google Foundation. They also mentioned their social networking site, called Orkut, that one of the Google employees created. Google said they have a policy that 20% of your time working at Google should be spent doing things you want to do. Google Answers is a service where you pay to ask a question and someone goes off to find the answer for you. Froogle is their way to search for shopping. I was interested to find out that Blogger was a Google thing. Lastly, they talk about AdSense, where they put ads on sites that are relevant to what you are looking for. One ending thought they leave us with is that Google is free for people all over the world and does not charge for search results.

Dismantling Integrated Library Systems
The article starts out by telling us that no one intended to dismantle the ILS. "For 25 years, the ILS proved a trusty tool for solving everyday library problems. First loosely integrated, then more fully so, it finally arrived at a plateau of innovation, until the early 1990s, when librarians cautiously embraced the web as their new gateway to information. Inevitably, the old technology of the ILS clashed with new web technologies." The article then mentions how hard it is to choose a new integrated library system from a vendor.

To maintain a competitive advantage so people will choose your ILS takes work. "Endeavor Information Systems took the research library world by storm in the mid-1990s with its Voyager system that worked with the Oracle RDBMS. Before long, Oracle was in almost every academic RFP."

"Jerry Kline, president and CEO of Innovative Interfaces, argues that achieving the 'same intellectual logic' that went into older systems is the key." However, the ILS cannot always do everything. As the article mentions, "When libraries try to meet new needs with technology, such as federated searching, their ILS can rarely answer the call. Libraries are forced to look at new technology and create a solution themselves or purchase a standalone product." The article then mentions how new systems now dominate the library world: "Portals, metasearch tools, reference linking software, radio frequency identification tags (RFIDs), and digital asset management systems now dominate the world of library automation."

Some of these products will be able to be sold to nontraditional customers such as governments and museums. The problem for libraries, though, is that better costs more. "Several well-intentioned research libraries attempted over the past decade to build web-based solutions to the problems ILS didn't satisfy." "However impractical as a complete solution, the open source movement has demonstrated the value of open standards and protocols. Through XML, web services, OAI-PMH (Open Archives Initiative--Protocol for Metadata Harvesting), librarians believe they can create interoperability among systems, whether vendors' or their own." "Before interoperability can rescue libraries, however, the vendors need to redirect their efforts. Not only has ILS technology reached its plateau, but the market itself is almost completely saturated." We are left at the end with a final thought: "Library vendors have two choices. They can continue to maintain large systems that use proprietary methods of interoperability and promise tight integration of services for their customers. Or, they can choose to dismantle their modules in such a way that librarians can reintegrate their systems through web services and standards, combining new with the old modules as well as the new with each other."

Monday, September 29, 2008

cite u like url

Here is my URL for the Zotero/CiteULike assignment:

http://www.citeulike.org/user/susan611

Tuesday, September 23, 2008

week 6 readings

YouTube Video
This video was very straightforward; it mentioned all the different types of connections and then explained them. The different types mentioned were PAN, a personal area network for a single PC; LAN, a local area network for a building like your house; CAN, a campus area network for a college; WAN, a wide area network; and MAN, a metropolitan area network.

Wikipedia on Computer Networks
This article was similar to the YouTube video in that it explained the same networks the video did. It did, however, mention one other one that the video neglected to talk about: the global area network, which is a "model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc." (wiki pg 4). The wiki also mentioned major categories used to classify networks, such as scale, connection method, functional relationship, and network topology. The one part of the article that I found interesting is the definition of the variants of internetwork, as I never knew quite what each one meant. Lastly, the article mentions basic hardware components, such as a router that is used to "forward data packets between networks using headers and forwarding tables to determine the best path to forward packets."

Wikipedia on Local Area Networks
Compared to the wiki on computer networks, this one was a lot shorter. It starts out by mentioning the history of the LAN, and I was amazed to see how long it has been around. It then focuses on the LAN for the PC and mentions a few technical aspects. From 1983 onward, the coming year would regularly be declared "the year of the LAN."

Management of RFID in Libraries
Before reading this article I had no idea what RFID even was. The RF stands for radio frequency and the ID means identifier. The article described that the "tag consists of a computer chip and an antenna, often printed on paper or some other flexible medium" (RFID pg 1). The RFID chip has the potential to carry lots of information. There are hundreds of RFID products on the market, as the article mentioned. I find it interesting that some librarians question whether they should consider using RFID because of privacy issues; it seems like it could be of help to them, so why not use it? It seems quite useful that libraries can use RFID as a security mechanism, and it makes me wonder why most libraries haven't done this already. The fact that RFID tags can be shielded, as the article mentioned, makes me worry though. I guess this is the reason that libraries are so hesitant to adopt this technology. There do seem to be a lot of good uses for RFID in libraries though, such as the tags being able to be read while the book is on the shelf.

muddiest point

My muddiest point for the week is this:

In the slides it says that raster images = bitmaps. This does not make sense to me because I do not know what a bitmap is.


Wednesday, September 17, 2008

week 5 comments

https://www.blogger.com/comment.g?blogID=1129785935180596689&postID=3512837352940137481&page=1

https://www.blogger.com/comment.g?blogID=3799366651359702810&postID=3384647508895216250&page=1

week 5 readings

Wikipedia
The Wikipedia article provided a good overview of data compression for me because I did not know anything about it. The article says that compression is useful because "it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth" (pg 1). This article mentions lossless vs. lossy compression. Lossless compression is when the sender's data can be compressed without any loss of data. With lossy compression, however, the data once it has been compressed is not the same as the original data. The wiki also gives examples of these kinds of compression. One example of lossless data compression is called run-length encoding, where "large runs of consecutive identical data values are replaced by a simple code with the data value and the length of the run" (pg 2). I also did not know that lossy digital compression is used in digital cameras and found that interesting.
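To see run-length encoding in action, here is a tiny Python sketch of my own (not from the article):

def run_length_encode(data):
    # Replace each run of identical values with a (value, run length) pair
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1          # extend the current run
        else:
            encoded.append([value, 1])   # start a new run
    return encoded

# "WWWWBBB" becomes [['W', 4], ['B', 3]]; the encoding is lossless
# because the original string can be rebuilt exactly from the pairs.
print(run_length_encode("WWWWBBB"))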

YouTube and Libraries
I found this article to be very interesting. I have been using YouTube for a while to watch videos and never saw the potential it had for libraries. This article made some good points on how it can be used to benefit the library. If the library were to post instructional videos for patrons to watch, it could prove useful: patrons would be able to find out what they needed to know without bothering a librarian, and the librarian would have time to do other important tasks.
Imaging Pittsburgh
This article was a nice break from all the ones we read that are full of big technical terms I do not understand. I found this one quite easy and enjoyable to read. The project seemed like a big undertaking, though, because the article mentioned a minimum of 7,000 pictures, and by the time it was done there were over 10,000.

The last article, on data compression basics, was long and confusing to read in parts. The author said we did not have to read the notes in the boxes if we did not want to, and I figured I would because it could only help; reading those notes, though, made it even more confusing. The article covered a lot of the basics that the wiki did, such as the difference between lossless and lossy data compression; however, it went into a lot more detail on the topic. The article was very hard to keep up with because I know so little about computers and it was very technical. The article talked about run-length encoding, which replaces runs with a single character followed by the length of the run. Part two had a section on color space that was confusing. The author made a statement in the beginning that most of us should have seen certain terms before, and I never had, which made it harder for me.

muddiest point

My muddiest point is about databases. We mentioned database software for Windows, but I was wondering if there is any out there for Macs?

Friday, September 12, 2008

week 4 comments

https://www.blogger.com/comment.g?blogID=4181925387762663697&postID=4607901818617335487&page=1

https://www.blogger.com/comment.g?blogID=4736393327020365268&postID=2541635144849515509&page=1

Wednesday, September 10, 2008

week 4 readings

Database Wikipedia
I do not know much about computers, so I did not know anything about databases before I read this article. The article was very informative, as I did not know there was more than one kind of database management system. Three different types are the hierarchical model, the network model, and the relational model. In the hierarchical model, data is organized in a tree-like structure (wiki pg 2). In the network model, records can participate in any number of relationships (wiki pg 3). In the relational model, data about a particular entity is represented in tables of rows and columns.


Introduction to Metadata
I had no idea what metadata was before I read this article, so it was informative to me. It was a bit confusing, though, because it seemed like metadata could be a lot of different things and mean a lot of different things, and I did not understand how. It says in the first line that metadata is "data about data," which I took to mean that metadata is something that gives us more information about a thing. I was surprised to see how many different types of metadata there are. The end caught my attention because I think it is interesting how metadata may be able to be used in an almost infinite number of ways to meet the needs of non-traditional users (pg 7).

An Overview of the Dublin Core Model
I did not quite understand this article. It said that the DCMI is "an international effort designed to foster consensus across disciplines for discovery-oriented description of diverse resources" (pg 1). I did not quite get what that meant. I also did not understand the text in the article that was different from the normal text, because it seemed like some computer code or computer language.
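That different-looking text was most likely sample metadata records. For instance, a simple Dublin Core description written in XML looks something like this (the values are my own made-up example; dc: is the standard Dublin Core namespace prefix):

<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Imaging Pittsburgh</dc:title>
  <dc:creator>University of Pittsburgh</dc:creator>
  <dc:date>2008</dc:date>
  <dc:type>Image</dc:type>
</record>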

muddiest point #3

My muddiest point is from the PowerPoint slide entitled "What do OSs do." It says that utility management manages the I/O devices, but I am confused because I do not know what I/O devices are.

Friday, September 5, 2008

flickr url

Here is my flickr url: http://www.flickr.com/photos/30232031@N05/

All the photos for the assignment are up.

Thursday, September 4, 2008

blog comments

Here is a link to my blog comments:

https://www.blogger.com/comment.g?blogID=997037450799707033&postID=4756828819278602490&page=1


https://www.blogger.com/comment.g?blogID=4527425204800506090&postID=8427445099349409461&page=1

muddiest point #2

The muddiest point for me is the assignment due Tuesday. I know we have to post the pictures to a flickr account and that they have to be three different sizes. The part that confuses me is how to change the size of the picture.

Wednesday, September 3, 2008

week 3 readings

An Update on the Windows Road Map
This first article was rather short and very straightforward. It was an e-mail written by Bill Veghte, the Senior VP at Microsoft. The e-mail talked about the new Windows Vista and touched on how it affected Windows XP and its users. I have always been a Windows user but have not yet switched to Vista due to hearing some negative things about it from people. I also have a lot of software on my PC that I use for school and other everyday things, and I am worried about whether some of these things, including any games I would want to play, would be compatible with Vista. When April 2014 rolls around and they no longer offer updates for us XP users, I, like many others, may finally have to bite the bullet and switch to Vista if we want to make sure we are secure and can receive technical support when needed.

What is Mac OS X
I did not think this would be a hard article to read. I expected it to talk a bit about the history of the Apple company and how the Mac came to be invented, and it did do just that in the beginning. I also expected it to go into the differences between Mac and Windows and the different kinds of software available for a Mac, such as iMovie and Opera. When I saw the article I expected a general overview of the Mac operating system; however, what I got was a complex reading with many terms I did not understand. It was not general at all and instead was more in-depth and complex. I find this article interesting for one reason: the author states in the beginning that he wrote this document as a supplement to a speech he gave to people who knew nothing about Macs to begin with, yet in my opinion the article is far too technical for a first-time Mac user. My guess is that these people have some experience with computers, because the author states that "...the implicit assumption is that you are familiar with fundamental concepts of one or more of BSD, Mach, UNIX, or operating systems in general." I do not know if this is just me or not, but I feel that since I am taking this class it means I am not as educated about PCs as others and would not know the things this author is assuming I am familiar with; therefore, the article goes right over my head. Even with basic things, like when he used the word kernel, I immediately thought corn kernel and had to look it up to see how it related to computers. Knowing nothing about Macs and only the basics of computers, I had a tough time following along, especially the section on Open Firmware and others that showed boxes with computer text in them, because I had no clue what the text meant or what was being accomplished there. After muddling my way through this article I can see how complex a Mac really can be.

Mac OS X - Wikipedia
I enjoyed this article a lot more than the other one on the Mac; it was a lot easier for me to comprehend. This article mentions a software development tool called Xcode, which was not talked about in the other article; I did not quite understand what that software does. I saw in the article that "The APIs that Mac OS X inherited from OpenStep are not backward compatible with earlier versions of Mac OS," which sounds as if it could be a pain for people. The Carbon API sounds like a great idea, because applications written with it can be used in both the old Mac OS 9 and Mac OS X, thus making the transition smooth, like the article stated. One interesting thing I learned from this article and the other is that the X in Mac OS X is the Roman numeral ten and not the letter "X" like I thought it was; I knew nothing about Macs before reading these articles, so that is the reason I did not know this. I also found another interesting point in the article: it said that Mac OS X versions are named after big cats. What will happen when they run out of big cat names? Will they reuse names they already used, or will a Mac OS 11 be out by then?

Linux
The Linux book was interesting to read, seeing as I had no previous experience with Linux at all. I have seen the Red Hat computers in the computer lab that run Linux; I tried to use one once, but I could not understand it. Reading this book cleared things up for me a bit, but Linux still seems harder to learn. I like how easy Windows is to navigate. I found it interesting how Linus Torvalds, who was only a college student, was able to do so much for Linux. Linux does seem like a good operating system that could be comparable to Windows or Mac OS X. While reading, I did notice that the book had a lot of pros for using Linux. The fact that it is free is a pretty cool thing that should have people flocking to it, given how overpriced an installation CD for Mac or Windows can be, but Linux does seem difficult to navigate and does not look quite like Mac or Windows, which makes people shy away from it. I feel a lot of people do not give Linux a fair shot, and neither have I; if I really took the time to learn it, I might actually end up liking it. What seemed confusing to me, though, is all the different distributions and having to decide which one to choose. I also liked the idea of open source, where people can modify the software, and when there is a problem it can be fixed as soon as possible rather than waiting for a company to do it. The only problem is that people shy away from Linux because not all programs support it, and people want to be able to play their games and do other activities, such as listening to music, on their PCs. I noticed that section 11.2.2 says, "Some distributions don't allow you to play MP3's without modifying your configuration, this is due to license restrictions on the MP3 tools. You might need to install extra software to be able to play your music." Neither I nor anyone else wants to do extra work just to do simple, basic tasks on our computers. Linux overall seems interesting, but I'll take my Windows OS any day over Linux.

Monday, September 1, 2008

comment links

https://www.blogger.com/comment.g?blogID=5629900467800061574&postID=7803994879703954736

https://www.blogger.com/comment.g?blogID=5720842264846496247&postID=3503654554221870264

Saturday, August 30, 2008

week 2 readings

The Lynch article had some interesting points. I definitely agree with him that we need to go out into the world prepared to handle different kinds of technology. I also agree with the main point he makes when he says that teaching information technology should be part of the curriculum; I feel elementary and middle schools should have computer classes that teach kids the basics of how to use them. The article mentioned how important it is for people to have good information literacy and to know how to work the newest technology. While I do agree with that, I also can't help but think of older people, 60+, who have lived most of their lives without the aid of technology and have no need for it now in old age. That could be the one exception where information literacy and an understanding of technology are not needed.

The Lied Library article was the one I enjoyed the most. I work at an academic library myself, but only as a student worker. Working in an academic setting, I thought I knew how much it took to make a library function well. This article opened my eyes, though, to all the behind-the-scenes work I do not see going on, such as upkeep of the computers and updating them when needed. I also did not know how expensive it was to run a library that technologically advanced. When they said that the costs for hardware and operating system support alone ran into the tens of thousands, I was shocked. This article made me appreciate the school libraries more now that I know how much they really do.

The OCLC article touched on a very important issue for libraries today, and that is digitization. What happens when people can get everything they need online and have no need for "containers," i.e., books, CDs, journals? Will there still be a need for libraries? One interesting sentence that I noted in this reading is that "Libraries need to find ways to deliver quality content to mobile devices." While I do think this is a very good idea, I can't help but imagine how long it would take to make this happen, and at what price it would come for the libraries to maintain this type of system as well. In this ever-changing digital world, libraries need to find a way to keep up with it all.

Thursday, August 28, 2008

my comments for the week

Here are the links to my two comments for the week:


https://www.blogger.com/comment.g?blogID=5720842264846496247&postID=6601010638085433542


https://www.blogger.com/comment.g?blogID=3413864360557025238&postID=4591943210401317937&page=1

Wednesday, August 27, 2008

muddiest point #1

The muddiest point so far to me was this blog. I am new to blogging and do not know much about it. I do not know what a feed really is and will have to do a bit of my own research to figure this blog thing out.

week 2 readings

I thought that I was fairly knowledgeable when it came to the various aspects of computers, but after reading the Wikipedia article on computer hardware I realized how little I really know. I did not know what a motherboard was, or other items mentioned. The article was a good, in-depth look at computer hardware and covered a lot. Parts of it were confusing to me, especially the section on internal buses; it used many acronyms, such as PCI, AGP, and VLB, that I did not understand.

The Computer History Museum website was interesting to me. It was interesting to see the history of the internet; I had no clue that the predecessor of the internet started as early as the 1960s. I wasn't even aware that technology that advanced was being invented that early; I thought the ARPANET wasn't being tested until the late 70s into the 80s, but I was wrong. The history of semiconductors was also interesting. This was also new to me, so I enjoyed learning something new.

The Moore's Law Article

This article thoroughly confused me. I had to read it more than once to fully understand it, and I also had to look at other sites as well as look up some of the words to completely grasp the concept. I had to look up the words "transistor" and "semiconductor" to understand what they meant so I could proceed in writing this. I did not read all of the blogs yet, but one or two that I did read seem to agree with me on some ideas I had when reading this. One thing I noticed was the second sentence: "The number of transistors that can be inexpensively placed on an integrated circuit is increasing exponentially, doubling approximately every two years." I am wondering how big this integrated circuit is that it can keep getting these transistors placed on it. Will there ever be a point when it cannot hold any more transistors, and thus Moore's law will be no more? The article seemed unclear on that, as it seemed to make a case for both sides of the issue, which left me wondering. I also found the section on the futurists interesting. In this section a man named Kurzweil commented on how some new type of technology will replace current integrated circuit technology. One can only imagine what that would be, and it leaves me anxiously awaiting all the new technological advances to be made in the future.