    這廝

    observing

      48 Posts :: 3 Stories :: 3 Comments :: 0 Trackbacks


    posted @ 2012-04-08 00:21 cnbarry  Reads (292) | Comments (0)

    Money site -> tier 1 -> SB blast (blog comments)
    But this is far from enough:
    add daily posting on forums, blogs, bookmarks, and article directories.
    posted @ 2012-04-04 15:29 cnbarry  Reads (379) | Comments (0)

    They say machines were made by people but have no feelings of their own; when we say someone is "like a machine", we mean they are wooden and emotionless. I, however, am convinced that my home modem, router, and computer do have feelings! The home internet connection has been driving me crazy for days. When I plug in the network cable, the strangest thing happens: the modem wants me to stand up! Otherwise it turns off the ADSL light, which means no internet. The moment I sit down, the light goes out; the moment I stand up, it comes back on!!! Sit and it dies, stand and it works. I have played this game with it countless times, and it happens every single time, without exception!!! So frustrating! Actually it is not that I can never sit down, but if I want to sit, I have to put the computer and the modem very close together (no, I am not on wireless), and my face must be higher than both the modem and the computer. I have to look down on them! How can such a problem even exist? Does a modem also need people to show it some sincerity? I really cannot figure it out: the modem was just replaced and the router is fine, yet it insists on being looked down at while in use!!! Heavens! What am I supposed to say!!!
    posted @ 2012-03-11 18:08 cnbarry  Reads (189) | Comments (0)

        Only registered users can read this article after logging in. (Read full text)
    posted @ 2012-02-22 11:26 cnbarry  Reads (52) | Comments (0)

    Default Ports:

    • SMTP AUTH: Port 25 or 587 (some ISPs block port 25)
    • SMTP StartTLS Port 587
    • SMTP SSL Port 465
    • POP Port 110
    • POP SSL Port 995

    Server settings (SMTP = outgoing messages, POP3 = incoming messages):

    Googlemail/Gmail
    SMTP: smtp.gmail.com (SSL port 465, StartTLS port 587)
    POP3: pop.gmail.com (SSL port 995)
    Please make sure POP3 access is enabled in the account settings: log in to your account and enable POP3.

    Yahoo Mail
    SMTP: smtp.mail.yahoo.com (SSL port 465)
    POP3: pop.mail.yahoo.com (SSL port 995)

    Yahoo Mail Plus
    SMTP: plus.smtp.mail.yahoo.com (SSL port 465)
    POP3: plus.pop.mail.yahoo.com (SSL port 995)

    Yahoo UK
    SMTP: smtp.mail.yahoo.co.uk (SSL port 465)
    POP3: pop.mail.yahoo.co.uk (SSL port 995)

    Yahoo Deutschland
    SMTP: smtp.mail.yahoo.de (SSL port 465)
    POP3: pop.mail.yahoo.de (SSL port 995)

    Yahoo AU/NZ
    SMTP: smtp.mail.yahoo.com.au (SSL port 465)
    POP3: pop.mail.yahoo.com.au (SSL port 995)

    O2
    SMTP: smtp.o2.ie / smtp.o2.co.uk
    POP3: pop3.o2.ie / pop3.o2.co.uk

    AT&T
    SMTP: smtp.att.yahoo.com (SSL port 465)
    POP3: pop.att.yahoo.com (SSL port 995)

    NTL @ntlworld.com
    SMTP: smtp.ntlworld.com (SSL port 465)
    POP3: pop.ntlworld.com (SSL port 995)

    BT Connect
    SMTP: mail.btconnect.com
    POP3: pop3.btconnect.com

    BT Openworld & BT Internet
    SMTP: mail.btopenworld.com / mail.btinternet.com
    POP3: mail.btopenworld.com / mail.btinternet.com

    Orange
    SMTP: smtp.orange.net / smtp.orange.co.uk
    POP3: pop.orange.net / pop.orange.co.uk

    Wanadoo UK
    SMTP: smtp.wanadoo.co.uk
    POP3: pop.wanadoo.co.uk

    Hotmail
    SMTP: smtp.live.com (StartTLS port 587)
    POP3: pop3.live.com (SSL port 995)

    O2 Online Deutschland
    SMTP: mail.o2online.de
    POP3: pop.o2online.de

    T-Online Deutschland
    SMTP: smtpmail.t-online.de (AUTH) / securesmtp.t-online.de (SSL)
    POP3: popmail.t-online.de (AUTH) / securepop.t-online.de (SSL)

    1&1 (1and1)
    SMTP: smtp.1and1.com (StartTLS port 25 or 587)
    POP3: pop.1and1.com (SSL port 995)

    1&1 Deutschland
    SMTP: smtp.1und1.de (StartTLS port 25 or 587)
    POP3: pop.1und1.de (SSL port 995)

    Comcast
    SMTP: smtp.comcast.net (port 587)
    POP3: mail.comcast.net

    Verizon
    SMTP: outgoing.verizon.net (port 587)
    POP3: incoming.verizon.net
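    As a sketch of how the settings above are used from code, here is the Gmail entry driven through Python's standard smtplib. The helper names are my own, and nothing connects to the network until send_message is actually called:

```python
import smtplib
import ssl

# Gmail settings, copied from the list above.
GMAIL = {
    "smtp_host": "smtp.gmail.com",
    "smtp_ssl_port": 465,        # implicit SSL from the first byte
    "smtp_starttls_port": 587,   # plain connection upgraded via STARTTLS
    "pop_host": "pop.gmail.com",
    "pop_ssl_port": 995,
}

def smtp_connection_plan(settings, starttls=False):
    """Return the (smtplib class name, port) pair for the chosen mode."""
    if starttls:
        return ("SMTP", settings["smtp_starttls_port"])
    return ("SMTP_SSL", settings["smtp_ssl_port"])

def send_message(settings, user, password, msg, starttls=False):
    """Send msg (an email.message.EmailMessage) through the provider."""
    ctx = ssl.create_default_context()
    if starttls:
        with smtplib.SMTP(settings["smtp_host"],
                          settings["smtp_starttls_port"]) as srv:
            srv.starttls(context=ctx)   # upgrade to TLS, then authenticate
            srv.login(user, password)
            srv.send_message(msg)
    else:
        with smtplib.SMTP_SSL(settings["smtp_host"],
                              settings["smtp_ssl_port"], context=ctx) as srv:
            srv.login(user, password)
            srv.send_message(msg)
```

    Retrieval is symmetric: poplib.POP3_SSL(GMAIL["pop_host"], GMAIL["pop_ssl_port"]) matches the SSL port 995 entries in the list.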

    posted @ 2012-02-13 15:12 cnbarry  Reads (818) | Comments (0)

    Target:
    Deployment environment: MSSQL 2005 + XP Pro SP3 + IIS 5.1
    Materials: a .NET website plus a .bak database file backed up from MSSQL 2005

    Detail:
    1. Configure and install the environment.
    The only first-hand materials are the website developed in .NET and the MSSQL 2000 backup database.
    Problem 1: installing IIS. (In four years I have rarely used a computer that already had a local IIS server installed, unless it belonged to a teacher or a developer.) I will not discuss machines that already have the IIS configuration files and services; assume IIS is the one Windows component not yet installed and that no installation files have been prepared. Having installed the full IIS package successfully before, I found it relatively painless this time.

    First: choose the right installation package. Two points to note. One, check the OS first: some non-Professional editions of XP do not offer the IIS service at all, and you will have to work around that yourself; what matters is that IIS shows up under Add/Remove Windows Components. Two, download the IIS package that matches your version: my XP Pro SP3, for example, can run IIS 5.1, and the files it needs cannot be found in other versions such as IIS 6. (No screenshots for this part.)

    Next: choose the right version. Configure the server's default web site, including adding ASP file support. One point is critical: for a site built on .NET you must first install .NET Framework 2.0 or 4.0 on the machine, then select .NET 2.0 or 4.0 in the ASP.NET configuration. If you still get a "Server Application Error" after installing .NET, the server components are clearly misconfigured: either required system files were not found, or the installed .NET (or another required component) is the wrong version. If files are missing, reinstall the matching version; if the version is wrong, try removing all the other .NET versions (.NET 3, .NET 4). You can check whether a given version is the one currently in use in the Services manager (run services.msc).

    With both IIS and .NET configured, local .htm files load without problems, and so do .aspx files.

    2. Install the database.
    Why install a database? As I said, everything here starts from someone else's work. The database is MSSQL 2005, so first I went to the official site to download SQL Server; it was too slow, so I switched to a domestic download site. I first downloaded Microsoft SQL Server Management Studio Express; once installed it was entirely in English and could not restore the backup. A web search showed I had the wrong Express edition: the one I had installed could only browse databases and had no features for creating databases or DB management. Switching to another, larger download finally solved the problem.

    That's all for now. Think more; don't flail around blindly.
    --Barry
    P.S.: Detailed illustrated walkthroughs of converting a .bak file backed up from an MSSQL 2000 database can be found online.
    See: http://www.cnblogs.com/dlwang2002/archive/2009/03/20/1417953.html
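    For reference, the core of such a restore is a single RESTORE DATABASE statement. A minimal sketch in Python follows; the database name, file path, and connection details are hypothetical, and the statement is only built here, not executed:

```python
def build_restore_sql(db_name, bak_path):
    """Build the T-SQL that restores a database from a .bak file.

    WITH REPLACE overwrites an existing database of the same name,
    which is usually what you want when redeploying a site backup.
    """
    return (
        f"RESTORE DATABASE [{db_name}] "
        f"FROM DISK = N'{bak_path}' WITH REPLACE"
    )

# Hypothetical usage (executing it needs a driver such as pyodbc):
#   import pyodbc
#   conn = pyodbc.connect("DRIVER={SQL Server};SERVER=.\\SQLEXPRESS;"
#                         "Trusted_Connection=yes", autocommit=True)
#   conn.execute(build_restore_sql("MySite", r"C:\backup\mysite.bak"))
sql = build_restore_sql("MySite", r"C:\backup\mysite.bak")
```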

    posted @ 2012-02-12 20:35 cnbarry  Reads (1021) | Comments (0)

    1. Introduction

    (Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.) 
    The web creates new challenges for information retrieval. The amount of information on the web is growing rapidly, as well as the number of new users inexperienced in the art of web research. People are likely to surf the web using its link graph, often starting with high quality human maintained indices such as Yahoo! or with search engines. Human maintained lists cover popular topics effectively but are subjective, expensive to build and maintain, slow to improve, and cannot cover all esoteric topics. Automated search engines that rely on keyword matching usually return too many low quality matches. To make matters worse, some advertisers attempt to gain people's attention by taking measures meant to mislead automated search engines. We have built a large-scale search engine which addresses many of the problems of existing systems. It makes especially heavy use of the additional structure present in hypertext to provide much higher quality search results. We chose our system name, Google, because it is a common spelling of googol, or 10^100, and fits well with our goal of building very large-scale search engines.

    1.1 Web Search Engines -- Scaling Up: 1994 - 2000

    Search engine technology has had to scale dramatically to keep up with the growth of the web. In 1994, one of the first web search engines, the World Wide Web Worm (WWWW) [McBryan 94] had an index of 110,000 web pages and web accessible documents. As of November, 1997, the top search engines claim to index from 2 million (WebCrawler) to 100 million web documents (from Search Engine Watch). It is foreseeable that by the year 2000, a comprehensive index of the Web will contain over a billion documents. At the same time, the number of queries search engines handle has grown incredibly too. In March and April 1994, the World Wide Web Worm received an average of about 1500 queries per day. In November 1997, Altavista claimed it handled roughly 20 million queries per day. With the increasing number of users on the web, and automated systems which query search engines, it is likely that top search engines will handle hundreds of millions of queries per day by the year 2000. The goal of our system is to address many of the problems, both in quality and scalability, introduced by scaling search engine technology to such extraordinary numbers.

    1.2 Google: Scaling with the Web

    Creating a search engine which scales even to today's web presents many challenges. Fast crawling technology is needed to gather the web documents and keep them up to date. Storage space must be used efficiently to store indices and, optionally, the documents themselves. The indexing system must process hundreds of gigabytes of data efficiently. Queries must be handled quickly, at a rate of hundreds to thousands per second.

    These tasks are becoming increasingly difficult as the Web grows. However, hardware performance and cost have improved dramatically to partially offset the difficulty. There are, however, several notable exceptions to this progress such as disk seek time and operating system robustness. In designing Google, we have considered both the rate of growth of the Web and technological changes. Google is designed to scale well to extremely large data sets. It makes efficient use of storage space to store the index. Its data structures are optimized for fast and efficient access (see section 4.2). Further, we expect that the cost to index and store text or HTML will eventually decline relative to the amount that will be available (see Appendix B). This will result in favorable scaling properties for centralized systems like Google.

    1.3 Design Goals

    1.3.1 Improved Search Quality

    Our main goal is to improve the quality of web search engines. In 1994, some people believed that a complete search index would make it possible to find anything easily. According to Best of the Web 1994 -- Navigators,  "The best navigation service should make it easy to find almost anything on the Web (once all the data is entered)."  However, the Web of 1997 is quite different. Anyone who has used a search engine recently, can readily testify that the completeness of the index is not the only factor in the quality of search results. "Junk results" often wash out any results that a user is interested in. In fact, as of November 1997, only one of the top four commercial search engines finds itself (returns its own search page in response to its name in the top ten results). One of the main causes of this problem is that the number of documents in the indices has been increasing by many orders of magnitude, but the user's ability to look at documents has not. People are still only willing to look at the first few tens of results. Because of this, as the collection size grows, we need tools that have very high precision (number of relevant documents returned, say in the top tens of results). Indeed, we want our notion of "relevant" to only include the very best documents since there may be tens of thousands of slightly relevant documents. This very high precision is important even at the expense of recall (the total number of relevant documents the system is able to return). There is quite a bit of recent optimism that the use of more hypertextual information can help improve search and other applications [Marchiori 97] [Spertus 97] [Weiss 96] [Kleinberg 98]. In particular, link structure [Page 98] and link text provide a lot of information for making relevance judgments and quality filtering. Google makes use of both link structure and anchor text (see Sections 2.1 and 2.2).
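    The paper's preference for precision over recall can be stated operationally. A toy sketch (the function names and documents are illustrative, not from the paper):

```python
def precision_at_k(results, relevant, k):
    """Fraction of the top-k returned documents that are relevant."""
    top = results[:k]
    return sum(1 for doc in top if doc in relevant) / k

def recall(results, relevant):
    """Fraction of all relevant documents that were returned at all."""
    return sum(1 for doc in relevant if doc in results) / len(relevant)

# A system can score well on precision@k while ignoring most of the
# relevant documents, which is exactly the trade-off argued for above.
ranked = ["d1", "d2", "d3", "d4"]
relevant_docs = {"d1", "d3", "d9", "d10", "d11", "d12"}
p = precision_at_k(ranked, relevant_docs, 2)   # d1 relevant, d2 not
r = recall(ranked, relevant_docs)              # only d1, d3 of six found
```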

    1.3.2 Academic Search Engine Research

    Aside from tremendous growth, the Web has also become increasingly commercial over time. In 1993, 1.5% of web servers were on .com domains. This number grew to over 60% in 1997. At the same time, search engines have migrated from the academic domain to the commercial. Up until now most search engine development has gone on at companies with little publication of technical details. This causes search engine technology to remain largely a black art and to be advertising oriented (see Appendix A). With Google, we have a strong goal to push more development and understanding into the academic realm.

    Another important design goal was to build systems that reasonable numbers of people can actually use. Usage was important to us because we think some of the most interesting research will involve leveraging the vast amount of usage data that is available from modern web systems. For example, there are many tens of millions of searches performed every day. However, it is very difficult to get this data, mainly because it is considered commercially valuable.

    Our final design goal was to build an architecture that can support novel research activities on large-scale web data. To support novel research uses, Google stores all of the actual documents it crawls in compressed form. One of our main goals in designing Google was to set up an environment where other researchers can come in quickly, process large chunks of the web, and produce interesting results that would have been very difficult to produce otherwise. In the short time the system has been up, there have already been several papers using databases generated by Google, and many others are underway. Another goal we have is to set up a Spacelab-like environment where researchers or even students can propose and do interesting experiments on our large-scale web data.

    source: http://infolab.stanford.edu/~backrub/google.html

    posted @ 2012-02-04 20:42 cnbarry  Reads (370) | Comments (0)

    Abstract

           In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ 
           To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. 
           Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
    posted @ 2012-02-04 20:39 cnbarry  Reads (290) | Comments (0)

    Email Address
    [A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}
    \b[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b
    Email Address (Anchored)
    ^[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$
    Email Address without Consecutive Dots
    \b[A-Z0-9._%-]+@(?:[A-Z0-9-]+\.)+[A-Z]{2,4}\b
    Email Address on Specific Top Level Domains
    ^[A-Z0-9._%-]+@[A-Z0-9.-]+\.(?:[A-Z]{2}|com|org|net|biz|info|name|aero|jobs|museum)$
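    These character classes are upper case, so compile them case-insensitively. A quick check in Python (the example addresses are made up) also shows why the "without consecutive dots" variant exists:

```python
import re

# Anchored pattern from the list, compiled case-insensitively.
ANCHORED = re.compile(r"^[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$",
                      re.IGNORECASE)

# Variant that forbids consecutive dots in the domain part.
NO_DOUBLE_DOTS = re.compile(r"^[A-Z0-9._%-]+@(?:[A-Z0-9-]+\.)+[A-Z]{2,4}$",
                            re.IGNORECASE)

def is_email(addr, pattern=ANCHORED):
    return pattern.match(addr) is not None

# The plain pattern accepts "a@b..com" because [A-Z0-9.-]+ allows
# adjacent dots; the grouped variant rejects it.
```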
    posted @ 2012-02-01 21:03 cnbarry  Reads (203) | Comments (0)

    Today, founder of the non-profit behind information archive Wikipedia, Jimmy Wales, announced that the site will go dark for 24 hours on Wednesday in protest of the Stop Online Piracy Act (SOPA).
    Quote:
    Jimmy Wales @jimmy_wales TWITTER update
    Student warning! Do your homework early. Wikipedia protesting bad law on Wednesday! #sopa
    While only the English version of the site will be down, it accounts for 25 million daily visitors according to Wales:
    Quote:
    Jimmy Wales @jimmy_wales 
    comScore estimates the English Wikipedia receives 25 million average daily visitors globally.
    When we talked to Wales in November, he told us that Wikipedia had over 420m unique monthly visitors, and there are now over 20 million articles on Wikipedia across almost 300 languages.
    As we reported last week, the site was contemplating taking this action along with Reddit who announced that it would black out its site in protest against SOPA.
    During the 24-hour shutdown, Wikipedia's pages will be replaced with instructions on how to reach out to your local US members of Congress, and Wales says he hopes the measure will “melt phones” with call volume:
    Quote:
    Jimmy Wales @jimmy_wales
    This is going to be wow. I hope Wikipedia will melt phone systems in Washington on Wednesday. Tell everyone you know!
    Along with Reddit, Wikipedia joins huge Internet names like WordPress, Mozilla, and all of the Cheezburger properties in Wednesday’s “black out” protest.
    The proposed act endangers the future of sites like these by holding them directly accountable for content placed on them. It has been widely reported that if an act like this passed through and became actionable, many Internet businesses would suffer greatly due to new scrutiny placed on them by the government.
    posted @ 2012-01-17 08:17 cnbarry  Reads (275) | Comments (0)
