    jojo's blog: in happiness and in sorrow, with you
    Here for dreams, born for freedom. A temperament like water: when the wind rises the water stirs, and when the wind stops the water stills; so at times turbulent, at times calm...

    What is memcached?

    memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.

    Danga Interactive developed memcached to enhance the speed of LiveJournal.com, a site which was already doing 20 million+ dynamic page views per day for 1 million users with a bunch of webservers and a bunch of database servers. memcached dropped the database load to almost nothing, yielding faster page load times for users, better resource utilization, and faster access to the databases on a memcache miss.

    How it Works

    First, you start up the memcached daemon on as many spare machines as you have. The daemon has no configuration file, just a few command line options, only 3 or 4 of which you'll likely use:

    # ./memcached -d -m 2048 -l 10.0.0.40 -p 11211

    This starts memcached up as a daemon, using 2GB of memory, and listening on IP 10.0.0.40, port 11211. Because a 32-bit process can only address 4GB of virtual memory (usually significantly less, depending on your operating system), if you have a 32-bit server with 4-64GB of memory using PAE you can just run multiple processes on the machine, each using 2 or 3GB of memory.

    Porting the Application

    Now, in your application, wherever you go to do a database query, first check the memcache. If the memcache returns an undefined object, then go to the database, get what you're looking for, and put it in the memcache:

    Perl Example (see APIs page)

    sub get_foo_object {
        my $foo_id = int(shift);
        my $obj = $::MemCache->get("foo:$foo_id");
        return $obj if $obj;

        $obj = $::db->selectrow_hashref("SELECT .... FROM foo f, bar b ".
                                        "WHERE ... AND f.fooid=$foo_id");
        $::MemCache->set("foo:$foo_id", $obj);
        return $obj;
    }

    (If your internal API was already clean enough, you should only have to do this in a few spots. Start with the queries that kill your database the most, then move to doing as much as possible.)

    You'll notice the data structure the server provides is just a dictionary. You assign values to keys, and you request values from keys.

    Now, what actually happens is that the API hashes your key to a unique server. (You define all the available servers and their weightings when initializing the API.) Alternatively, the APIs also let you provide your own hash value. A good hash value for user-related data is the user's ID number. The API then maps that hash value onto a server (modulus the number of server buckets, with one bucket for each server IP/port, though some can be weighted higher if they have more memory available).

    If a host goes down, the API re-maps that dead host's requests onto the servers that are available.
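    The bucket mapping and dead-host remapping described above can be sketched as follows. This is an illustrative model, not a real client: the server addresses and weights are invented, and production clients (Cache::Memcached and friends) implement this logic internally.

    ```python
    import hashlib

    # Hypothetical pool: (host, weight) pairs. A weight of 2 gives a server
    # twice as many buckets, e.g. because it has more memory available.
    SERVERS = [("10.0.0.40:11211", 2), ("10.0.0.41:11211", 1), ("10.0.0.42:11211", 1)]

    def make_buckets(servers):
        """Expand the weighted server list into a flat list of buckets."""
        buckets = []
        for host, weight in servers:
            buckets.extend([host] * weight)
        return buckets

    def hash_key(key):
        """Hash a key to an integer; for user data you could pass the user ID directly."""
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(key, buckets, dead=()):
        """Pick a bucket by modulus; if that host is down, re-map onto the live hosts."""
        host = buckets[hash_key(key) % len(buckets)]
        if host in dead:
            live = [b for b in buckets if b not in dead]
            host = live[hash_key(key) % len(live)]
        return host
    ```

    Note that plain modulus hashing remaps most keys whenever the pool size changes; later memcached clients moved to consistent hashing for exactly that reason.
    
    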

    Shouldn't the database do this?

    Regardless of what database you use (MS-SQL, Oracle, Postgres, MySQL-InnoDB, etc..), there's a lot of overhead in implementing ACID properties in a RDBMS, especially when disks are involved, which means queries are going to block. For databases that aren't ACID-compliant (like MySQL-MyISAM), that overhead doesn't exist, but reading threads block on the writing threads.

    memcached never blocks. See the "Is memcached fast?" question below.

    What about shared memory?

    The first thing people generally do is cache things within their web processes. But this means your cache is duplicated multiple times, once for each mod_perl/PHP/etc thread. This is a waste of memory and you'll get low cache hit rates. If you're using a multi-threaded language or a shared memory API (IPC::Shareable, etc), you can have a global cache for all threads, but it's per-machine. It doesn't scale to multiple machines. Once you have 20 webservers, those 20 independent caches start to look just as silly as when you had 20 threads with their own caches on a single box. (plus, shared memory is typically laden with limitations)

    The memcached server and clients work together to implement one global cache across as many machines as you have. In fact, it's recommended you run both web nodes (which are typically memory-lite and CPU-hungry) and memcached processes (which are memory-hungry and CPU-lite) on the same machines. This way you'll save network ports.

    What about MySQL 4.x query caching?

    MySQL query caching is less than ideal, for a number of reasons:

    • MySQL's query cache destroys the entire cache for a given table whenever that table is changed. On a high-traffic site with updates happening many times per second, this makes the cache practically worthless. In fact, it's often harmful to have it on, since there's an overhead to maintain the cache.
    • On 32-bit architectures, the entire server (including the query cache) is limited to a 4 GB virtual address space. memcached lets you run as many processes as you want, so you have no limit on memory cache size.
    • MySQL has a query cache, not an object cache. If your objects require extra expensive construction after the data retrieval step, MySQL's query cache can't help you there.

    If the data you need to cache is small and you do infrequent updates, MySQL's query caching should work for you. If not, use memcached.

    What about database replication?

    You can spread your reads with replication, and that helps a lot, but you can't spread writes (they have to process on all machines) and they'll eventually consume all your resources. You'll find yourself adding replicated slaves at an ever-increasing rate to make up for the diminishing returns each additional slave provides.

    The next logical step is to horizontally partition your dataset onto different master/slave clusters so you can spread your writes, and then teach your application to connect to the correct cluster depending on the data it needs.
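    The "connect to the correct cluster" step might look like the sketch below. Everything here is invented for illustration: the cluster hostnames and the rule of partitioning by user ID are assumptions, not part of any particular deployment.

    ```python
    # Hypothetical master/slave clusters, each owning a slice of the users.
    CLUSTERS = {
        0: "dbcluster-a.example.com",
        1: "dbcluster-b.example.com",
        2: "dbcluster-c.example.com",
    }

    def cluster_for_user(user_id, clusters=CLUSTERS):
        """Route by user ID so all of one user's rows live on one cluster,
        spreading writes across clusters instead of replicating them everywhere."""
        return clusters[user_id % len(clusters)]
    ```
    
    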

    While this strategy works, and is recommended, more databases (each with a bunch of disks) statistically leads to more frequent hardware failures, which are annoying.

    With memcached you can reduce your database reads to a mere fraction, leaving the databases to mainly do infrequent writes, and end up getting much more bang for your buck, since your databases won't be blocking themselves doing ACID bookkeeping or waiting on writing threads.

    Is memcached fast?

    Very fast. It uses libevent to scale to any number of open connections (using epoll on Linux, if available at runtime), uses non-blocking network I/O, refcounts internal objects (so objects can be in multiple states to multiple clients), and uses its own slab allocator and hash table so virtual memory never gets externally fragmented and allocations are guaranteed O(1).
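    The slab allocator mentioned above carves memory into fixed-size chunks grouped into size classes, so freeing and re-allocating objects never fragments the address space. A rough sketch of the size-class idea (the chunk sizes and growth factor here are illustrative; in real memcached the growth factor is the -f option):

    ```python
    def slab_classes(min_chunk=88, max_chunk=1024 * 1024, factor=1.25):
        """Generate chunk sizes, each `factor` larger than the last."""
        sizes, size = [], min_chunk
        while size < max_chunk:
            sizes.append(size)
            size = int(size * factor)
        sizes.append(max_chunk)
        return sizes

    def class_for(item_size, sizes):
        """An item is stored in the smallest chunk that fits it; the wasted
        tail of the chunk is the price paid for O(1), fragmentation-free reuse."""
        for s in sizes:
            if item_size <= s:
                return s
        raise ValueError("item larger than the biggest chunk")
    ```
    
    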

    What about race conditions?

    You might wonder: "What if the get_foo() function adds a stale version of the Foo object to the cache right as/after the user updates their Foo object via update_foo()?"

    While the server and API provide only one way to get data out of the cache, there exist three ways to put data in:

    • set -- unconditionally sets a given key with a given value (update_foo() should use this)
    • add -- adds to the cache, only if it doesn't already exist (get_foo() should use this)
    • replace -- sets in the cache only if the key already exists (not as useful, only for completeness)
    Additionally, all three support an expiration time.
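    The three commands and the race-avoidance they enable can be modeled with a toy in-process cache. This is only a sketch of the semantics (the real server is a network daemon, and this class is invented for illustration):

    ```python
    import time

    class ToyCache:
        """Dictionary with memcached-style set/add/replace and expiration."""

        def __init__(self):
            self._store = {}  # key -> (value, expires_at or None)

        def _alive(self, key):
            """True if the key exists and has not expired; reaps expired entries."""
            if key not in self._store:
                return False
            _, exp = self._store[key]
            if exp is not None and time.time() >= exp:
                del self._store[key]
                return False
            return True

        def get(self, key):
            return self._store[key][0] if self._alive(key) else None

        def set(self, key, value, ttl=None):
            """Unconditional write: what update_foo() uses."""
            exp = time.time() + ttl if ttl else None
            self._store[key] = (value, exp)
            return True

        def add(self, key, value, ttl=None):
            """Write only if absent: what get_foo() uses, so a reader that lost
            the race to an updater cannot clobber the fresher value."""
            if self._alive(key):
                return False
            return self.set(key, value, ttl)

        def replace(self, key, value, ttl=None):
            """Write only if present."""
            if not self._alive(key):
                return False
            return self.set(key, value, ttl)
    ```

    To see the race play out: if update_foo() does set("foo:1", fresh) and a slow get_foo() then tries add("foo:1", stale), the add returns False and the fresh value survives.
    
    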
    posted on 2009-05-07 13:50 by Blog of JoJo, filed under: Linux
