Cmd-C Copy file
Cmd-V Paste file
Option-drag Copy file to a new location
Cmd-drag Move file and align to grid
Cmd-Delete Delete
Cmd-Option-drag Make alias (shortcut)
Cmd-Shift-Delete Empty Trash
Cmd-Shift-Option-Delete Force-empty Trash
Tab Select next item
Shift-Tab Select previous item
Return Perform default action
Escape Close dialog
Page Up Scroll up one page
Up Arrow Select previous file
Page Down Scroll down one page
Down Arrow Select next file
Cmd-Shift-G Open the "Go to Folder" dialog
Cmd-Period [.] Close dialog
Exposé and system shortcuts
F8 Switch between Spaces
Shift-F8 Switch between Spaces in slow motion
F9 (default) Use Exposé to show all open windows
F10 (default) Use Exposé to show all open windows in the current application
F11 (default) Use Exposé to hide all open windows and show the desktop
Cmd-H Hide the application
Cmd-Option-H Hide other applications
Cmd-Q Quit the application
Cmd-Shift-Q Quit all applications and log out
Cmd-Option-Shift-Q Force log out
Cmd-Tab Switch to the next application
Cmd-Shift-Tab Switch to the previous application
Cmd-drag Rearrange the menu bar
Option-click a window Switch to that window and hide the current window
Option-click a Dock icon Switch to that application and hide the current application
Control-click an item Show the item's shortcut (contextual) menu
Move the cursor over a word, then press Cmd-Control-D Look up the word's definition in Dictionary from within the application
Unresponsive applications
Cmd-Period [.] Stop a process
Cmd-Option-Escape Open "Force Quit"
Power key Shut down
Cmd-Option-Shift-Power key Force shutdown or force restart (on some computers)
Cmd-Control-Power key Force restart
Finder
Cmd-click the title Show the path of the current window
Cmd-double-click (on a folder) Open the folder in a new window
Option-double-click (on a folder) Open the folder in a new window and close the current window
Cmd-1 View as icons
Cmd-2 View as list
Cmd-Option-Right Arrow In list view, expand the selected folder and its subfolders
Left Arrow In list view, collapse the selected folder
Cmd-Down Arrow Open the selected folder in icon or list view
Cmd-Option-Down Arrow In icon or list view, open the selected folder in a new window and close the current window
Cmd-Shift-Option-Down Arrow Same as above, in slow motion
Cmd-Up Arrow Open the parent folder
Cmd-Option-Up Arrow Open the parent folder and close the current window
Cmd-3 View as columns
Cmd-4 View as Cover Flow
Cmd-Y Open Quick Look
Cmd-Option-Y View as slideshow
Cmd-Shift-H Open the Home folder
Cmd-Option-Shift-Up Arrow Focus the Desktop
Cmd-Shift-I Open iDisk
Cmd-Shift-D Open the Desktop folder
Cmd-Shift-C Open "Computer"
Cmd-Shift-K Open Network
Cmd-Shift-A Open Applications
Double-click the title Minimize the window
Cmd-M Minimize the window
Option-click a button Apply the action to all open windows
Press and hold the scroll bar Scan quickly through long documents
Option-click the scroll bar Toggle between "scroll to here" and "scroll by page"
Cmd-Tilde (~) Cycle to the previous or next window of the current application
Dock
Drag the divider Resize the Dock freely
Option-drag the divider Snap the Dock to preset sizes
Control-click the divider Show the Dock's shortcut menu
Control-click an icon Show the item's shortcut menu
Cmd-click Open the icon's enclosing folder
Option-click Switch to the application and hide the current one
Cmd-Option-click Switch to the application and hide all others
Cmd-Option-drag Force an application to open a dragged file
Cmd-Option-D Show/hide the Dock
Startup
*These shortcuts can only be used during startup
Hold the left Shift key when you see the progress indicator (looks like a spinning gear). Prevents automatic login
Hold the Shift key immediately after you hear the startup sound, and release it when you see the progress indicator (looks like a spinning gear). Starts in Safe Mode (only essential Mac OS X items are loaded; some features and applications may not work correctly.)
Hold the Shift key after clicking the "Log In" button on the login screen. Prevents login items and Finder windows from opening at login
C Start from an optical disc
N Start from the default NetBoot disk image
T Start in Target Disk Mode
Option Select the startup disk (on some computers)
Cmd-X Start with Mac OS X rather than Mac OS 9 (if both are on the same volume)
Hold the mouse button Eject a removable disc
Cmd-Option-P-R Reset the parameter RAM (PRAM)
Cmd-V Show detailed status messages (verbose mode)
Cmd-S Start in single-user mode
Safari
Cmd-Option-F Jump to the Google search field
Option-Up Arrow Scroll up one page
Option-Down Arrow Scroll down one page
Cmd-click a link Open it in a new background tab
Cmd-Shift-click a link Open it in a new tab and bring the tab to the front
Cmd-Option-click a link Open it in a new window
Option-click the Close button Close the other tabs
Cmd-Shift-] Select the next tab
Cmd-Shift-[ Select the previous tab
Cmd-Shift-H Open the home page
Cmd-Shift-K Toggle "Block Pop-Up Windows"
Cmd-Option-E Empty the cache
Cmd-Option-R Reload the page without the cache
Cmd-F Find
Cmd-M Minimize the window
Shift-click a button Play the animation in slow motion
Cmd-Plus [+] Increase the font size
Cmd-Minus [-] Decrease the font size
Cmd-0 Default font size
Dashboard
Use these shortcuts to work with Dashboard and Dashboard widgets.
F12 (default) Show or hide Dashboard
Cmd-R Reload the current widget
Cmd-Equals (=) Show or hide the widget bar
Cmd-Left Arrow, Cmd-Right Arrow Scroll the widget bar
Note: To change the Dashboard shortcut, choose Apple menu > "System Preferences", click "Exposé & Spaces", then click "Exposé".
Front Row
You can use the keyboard to control Front Row instead of the Apple Remote.
Cmd-Esc (Escape) Open Front Row
Cmd-Esc or Esc Close Front Row from an open menu
Up Arrow, Down Arrow Navigate menus and lists
Cmd-Esc or Esc Return to the previous menu
Space bar or Return Select an item in a menu or list
Space bar or Return Play or pause audio or video
Up Arrow, Down Arrow Change the volume
Right Arrow, Left Arrow Go to the next or previous song or photo
Right Arrow, Left Arrow Go to the next or previous chapter of the DVD being played
Right Arrow, Left Arrow (press and hold) Fast-forward or rewind a song, video, or DVD
On some Apple keyboards and portable computers, you may also be able to use dedicated keys to change the volume and control playback.
Keyboard navigation
Control-F1 Turn full keyboard access on or off
Control-F2 Focus the menu bar
Control-F3 Focus the Dock
Control-F4 Focus the active window or the next window
Control-F5 Focus the window toolbar
Control-F6 Focus the floating window
Control-F7 Move between controls or text boxes and lists
Control-F8 Focus the status menus in the menu bar
Cmd-Accent [`] Focus the next window in the active application
Cmd-Shift-Accent [`] Focus the previous window in the active application
Cmd-Option-Accent [`] Focus the window drawer
Cmd-Option-T Show or hide the Character Palette
While messing around with MapReduce code, I've found it a bit tedious to generate the jar file, copy it to the machine running the JobTracker, and then run the job, every time the job has been altered. I should be able to run my jobs directly from my development environment, as illustrated in the figure below. This post explains how I've "solved" this problem. It may also help when integrating Hadoop with other applications. I by no means claim that this is the proper way to do it, but it does the trick for me.

My Hadoop infrastructure
I assume that you have a (single-node) Hadoop 1.0.3 cluster properly installed on a dedicated or virtual machine. In this example, the JobTracker and HDFS reside at IP address 192.168.102.131. Let's start out with a simple job that does nothing except start up and terminate:
package com.pcbje.hadoopjobs;

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class MyFirstJob {

    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();

        JobConf job = new JobConf(config);
        job.setJarByClass(MyFirstJob.class);
        job.setJobName("My first job");

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(MyFirstJob.MyFirstMapper.class);
        job.setReducerClass(MyFirstJob.MyFirstReducer.class);

        // Declare the map/reduce output types to match the generics below.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        JobClient.runJob(job);
    }

    private static class MyFirstMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Intentionally empty: the job only starts up and terminates.
        }
    }

    private static class MyFirstReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Intentionally empty: the job only starts up and terminates.
        }
    }
}

Now, most of the examples you find online typically show a local-mode setup where all the components of Hadoop (HDFS, the JobTracker, etc.) run on the same machine. A typical mapred-site.xml configuration might look like:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
As far as I can tell, such a configuration requires that jobs are submitted from the same node as the JobTracker. This is what I want to avoid. The first thing to do is to change the fs.default.name attribute to the IP address of my NameNode:
Configuration conf = new Configuration();
conf.set("fs.default.name", "192.168.102.131:9000");
And in core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>192.168.102.131:9000</value>
</property>
</configuration>
This tells the job to connect to the HDFS instance residing on a different machine. Running the job with this configuration will read from and write to the remote HDFS correctly, but the JobTracker at 192.168.102.131:9001 will not notice it. This means that the admin panel at 192.168.102.131:50030 won't list the job either. So the next thing to do is to tell the job configuration to submit the job to the appropriate JobTracker, like this:
config.set("mapred.job.tracker", "192.168.102.131:9001");
You also need to change mapred-site.xml to allow external connections. This can be done by replacing "localhost" with the JobTracker's IP address:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>192.168.102.131:9001</value>
</property>
</configuration>
Restart Hadoop. When you try to run your job, you may get an exception like this:
SEVERE: PriviledgedActionException as:[user] cause:org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException: Permission denied: user=[user], access=WRITE, inode="mapred":root:supergroup:rwxr-xr-x
If you do, this may be solved by adding the following to mapred-site.xml:
<configuration>
<property>
<name>mapreduce.jobtracker.staging.root.dir</name>
<value>/user</value>
</property>
</configuration>
And then execute the following commands:
stop-mapred.sh
start-mapred.sh
When you now submit your job, it should be picked up by the admin page over at :50030. However, it will most probably fail, with the log telling you something like:
java.lang.ClassNotFoundException: com.pcbje.hadoopjobs.MyFirstJob$MyFirstMapper
In order to fix this, you have to ensure that all dependencies of the submitted job are available to the JobTracker. This can be achieved by exporting the project as a runnable jar and then executing something like:
java -jar myfirstjob-jar-with-dependencies.jar /input/path /output/path
If your user has the appropriate permissions on the input and output directories in HDFS, the job should now run successfully. This can be verified in the console and on the administration panel.
Manually exporting runnable jars requires a lot of clicks in IDEs such as Eclipse. If you are using Maven, you can tell it to build the jar with its dependencies (see this answer for details, and the plugin sketch at the end of this section). This makes the process a whole lot easier. Finally, to make it even easier, place a tiny bash script in the same folder as pom.xml that builds the Maven project and executes the jar:
#!/bin/sh
mvn assembly:assembly
java -jar $1 $2 $3
After making the script executable, you can build and submit the job with the following command:
./build-and-run-job target/myfirstjob-jar-with-dependencies.jar /input/path
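For reference, a minimal maven-assembly-plugin configuration along these lines should produce such a jar-with-dependencies. This is a sketch rather than the exact setup from this post; the mainClass value assumes the job class above:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <!-- Produces *-jar-with-dependencies.jar during "mvn assembly:assembly" -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <!-- Entry point used by "java -jar ..." -->
        <mainClass>com.pcbje.hadoopjobs.MyFirstJob</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>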
If you run an HBase MapReduce job from Eclipse on Windows, an exception will be thrown. By default the MapReduce task runs locally, and because file permissions are set following UNIX conventions, it fails with:
java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: Cannot run program "ls": CreateProcess error=2
The solution is to send the job to a remote host, usually a Linux machine, by adding the following to hbase-site.xml:
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
</property>
You also need to turn off HDFS's permission checks:
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
Also, because the job executes on a remote host, custom classes such as the Mapper/Reducer must be packaged into a jar file and uploaded. For details, see:
Hadoop作業提交分析(五)
http://www.cnblogs.com/spork/archive/2010/04/21/1717592.html
After several days of digging I finally worked it out: the Configuration holds the job's settings, and the remote JobTracker builds the job to execute from it. Since the remote host does not have the custom MapReduce classes, they must be packed into a jar and uploaded to it. There is no need to upload the jar by hand every time; it can be set in code:
conf.set("tmpjars", "d:/aaa.jar");
Also note that on Windows the file path separator is ";", so the generated jar list is separated by ";", which the remote Linux host cannot parse. Change it with:
System.setProperty("path.separator", ":");
Reference articles:
http://www.cnblogs.com/xia520pi/archive/2012/05/20/2510723.html
使用hadoop eclipse plugin提交Job并添加多个第三方jar(完美版)
http://heipark.iteye.com/blog/1171923
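Putting these pieces together, a minimal sketch of the Windows-side job setup might look like this. It would go in main() before the JobConf is created; master:9000, master:9001 and d:/aaa.jar are the example values used above, not universal defaults:
// Requires org.apache.hadoop.conf.Configuration, as in the job above.
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://master:9000");  // remote HDFS (NameNode)
conf.set("mapred.job.tracker", "master:9001");      // remote JobTracker
conf.set("tmpjars", "d:/aaa.jar");                  // jar holding the custom Mapper/Reducer classes
// Windows joins the jar list with ";", which the remote Linux host cannot parse:
System.setProperty("path.separator", ":");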
1. In the hosts file, map master to 127.0.0.1
2. Set up passwordless SSH login
3. Hadoop configuration files
core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/Users/paul/Documents/PAUL/DOWNLOAD/SOFTWARE/DEVELOP/HADOOP/hadoop-tmp-data</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<!--
<property>
<name>dfs.name.dir</name>
<value>/Users/paul/Documents/PAUL/DOWNLOAD/SOFTWARE/DEVELOP/HADOOP/hadoop-tmp-data/hdfs-data-name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/Users/paul/Documents/PAUL/DOWNLOAD/SOFTWARE/DEVELOP/HADOOP/hadoop-tmp-data/hdfs-data</value>
</property>
-->
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
<property>
<name>mapred.tasktracker.tasks.maximum</name>
<value>8</value>
<description>The maximum number of tasks that will be run simultaneously by
a task tracker
</description>
</property>
</configuration>
master
4. Format the NameNode
5. Start Hadoop
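For these two steps, the usual commands in a Hadoop 1.x layout are roughly as follows (a sketch; run from the Hadoop installation directory):
bin/hadoop namenode -format
bin/start-all.sh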
6. HBase configuration files
hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
* Copyright 2010 The Apache Software Foundation
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value><!-- use this for a single-machine setup -->
</property>
</configuration>
7. Start HBase
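With a standard installation this is done from the HBase directory with:
bin/start-hbase.sh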
1. JVM startup parameters
This is how I set them:
java -Xmx1024m -Xms1024m -Xss128k -XX:NewRatio=4 -XX:SurvivorRatio=4 -XX:MaxPermSize=16m
After starting Tomcat, run jmap -heap `pgrep -u root java`, which reports:
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 1073741824 (1024.0MB)
NewSize = 1048576 (1.0MB)
MaxNewSize = 4294901760 (4095.9375MB)
OldSize = 4194304 (4.0MB)
NewRatio = 4
SurvivorRatio = 4
PermSize = 12582912 (12.0MB)
MaxPermSize = 16777216 (16.0MB)
Heap Usage:
New Generation (Eden + 1 Survivor Space):
capacity = 178913280 (170.625MB)
used = 51533904 (49.14656066894531MB)
free = 127379376 (121.47843933105469MB)
28.80384508070055% used
Eden Space:
capacity = 143130624 (136.5MB)
used = 51533904 (49.14656066894531MB)
free = 91596720 (87.35343933105469MB)
36.00480635087569% used
From Space:
capacity = 35782656 (34.125MB)
used = 0 (0.0MB)
free = 35782656 (34.125MB)
0.0% used
To Space:
capacity = 35782656 (34.125MB)
used = 0 (0.0MB)
free = 35782656 (34.125MB)
0.0% used
tenured generation:
capacity = 859045888 (819.25MB)
used = 1952984 (1.8625106811523438MB)
free = 857092904 (817.3874893188477MB)
0.22734338494383202% used
Perm Generation:
capacity = 12582912 (12.0MB)
used = 6656024 (6.347679138183594MB)
free = 5926888 (5.652320861816406MB)
52.897326151529946% used
--------------------------------------------------------------------------------
-Xmx1024m -Xms1024m -Xss128k -XX:NewRatio=4 -XX:SurvivorRatio=4 -XX:MaxPermSize=16m
-Xmx1024m maximum heap size: 1024M
-Xms1024m initial heap size: 1024M
-XX:NewRatio=4
young generation : old generation = 1:4, so 1024M/5 = 204.8M,
giving a young generation of 204.8M and an old generation of 819.2M
-XX:SurvivorRatio=4
within the young generation, the two survivor spaces to Eden are 2:4 (each survivor : Eden = 1:4), so 204.8M/6 = 34.13M,
giving Eden = 136.53M and each survivor space = 34.13M
The numbers reported by jmap -heap <pid>
match this calculation.
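As a quick sanity check, a few lines of Java reproduce the arithmetic above (this is just an illustration of the ratios, not anything the JVM runs):
public class HeapSizes {
    public static void main(String[] args) {
        long heap = 1024L * 1024 * 1024;  // -Xmx1024m = -Xms1024m
        long young = heap / (4 + 1);      // NewRatio=4 -> young:old = 1:4
        long survivor = young / (4 + 2);  // SurvivorRatio=4 -> Eden:survivor = 4:1, two survivors
        long eden = young - 2 * survivor;
        System.out.printf("young=%.1fM old=%.1fM eden=%.1fM survivor=%.2fM%n",
                young / 1048576.0, (heap - young) / 1048576.0,
                eden / 1048576.0, survivor / 1048576.0);
        // Prints: young=204.8M old=819.2M eden=136.5M survivor=34.13M
    }
}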
--------------------------------------------------------------------------------
3. Writing a test page
Create a page named perf.jsp in the web root with the following content:
<%
    int m = Integer.parseInt(request.getParameter("m")); // MB to allocate per request
    int s = Integer.parseInt(request.getParameter("s")); // ms to sleep
    byte[] buffer = new byte[1024 * 1024 * m];
    try { Thread.sleep(s); } catch (InterruptedException e) { }
%>
Note: m sets how much memory is requested on each hit; s is how long to sleep, in ms.
4. Monitoring memory changes with jstat
Here I use: jstat -gcutil `pgrep -u root java` 1500 10
To explain, there are three arguments:
· pgrep -u root java --> obtains the Java process ID
· 1500 --> take a sample every 1500 ms
· 10 --> take 10 samples in total
5. Load testing with ab
The command: [root@CentOS ~]# ab -c150 -n50000 "http://localhost/perf.jsp?m=1&s=10"
Note: this uses 150 concurrent clients and 50,000 requests in total.
By default the page is reachable at http://localhost:8080/perf.jsp?m=1&s=10.
--------------------------------------------------------------------------------
Now let's run the experiment:
· First, start monitoring Java memory:
[root@CentOS ~]# jstat -gcutil 8570 1500 10
· Then open another terminal and start the load test:
[root@CentOS ~]# ab -c150 -n50000 "http://localhost/perf.jsp?m=1&s=10"
The results after both commands have finished:
jstat:
[root@CentOS ~]# jstat -gcutil 8570 1500 10
S0 S1 E O P YGC YGCT FGC FGCT GCT
0.06 0.00 53.15 2.03 67.18 52 0.830 1 0.218 1.048
0.00 0.04 18.46 2.03 67.18 55 0.833 1 0.218 1.052
0.03 0.00 28.94 2.03 67.18 56 0.835 1 0.218 1.053
0.00 0.04 34.02 2.03 67.18 57 0.836 1 0.218 1.054
0.04 0.00 34.13 2.03 67.18 58 0.837 1 0.218 1.055
0.00 0.04 38.62 2.03 67.18 59 0.838 1 0.218 1.056
0.04 0.00 8.39 2.03 67.18 60 0.839 1 0.218 1.058
0.04 0.00 8.39 2.03 67.18 60 0.839 1 0.218 1.058
0.04 0.00 8.39 2.03 67.18 60 0.839 1 0.218 1.058
0.04 0.00 8.39 2.03 67.18 60 0.839 1 0.218 1.058
A quick reading of the results:
You can see that one of S0 and S1 is always empty, and a Minor GC occurs whenever Eden fills past a certain ratio. Because my Old Generation is set rather large, no Full GC occurred.
ab:
[root@CentOS ~]# ab -c150 -n50000 "http://localhost/perf.jsp?m=1&s=10"
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Finished 50000 requests
Server Software: Apache/2.2.3
Server Hostname: localhost
Server Port: 80
Document Path: /perf.jsp?m=1&s=10
Document Length: 979 bytes
Concurrency Level: 150
Time taken for tests: 13.467648 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Non-2xx responses: 50005
Total transferred: 57605760 bytes
HTML transferred: 48954895 bytes
Requests per second: 3712.60 [#/sec] (mean)
Time per request: 40.403 [ms] (mean) # mean request time
Time per request: 0.269 [ms] (mean, across all concurrent requests)
Transfer rate: 4177.05 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 46.5 0 3701
Processing: 10 38 70.3 36 6885
Waiting: 3 35 70.3 33 6883
Total: 10 39 84.4 37 6901
Percentage of the requests served within a certain time (ms)
50% 37
66% 38
75% 39
80% 39
90% 41
95% 43
98% 50
99% 58
100% 6901 (longest request)