
    from:http://zeroturnaround.com/rebellabs/5-command-line-tools-you-should-be-using/

    Working on the command line will make you more productive, even on Windows!

    There’s an age-old debate between the usability and friendliness of GUI programs and the simplicity and productivity of CLI ones. But this is not a holy war I intend to trigger or fuel. In the past, RebelLabs has discussed built-in JDK tools and received amazing feedback, so I feel an urge to share more non-JDK command line tools which I simply couldn’t live without.

    I do firmly believe every developer who’s worth their salt should have at least some notion of how to work with the command line, if only because some tools only exist in CLI variants. Plus, because geek++!

    All other nuances that people pour words over, like the choice of operating system (OSX of course, they have beautiful aluminum cases), your favorite shell (really it should be ZSH), or the preference of Vim over Emacs (unless you have more fingers than usual) are much less relevant. OK, that was a little flamewar-like, but I promise that will be the last of it!

    So my advice is to learn how to use tools at the command line; it will have a positive impact on your happiness and productivity for at least half a century!

    Anyway, in this post I want to share with you five lesser-known yet pretty awesome command line gems. As an added bonus I will also show the proper way to use a shell under Windows, which is a pretty valuable bit of knowledge in itself.

    The reason I wanted to write this post in the first place is because I really enjoy using these tools myself, and want to learn about other command line tools that I don’t yet know about. So please, awesome reader, leave me a comment with your favourite CLI tools — that’d be grand! Now, assuming we all have a nice, workable shell, let’s go over some neat command line tools that are worth hearing about.

    0. HTTPie

     

    The first on my list is a tool called HTTPie. Fear not, this tool has nothing to do with Internet Explorer, fortunately. In essence HTTPie is a friendlier alternative to cURL, the utility that performs HTTP requests from the command line. HTTPie adds nice features like auto-formatting and intelligent colour highlighting to the output, making it much more readable and useful. Additionally, it takes a very human-centric approach, not asking you to remember obscure flags and options. To perform an HTTP GET, you simply run http; to POST, you run http POST. What could be easier or more beautiful?

    sample httpie output

    Almost all command line tools are conveniently packaged for installation, and HTTPie is no exception. To install it, run one of the following commands.

    • On OSX use homebrew, the best package manager to be found on OSX: brew install httpie
    • All other platforms, using Python’s pip: pip install --upgrade httpie
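    Once it’s installed, a couple of invocations show off the human-centric syntax. These are illustrative sketches only: httpbin.org is just a convenient public echo service, and the field names are made up.

```shell
# GET with a query parameter (==) and a custom header (Name:value)
http GET httpbin.org/get search==cli User-Agent:demo

# POST a JSON body: key=value pairs become JSON string fields,
# and key:=value embeds raw JSON (numbers, booleans, arrays)
http POST httpbin.org/post name=HTTPie awesome:=true
```

    Compare that with hand-writing the equivalent cURL flags and -d payloads, and the appeal is obvious.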

    I personally use HTTPie a lot when developing a REST API, as it allows me to very simply query the API, returning nicely structured, legible data. Without doubt this tool saves me serious work and frustration. Luckily the usage does not stop at just REST APIs. Generally speaking, all interactions over HTTP, whether it’s inputting or outputting data, can be done in a very human-readable format.

    I’d encourage you to take a look at the website, spend the 10 seconds it takes to install and give it a go yourself. Try to get the source of any website and be amazed by the output.

    How unstoppable you can be with proper tools

    Protip: Combine the HTTPie greatness with jq for command line JSON manipulation or pup for HTML parsing and you’ll be unstoppable!
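    As a hypothetical taste of that combination (the endpoints are placeholders, and jq and pup are each a one-line install away):

```shell
# pull a JSON API and print one field per element
http GET api.example.com/users | jq -r '.[].name'

# scrape every link target out of a page
http GET example.com | pup 'a attr{href}'
```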

    1. Icdiff

     

    At ZeroTurnaround I am blessed to work with Mercurial, a very nice and easy to use VCS. On OSX the excellent GUI program SourceTree makes working with Mercurial an absolute breeze, even with the more complex stuff. Unfortunately I like to keep the number of programs/tabs/windows I have open to an absolute minimum. Since I always have a terminal window opened it makes sense to use the CLI.

    All was fine and well apart from one single pitfall in my setup: a feature I could barely go without, side-by-side diffs. Introducing icdiff. Of all the tools I use each day, this is the one I most appreciate. Let’s take a look at a screenshot:

    example of icdiff at work

    By itself, icdiff is an intelligent Python script, smart at detecting which of the differences are modifications, additions or deletions. The excellent colour highlighting makes it easy to distinguish between the three types of differences.

    To get going with icdiff, do the following:

    • Via homebrew once again: brew install icdiff
    • Manually grab the Python script from the site above and put it in your PATH

    When you couple icdiff with a VCS such as Mercurial, you’ll see it really shine. To fully integrate it, you’ll need to complete two more configuration steps, already documented here. The gist of the instructions is to first add a wrapper script that lets icdiff’s one-by-one file diff operate on entire directories, and second to configure your VCS to actually use icdiff. The link above shows the details for Mercurial, but porting this to Git shouldn’t be too hard.
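    For Git, a minimal sketch might look like the following (filenames are made up; check icdiff’s README for the exact recipe, since flag details may differ between versions):

```shell
# one-off: run icdiff as an external diff command
git difftool --no-prompt --extcmd=icdiff HEAD~1 -- src/main.py

# icdiff also ships a git-icdiff wrapper script, which may
# work out of the box once icdiff is on your PATH
git icdiff HEAD~1
```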

    2. Pandoc

     

    In the spirit of “practice what you preach” I set out to write this entire blogpost via a CLI. Most of the work was done using MacVim, in iTerm2 on OSX. All of the text was written and formatted using standard MarkDown syntax. The only issue to arise here is that it’s pretty difficult sometimes to accurately guess how your eventual text will come out.

    This is where the next tool comes in: Pandoc. A program so powerful and versatile it’s a wonder it was GPL’d in the first place. Let’s take a look at how we might use it.

    pandoc -f markdown -t html blogpost.md > blogpost.html 

    Think of a markup format, any markup format. The chances are Pandoc can convert it to any other. For example, I’m writing this blogpost in Vim and using Pandoc to convert it from MarkDown into HTML to actually see the final result. It’s a nice thing, needing only my terminal and a browser rather than being tied to a particular online platform; fully standalone and offline.

    Don’t let yourself be limited by simple formats like MarkDown though: give it some docx files, or perhaps some LaTeX. Export to PDF or EPUB, let it handle and format your citations. The possibilities are endless.
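    A few hedged sketches of such conversions (the filenames are invented; the citation flag is from recent Pandoc versions, older ones used --filter pandoc-citeproc instead):

```shell
# Word document to MarkDown
pandoc report.docx -o report.md

# MarkDown to EPUB
pandoc -f markdown book.md -o book.epub

# MarkDown straight to PDF (requires a LaTeX engine on the PATH)
pandoc blogpost.md -o blogpost.pdf

# let pandoc handle and format citations from a BibTeX file
pandoc --citeproc --bibliography refs.bib paper.md -o paper.html
```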

    Once again brew install pandoc does the trick. Did I mention I really like Homebrew? Maybe that should have made my tool list! Anyway, you get the gist of what that does!

    3. Moreutils

     

    The next tool in this post is actually a collection of nifty tools that didn’t make it into coreutils: moreutils. It should be obtainable under moreutils in about any distro you can think of. OSX users can get all this goodness by brewing it like I did throughout this post:

    brew install moreutils 

    Here is a list of the included programs with short descriptions:

    • chronic: runs a command quietly unless it fails
    • combine: combine the lines in two files using boolean operations
    • ifdata: get network interface info without parsing ifconfig output
    • ifne: run a program if the standard input is not empty
    • isutf8: check if a file or standard input is utf-8
    • lckdo: execute a program with a lock held
    • mispipe: pipe two commands, returning the exit status of the first
    • parallel: run multiple jobs at once
    • pee: tee standard input to pipes
    • sponge: soak up standard input and write to a file
    • ts: timestamp standard input
    • vidir: edit a directory in your text editor
    • vipe: insert a text editor into a pipe
    • zrun: automatically uncompress arguments to command

    As the maintainer himself hints, sponge is perhaps the most useful tool, in that you can easily sponge up standard input into a file. However, it is not difficult to see the advantages of some of the other commands such as chronic, parallel and pee.
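    To see the trap sponge exists to fix, consider sorting a file in place. A naive redirect truncates the file before sort ever reads it. The sponge lines below are shown as comments (moreutils may not be installed on your machine yet), with the portable temp-file workaround actually executed; the file names are made up.

```shell
# BROKEN: the shell truncates names.txt before sort can read it
#   sort names.txt > names.txt
# FIX with moreutils: sponge soaks up all input, then writes
#   sort names.txt | sponge names.txt
# portable workaround without sponge, via a temporary file:
printf 'banana\napple\n' > /tmp/names.txt
sort /tmp/names.txt > /tmp/names.sorted && mv /tmp/names.sorted /tmp/names.txt
cat /tmp/names.txt
```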

    My personal favourite though, and the ultimate reason to include this collection, is without doubt vipe.

    You can literally intercept your data as it moves from command to command through the pipe. While this is not a useful tool in your scripts, it can be extremely helpful when running commands interactively. Instead of giving you a useful example I will leave you with a modified fortune!

    sample vipe command
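    The screenshot boils down to a pipeline like this (illustrative only; fortune and cowsay are separate installs, and vipe opens your $EDITOR interactively, which is exactly why it is no use in scripts):

```shell
# pause the pipe, edit fortune's output by hand, then pass it on
fortune | vipe | cowsay
```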

    4. Babun

     

    These days the Windows OS comes packaged with two different shells: its classic command line, and PowerShell. Let’s completely ignore those and have a look at the proper way of running command line tools under Windows: Babun! The reason this project is amazingly awesome is that it brings all the goodness of the *NIX command line into Windows in a completely pre-configured, no-nonsense manner.

    Moreover, its default shell is my beloved ZSH, though it can very easily be changed to use Bash, if that’s your cup of tea. With ZSH it also packages the highly popular oh-my-zsh framework, which combines all the benefits of ZSH with no config whatsoever thanks to some very sane defaults and an impressive plugin system.

    By default Babun is loaded with more applications than any sane developer may ever need, and thus weighs in at a rather solid 728 MB(!) when expanded. In return you get essentials like Vim pre-installed and ready to go!

    screenshot of babun terminal

    Under the hood Babun is basically a fancy wrapper around Cygwin. If you already have a Cygwin install, you can seamlessly re-use it. Otherwise Babun will default to its own packaged Cygwin binaries and supply you with access to those.

    Some more points of interest are that Babun provides its own package manager, which again wraps around Cygwin’s, and an update mechanism both for itself and for oh-my-zsh. The best thing is that no actual installation is required, nor are the usual admin rights necessary, so for those people on a locked-down PC this may be just the thing they need!


    I hope this small selection of tools gave you at least one new cool toy to play with. As for me, it seems it is time to look at command line browsers before writing a follow-up blogpost, to fully ditch the world of the GUI!

    By all means fire up any comments or suggestions that you have, and let’s get some tool-sharing going on. If you just want to chat just ping RebelLabs on Twitter: @ZeroTurnaround, they are pretty chatty, but great smart people.

    posted @ 2016-04-06 14:49 by 小馬歌 | 283 reads | 0 comments
     
    Abstract: This article was translated from java-performance by ImportNew’s hejiani; you are welcome to join the translation team, and see the requirements at the end before reposting. JMH is a new microbenchmark framework (first released in 2013). Its distinctive advantage over many other frameworks is that it is developed by the same people at Oracle who implement the JIT. In particular I want to mention Aleksey Shipilev and his excellent blo... Read the full article
    posted @ 2016-04-06 14:19 by 小馬歌 | 423 reads | 0 comments
     
    Abstract: It took a whole afternoon, but everything is finally working. Most of the time went into downloading jar packages; even with a VPN the downloads were slow, and without one they would drive you to despair. There were a lot of pitfalls along the way, so I am writing this article, first as a record and second for everyone’s reference. My system environment: why mention it? Different environments need different setup steps, but after reading this article you can adapt them to other environments without much trouble. OS: OS X El Capitan 10.11; IDE: IntelliJ IDEA 14... Read the full article
    posted @ 2016-04-06 10:11 by 小馬歌 | 1973 reads | 0 comments
     
    http://zeroturnaround.com/rebellabs/monadic-futures-in-java8/

    Few people will argue that asynchronous computation is cool and useful. In fact, the whole reactive programming idea is based on asynchronous computations being possible. Well, there’s more to it than that, but the core idea is to allow data and events to flow through your system and to do something with the results when they become available.

    So let’s look at an example of an asynchronous function that everyone has seen and many have written themselves.

    $("#book").fadeIn("slow", function() {
        console.log("hurray");
    });

    This piece of JavaScript code takes a book element and fades it in. When the fading is complete, a callback function is called and the "hurray" string appears in the console. All is well and good in this trivial case, but once your system grows you can find yourself writing more and more of these nested callbacks.

    Callbacks are a common way of dealing with asynchronous or delayed actions. They are not the best option though; the problem with callbacks is that they tend to chain forever, callbacks for callbacks for callbacks, until you find yourself in a complete mess and every change in the code becomes extremely painful and slow.

    Maybe there are other ways to organize asynchronous code? In fact, there are: all you need to do is tweak the perspective a bit. Imagine if you had a type to represent the result of an async computation. It would be awesome: your code could pass it around like every other value and be flat, fluid and readable.

    Well, why don’t we build it!

    When we’re done, we’ll have a monadic type Promise written in Java 8 that will make our asynchronous code wonderful. It’s not like it wasn’t ever done before, but I want to lead you through the process and help you understand what’s happening and why. If you are lazy or just prefer starting from code, check out the github repo.

    Getting to love monads in 9.5 minutes

    Oh, monads! Every programmer worth their morning coffee has written about them. Monads are what functional programming adepts love, use and praise. And there are thousands of tutorials and posts describing the concept.

    So if you know everything there is to know about monads and want a closer look at the more interesting things, scroll down to the code below. Otherwise, bear with me for just ten minutes; maybe this will become your go-to explanation of what a monad is.

    A monad is a type that represents a context of computation. I bet you’ve heard that before, but have you thought about what it means?

    First of all, a monad doesn’t specify what is happening, that’s the responsibility of the computation within the context. A monad says what surrounds the computation that is happening.

    Now, if you want an image reference to help you out, you can think of a monad as a bubble. Some people prefer a box, but a box is something concrete so a bubble works better for me.
    A lovely bubble with a cute dragon-ish creature inside
    These monad-bubbles have two properties:

    • a bubble can surround something
    • a bubble can receive instructions about what it should do with the surrounded thing

    The surrounding part is easy to model in a programming language. Just take something and return a bubble! A constructor or a factory method comes to mind immediately here. Let’s look at how it is formalized. I’m assuming that you have some knowledge of Haskell notation (which you probably should have anyway). So the function that takes something and returns a monad is usually called pure or return:

    return :: a -> m a 

    Or in Java, assuming we already have some Monad class:

    public class Monad<T> {
        public Monad(T t) {
            …
        }
    }

    See, that was easy. In fact, we’re halfway there. The other thing we must add is the ability to receive instructions for working with this value T eaten by our bubble.

    What will help us is a bind function, which takes some form of an action and returns a different monad bubble that wraps this action executed on whatever was previously in the bubble.

    For the sake of completeness, here is how it looks in Haskell.

    (>>=)  :: m a -> (a -> m b) -> m b 

    So this bind function takes a monad over a (m a) and a function from a to m b, and returns a different monad (m b). In Java, we’ll have this definition as follows.

    public abstract class Monad<T> {
        public abstract <V> Monad<V> bind(Function<T, Monad<V>> f);
    }

    That will complete our generic definition of monads so we can proceed with an implementation.

    Wait, what? I can have my monads in Java?

    First of all, there are many different types of monads. In that sense, a monad is more like an interface in Java terms. There is a List monad, a Maybe monad, an IO monad (for languages that are very pure and cannot allow themselves to have normal IO), etc.
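    To make the two operations concrete before we move on, here is a hypothetical toy Maybe monad in Java, where the empty bubble is modelled by a null inside. This is a deliberately minimal sketch, not production code:

```java
import java.util.function.Function;

// A toy Maybe monad: pure wraps a value in a bubble,
// bind applies a function unless the bubble is empty.
public class Maybe<T> {
    private final T value;                   // null encodes the empty bubble
    private Maybe(T value) { this.value = value; }

    public static <T> Maybe<T> pure(T t) { return new Maybe<>(t); }
    public static <T> Maybe<T> none()    { return new Maybe<>(null); }

    public <V> Maybe<V> bind(Function<T, Maybe<V>> f) {
        return value == null ? none() : f.apply(value);
    }

    public T orElse(T fallback) { return value == null ? fallback : value; }

    public static void main(String[] args) {
        // a value flows through bind...
        System.out.println(Maybe.pure("monads")
            .bind(s -> Maybe.pure(s.length()))
            .orElse(-1));                    // prints 6
        // ...while an empty bubble skips the function entirely
        System.out.println(Maybe.<String>none()
            .bind(s -> Maybe.pure(s.length()))
            .orElse(-1));                    // prints -1
    }
}
```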

    We will focus on creating a specific monad in Java, more specifically in Java 8. There is a good reason we chose Java 8: as we found out above, a monad has to manipulate functions, which is really not enjoyable in pre-lambda versions of Java. Java 8, however, introduces lambdas and method references, so working with them is much more pleasant.

    Your homemade Promise implementation

    Here we go, now we’ve established our goal to have a monadic type to represent async computations. We’ve got our tools, namely Java 8, and we are ready to hack.

    What we want to have is a Promise class that represents a result of asynchronous computation, either successful or erroneous.

    Let’s pretend that we already have some Promise class that accepts callbacks to execute when the main computation is finished. Luckily, we don’t have to pretend very much, since there are many implementations available: Akka’s Future, Play’s Promise and so forth.

    For this post I’m using the one from Play Framework, in which case instances of Promise get redeemed when some thread calls the invoke() or invokeWithException() methods. It also accepts callbacks in the form of Play’s Promise-specific Action class arguments. Obviously, Promise has constructors already, but I don’t just want to create new instances of Promise, I also want to mark them completed with a value immediately. Here is how I can do it.

    public static <V> Promise<V> pure(final V v) {
        Promise<V> p = new Promise<>();
        p.invoke(v);
        return p;
    }

    The returned Promise is already redeemed and is ready to provide us with a result of the computation, which is precisely the given value.

    The bind implementation will look something like the code below. It takes a function and adds it as a callback to this instance. The callback gets the result of this computation and applies the given function to it. Whatever that function application returns or throws is used to redeem the resulting Promise.

    public <R> Promise<R> bind(final Function<V, Promise<R>> function) {
        Promise<R> result = new Promise<>();
        this.onRedeem(callback -> {
            try {
                V v = callback.get();
                Promise<R> applicationResult = function.apply(v);
                applicationResult.onRedeem(applicationCallback -> {
                    try {
                        R r = applicationCallback.get();
                        result.invoke(r);
                    }
                    catch (Throwable e) {
                        result.invokeWithException(e);
                    }
                });
            }
            catch (Throwable e) {
                result.invokeWithException(e);
            }
        });
        return result;
    }

    Both applying the given function and getting a result from this are wrapped in try-catch blocks, so exceptions are propagated to the resulting instance of Promise, just as one might expect.

    With these two constructs, it’s very easy to chain asynchronous computations while avoiding going deeper and deeper into the callback hell. In the following synthetic example, we do exactly that.

    public static void example1()
            throws ExecutionException, InterruptedException {
        Promise<String> promise = Async.submit(() -> {
            String helloWorld = "hello world";
            long n = 500;
            System.out.println("Sleeping " + n + " ms example1");
            Thread.sleep(n);
            return helloWorld;
        });
        Promise<Integer> promise2 = promise.bind(string ->
                Promise.pure(Integer.valueOf(string.hashCode())));
        System.out.println("Main thread example2");
        int hashCode = promise2.get();
        System.out.println("HashCode = " + hashCode);
    }

    That is basically it. We’ve implemented a monadic type Promise to represent a result of an async action.

    Production-ready completable future

    For those of you who have borne with me this far, I just want to say some final words about the quality of this implementation. Naturally, the above-mentioned GitHub repository has some tests showing that, in some contexts, this might all work. However, I wouldn’t recommend using these Promises in production.

    One reason is that Java 8 already contains a class that represents the result of an async computation and is monadic… welcome, CompletableFuture!

    It does exactly what we want it to do and features several methods that allow you to bind a function to the result of an existing computation. Moreover, it provides methods to apply a function, a consumer (which is a void function, by the way), or a plain old Runnable.

    On top of that, the methods ending in *Async will execute the function asynchronously on the common ForkJoinPool. Alternatively, you can supply an executor of your own choosing.
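    As a sketch of our earlier example rewritten on top of it: thenCompose is the direct analogue of bind, since its function returns another future, while thenApply is the map-style shortcut for functions returning plain values. The class and method names here are mine, not from any library:

```java
import java.util.concurrent.CompletableFuture;

// The hello-world hash pipeline on the JDK's CompletableFuture.
public class FutureExample {

    static int hashOf(String s) {
        return CompletableFuture
            .supplyAsync(() -> s)          // async producer, runs on the common ForkJoinPool
            .thenCompose(str ->            // bind: chain a future-returning function
                CompletableFuture.completedFuture(str.hashCode()))
            .join();                       // block for the redeemed value
    }

    public static void main(String[] args) {
        System.out.println("HashCode = " + hashOf("hello world"));
    }
}
```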

    Conclusion

    Hopefully, this post shed some light on what a monad is, and next time you are about to write a callback, you might want to consider a different approach.

    In the post above we’ve looked at what monads are and how one can implement monadic classes in Java 8. Monads are a great help in organizing the flow of data through your code, and we’ve shown this with the Promise monad, which represents the result of an asynchronous computation. All the code from this blogpost is available for pondering in the Github repo.

    Stay tuned for my next post, in which I plan to cover how to use the javaflow library to implement asynchronous awaiting for the promise to return a result. So you can get even more reactive :-)


    Want to learn more about what rocks in Java 8? Check out Java 8 Revealed: Lambdas, Default methods and Bulk Data Operations by Anton Arhipov


    posted @ 2016-04-05 17:51 by 小馬歌 | 204 reads | 0 comments
     
    Abstract: from: https://engineering.linkedin.com/play/play-framework-async-io-without-thread-pool-and-callback-hell — Under the hood, LinkedIn consists of hundreds of services that can be evolved and scaled indepen... Read the full article
    posted @ 2016-04-05 17:48 by 小馬歌 | 427 reads | 0 comments
     
    http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html
    posted @ 2016-04-05 17:47 by 小馬歌 | 168 reads | 0 comments
     
    from:http://mmcgrana.github.io/2010/07/threaded-vs-evented-servers.html

    Threaded vs Evented Servers

    July 24 2010

    Broadly speaking, there are two ways to handle concurrent requests to a server. Threaded servers use multiple concurrently-executing threads that each handle one client request, while evented servers run a single event loop that handles events for all connected clients.

    To choose between the threaded and evented approaches you need to consider the load profile of the server. This post describes a simple mathematical model for reasoning about these load profiles and their implications for server design.

    Suppose that requests to a server take c CPU milliseconds and w wall clock milliseconds to execute. The CPU time is spent actively computing on behalf of the request, while the wall clock time is the total time including that time spent waiting for calls to external resources. For example, a web application request might take 5 ms of CPU time c and 95 ms waiting for a database call for a total wall time w of 100 ms. Let’s also say that a threaded version of the server can maintain up to t threads before performance degrades because of scheduling and context-switching overhead. Finally, we’ll assume single-core servers.

    If a server is CPU bound then it will be able to respond to at most

    (/ 1000 c) 

    requests per second. For example, if each request takes 2 ms of CPU time then the CPU can only handle

    (/ 1000 2) => 500 

    requests per second.

    If the server is thread bound then it can handle at most

    (* t (/ 1000 w)) 

    requests per second. This expression is similar to the one for CPU time, but here we multiply the result by t to account for the t concurrent threads.

    The throughput of a threaded server is the minimum of the CPU and thread bounds since it is subject to both constraints. An evented server is not subject to the thread constraint since it only uses one thread; its throughput is given by the CPU bound. We can express this as follows:

    (defn max-request-rate [t c w]
      (let [cpu-bound    (/ 1000 c)
            thread-bound (* t (/ 1000 w))]
        {:threaded (min cpu-bound thread-bound)
         :evented  cpu-bound}))

    Now we’ll consider some different types of servers and see how they might perform with threaded and evented implementations.

    For the examples below I’ll use a t value of 25. This is a modest number of threads that most threading implementations can handle.

    Let’s start with a classic example: an HTTP proxy server. These servers require very little CPU time, so say c is 0.1 ms. Suppose that the downstream servers can receive the relay within milliseconds for a wall time w of, say, 10 ms. Then we have

    (max-request-rate 25 0.1 10) => {:threaded 2500, :evented 10000} 

    In this case we expect a threaded server to be able to handle 2500 requests per second and an evented server 10000 requests per second. The higher performance of the evented server implies that the thread bound is limiting for the threaded server.

    Another familiar example is the web application server. Let’s first consider the case where we have a lightweight app that does not access any external resources. In this case the request parsing and response generation might take a few milliseconds; say c is 2 ms. Since no blocking calls are made this is the value of w as well. Then

    (max-request-rate 25 2 2) => {:threaded 500, :evented 500} 

    Here the threaded server performs as well as the evented server because the workload is CPU bound.

    Suppose we have a more heavyweight app that is making calls to external resources like the filesystem and database. In this case the amount of CPU time will be somewhat larger than in the previous case but still modest; say c is 5 ms. But now that we are waiting on external resources we should expect a w value of, say, 100 ms. Then we have

    (max-request-rate 25 5 100) => {:threaded 200, :evented 200} 

    Even though we are making a lot of blocking calls, the workload is still CPU bound and the threaded and evented servers will therefore perform comparably.

    Suppose now that we are implementing a background service such as an RSS feed fetcher that makes high-latency requests to external services and then performs minimal processing of the results. In this case c may be quite low, say 2 ms, but w will be high, say 250 ms. Then

    (max-request-rate 25 2 250) => {:threaded 100, :evented 500} 

    Here an evented server will perform better. The CPU load is sufficiently low and the external resource latency sufficiently high that the blocking external calls limit the threaded implementation.

    Finally, consider the case of long polling clients. Here clients establish a connection to the server and the server responds only when it has a message it wants to send to the client. Suppose that we have a lightweight app such that c is 1 ms, but that response messages are sent to the client after 10 seconds such that the w value is 10000 ms. Then

    (max-request-rate 25 1 10000) => {:threaded 2.5, :evented 1000} 

    If the server were really limited to 25 threads and each client required its own thread, we could only allow 2.5 new connections per second if we wanted to avoid exceeding the thread allocation. An evented server on the other hand could saturate the CPU by accepting 1000 requests per second.

    Even if we increase the maximum number of threads t by an order of magnitude to 250, the evented approach still fares better:

    (max-request-rate 250 1 10000) => {:threaded 25, :evented 1000} 

    Indeed, a threaded server would need to maintain 10000 threads in order to be able to accept requests at the rate of the evented server.

    Now that we have seen some specific examples of the model we should step back and note the patterns. In general, an evented architecture becomes more favorable as the ratio of wall time w to CPU time c increases, i.e. as proportionally more time is spent waiting on external resources. Also, the viability of a threaded architecture depends on the strength of the underlying threading implementation; the higher the thread threshold t, the more wait time can be tolerated before eventing becomes necessary.
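    That break-even point falls straight out of the model: the thread bound matches the CPU bound exactly when t * (1000 / w) equals 1000 / c, that is, when t = w / c. A small sketch in the same notation (the predicate name is my own):

```clojure
;; a threaded server keeps pace with an evented one exactly
;; while the thread pool covers the wall-to-CPU ratio
(defn threading-sufficient? [t c w]
  (>= t (/ w c)))

(threading-sufficient? 25 5 100)   ; => true  (web app: ratio is 20)
(threading-sufficient? 25 1 10000) ; => false (long polling: ratio is 10000)
```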

    In addition to the quantitative performance implications captured by this model, there are several qualitative factors that influence the suitability of threaded and evented architectures for particular servers.

    One factor is the fit of the server architecture to the work that the server is doing internally. For example, proxying is well suited to evented architectures because the work being done is fundamentally evented: upon receiving an input chunk from the client the chunk is relayed to a downstream server. In contrast, the business logic implemented by web applications is more naturally described in a synchronous style. The callbacks required by an evented architecture become unwieldy in complex application code.

    Another consideration is memory coordination and consistency. Evented servers executing in a single event loop do not need to worry about the correctness and performance implications of maintaining consistent shared memory, but this may be a problem for threaded servers. Threaded servers therefore attempt to minimize memory shared among threads. This approach works well for the servers that we discussed above - proxies, web applications, background workers, and long poll endpoints - as none of them need to share state internally across client sessions. But fundamentally stateful servers like caches and databases cannot avoid this problem.

    The threaded approach can be a non-starter if the underlying platform does not support proper threading. In these cases blocking calls to external resources prevent the process from using the CPU in other threads, even if the blocker is not itself using the CPU. C Ruby falls into this category. In these cases t is effectively 1, making evented architectures relatively more appealing.

    In the other extreme, the assumption of t being 25 or even 250 may be too modest for some platforms. These low t values are an artifact of threading implementations and not intrinsic to the threading model itself. More scalable threading implementations make threaded servers viable for higher w to c ratios.

    An evented approach can be compromised by a lack of evented libraries for the platform. For evented servers to perform optimally, all external resources must be accessed through nonblocking libraries. Such libraries are not always available, especially on platforms that have typically used threaded/blocking models like the JVM and C Ruby. Fortunately this situation is improving as developers publish more nonblocking libraries in response to the demand from implementors of evented servers. Indeed, the requirement of pervasive evented libraries for optimal performance is one reason that node.js is so compelling for building evented servers.

    posted @ 2016-04-05 17:47 by 小馬歌 | 351 reads | 0 comments
     

    Finagle is an extensible RPC system for the JVM, used to construct high-concurrency servers. Finagle implements uniform client and server APIs for several protocols, and is designed for high performance and concurrency. Most of Finagle’s code is protocol agnostic, simplifying the implementation of new protocols.

    Finagle uses a clean, simple, and safe concurrent programming model, based on Futures. This leads to safe and modular programs that are also simple to reason about.
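Finagle's Futures are Scala constructs; as a rough analogy only, here is the same idea of composing asynchronous results as values, sketched with Python's stdlib futures (the `fetch_user`/`fetch_orders` names are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for asynchronous calls to external services.
def fetch_user(user_id):
    return {"id": user_id, "name": "alice"}

def fetch_orders(user):
    return [f"order-{user['id']}-1", f"order-{user['id']}-2"]

with ThreadPoolExecutor() as pool:
    user_f = pool.submit(fetch_user, 42)      # future for the first call
    # Sequential composition: the second call depends on the first
    # result (Finagle would express this with flatMap on a Future).
    orders_f = pool.submit(fetch_orders, user_f.result())
    orders = orders_f.result()
```

The point is that asynchronous results are first-class values that can be passed around and combined, rather than callbacks wired by hand.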

    Finagle clients and servers expose statistics for monitoring and diagnostics. They are also traceable through a mechanism similar to Dapper's (another Twitter open source project, Zipkin, provides trace aggregation and visualization).

    The quickstart has an overview of the most important concepts, walking you through the setup of a simple HTTP server and client.

    A section on Futures follows, motivating and explaining the important ideas behind the concurrent programming model used in Finagle. The next section documents Services & Filters, which are the core abstractions used to represent clients and servers and to modify their behavior.
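The Service/Filter decomposition can be sketched in miniature: a service is just a function from request to response, and a filter wraps a service to add cross-cutting behavior. This Python sketch is synchronous and illustrative only (real Finagle Services return Futures):

```python
# A "service": a function from request to response.
def echo_service(request):
    return f"echo: {request}"

# "Filters": wrappers that add behavior around a service and return a
# new service, so they compose by simple nesting.
def logging_filter(service):
    def wrapped(request):
        log.append(request)        # record the request, then delegate
        return service(request)
    return wrapped

def uppercase_filter(service):
    def wrapped(request):
        return service(request).upper()
    return wrapped

log = []
# Stack filters around the base service to build the application.
app = logging_filter(uppercase_filter(echo_service))
response = app("hi")
```

Because filters take a service and return a service, concerns like logging, timeouts, and retries stay independent of the business logic they wrap.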

    Other useful resources include:

    posted @ 2016-04-05 17:46 小馬歌
     
         Abstract: from: http://www.infoq.com/cn/articles/hadoop-ten-years-interpretation-and-development-forecast Editor's note: Hadoop was born on January 28, 2006, ten years ago. It has changed how enterprises store, process, and analyze data, accelerated the development of big data, built an extremely vibrant technology ecosystem of its own, and seen very wide adoption. As Hadoop turned ten in 2016...  Read the full article
    posted @ 2016-03-29 16:59 小馬歌
     
    Dubbo is the core framework of Alibaba's internal SOA service governance solution. It supports more than 3,000,000,000 requests per day across 2,000+ services and is widely used on the member sites of the Alibaba Group. Since being open-sourced in 2011, Dubbo has been adopted by many companies outside Alibaba.

    Project home page: http://alibaba.github.io/dubbo-doc-static/Home-zh.htm

    To give everyone a deeper understanding of the framework, in this issue we interviewed Liang Fei (梁飛), one of the main developers on the Dubbo team.

    ITeye aims to provide a free promotion platform for excellent domestic open source projects. If you and your team would like to introduce your own open source project to more developers, or if there are open source projects you would like us to interview, please let us know by sending a private message to the ITeye administrator or an email to webmaster@iteye.com.

    First, please introduce yourself!

    My name is Liang Fei (alias 虛極). I was previously responsible for the Dubbo service framework and have since moved to Tmall.

    My blog: http://javatar.iteye.com

    What is Dubbo? What can it do?

    Dubbo is a distributed service framework and an SOA governance solution. Its main features include high-performance NIO communication with multi-protocol integration, dynamic service addressing and routing, soft load balancing and fault tolerance, and dependency analysis and degradation.

    See: http://alibaba.github.io/dubbo-doc-static/Home-zh.htm
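The transparent-RPC idea at the heart of a framework like this can be sketched in a few lines: the consumer calls a plain interface through a proxy, which resolves a provider from a registry and dispatches the call. Everything below (the registry dict, the service and method names) is an illustration of the concept, not Dubbo's actual API:

```python
# A registry mapping service names to providers. In a real RPC
# framework this is a registry center and the dispatch is a network
# call; here both are simulated in-process for illustration.
registry = {}

def export_service(name, impl):
    registry[name] = impl  # provider registers itself

class ServiceProxy:
    """Consumer-side proxy: looks and feels like a local object."""
    def __init__(self, name):
        self._name = name

    def __getattr__(self, method):
        def invoke(*args):
            provider = registry[self._name]          # dynamic addressing
            return getattr(provider, method)(*args)  # dispatch stand-in
        return invoke

class GreetServiceImpl:
    def greet(self, who):
        return f"hello, {who}"

export_service("GreetService", GreetServiceImpl())
client = ServiceProxy("GreetService")
greeting = client.greet("dubbo")
```

The consumer never hard-codes a provider address; the indirection through the registry is what makes features like soft load balancing and failover possible.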

    Which scenarios is Dubbo suited to?

    As a website grows, it inevitably needs to split its applications into services, to improve development efficiency, tune performance, and conserve key competitive resources.

    As services multiply, service URL configuration grows explosively, configuration management becomes very difficult, and the single-point pressure on the F5 hardware load balancer keeps increasing.

    As things develop further, dependencies between services become tangled; it can even be unclear which application must start before which, and architects cannot fully describe the application architecture.

    Then, as call volumes keep growing, service capacity problems surface: how many machines does this service need? When should machines be added? And so on.

    Dubbo can be used to solve all of these problems.

    See: Dubbo's background and requirements

    What is the design thinking behind Dubbo?

    The framework is highly extensible, built on a microkernel-plus-plugin architecture, and fully documented, which makes secondary development convenient and gives it strong adaptability.

    See: Developer Guide - Framework Design

    What are Dubbo's requirements and dependencies?

    Dubbo runs on JDK 1.5 and above. By default it depends on javassist, netty, spring, and a few other packages, but none of these are mandatory: with the right configuration, Dubbo can run without any third-party libraries.

    See: User Guide - Dependencies

    How does Dubbo perform?

    Dubbo reduces handshakes through long-lived connections, uses NIO and thread pools to process messages concurrently on a single connection, and compresses data into a binary stream, so it is faster than conventional short-connection protocols such as HTTP. Inside Alibaba it supports more than 2,000 services and over 3 billion requests per day, with a single machine handling up to nearly 100 million requests per day.

    See: Dubbo performance test report

    Compared with Taobao's HSF, what are Dubbo's distinguishing features?

    1.  Dubbo is lighter to deploy than HSF. HSF requires a designated container such as JBoss, plus a sar package extension installed in the container, which is highly intrusive to the user's runtime environment; to run on other containers such as WebLogic or WebSphere, you must extend the container yourself to accommodate HSF's ClassLoader loading. Dubbo has no such requirements and can run in any Java environment.

    2.  Dubbo is more extensible than HSF and convenient for secondary development. No framework can cover every need, so Dubbo has always treated third parties as equals: every feature can be extended externally without modifying Dubbo's core code, and even Dubbo's built-in features are implemented through the same extension mechanism as third-party ones. With HSF, adding a feature or replacing part of an implementation is difficult. For example, Alipay and Taobao use different HSF branches, because adding features meant changing core code and a separate branch had to be maintained. Even if HSF were open-sourced at this stage, it would be hard to reuse without rewriting its architecture.

    3.  HSF depends on many internal systems, such as the configuration center, notification center, monitoring center, and single sign-on; open-sourcing it would require a lot of decoupling work. Dubbo leaves an extension point for each system integration, has already cleaned up all its dependencies, and provides ready-to-use alternatives for the open source community.

    4.  Dubbo has more features than HSF. Apart from ClassLoader isolation, Dubbo is essentially a superset of HSF; it also supports more protocols and integrates with more registries, to fit more website architectures.

    How does Dubbo handle security?

    Dubbo is aimed primarily at internal services; for services exposed externally, Alibaba has an open platform that handles security and flow control. Dubbo therefore implements relatively little security, essentially guarding against honest mistakes rather than determined attackers, i.e. preventing accidental invocation.

    Dubbo uses token authentication to prevent consumers from bypassing the registry and connecting to providers directly, with authorization managed at the registry. Dubbo also provides service black and white lists to control which callers a service allows.

    See: Dubbo token validation
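The token idea described above can be illustrated with a small sketch: the registry issues a token for a service to authorized consumers, and the provider accepts only calls presenting a matching token. The HMAC scheme and all names below are hypothetical, chosen for the sketch rather than taken from Dubbo's actual mechanism:

```python
import hashlib
import hmac

# Hypothetical shared secret held by the registry/provider side.
SECRET = b"registry-secret"

def issue_token(service):
    # Registry side: derive a per-service token for authorized consumers.
    return hmac.new(SECRET, service.encode(), hashlib.sha256).hexdigest()

def provider_accepts(service, token):
    # Provider side: reject calls whose token does not match, which stops
    # consumers that bypassed the registry and connected directly.
    return hmac.compare_digest(token, issue_token(service))

good = provider_accepts("GreetService", issue_token("GreetService"))
bad = provider_accepts("GreetService", "forged-token")
```

A black/white list would add a second check here on the caller's identity, independent of the token.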

    How is Dubbo used inside and outside Alibaba?

    Inside Alibaba, every subsidiary other than the Taobao family uses Dubbo, including the Chinese main site, the international main site, AliExpress, Aliyun, Ali Finance, Ali Institute, Liangwuxian, Laiwang, and more.

    Since being open-sourced, it has been widely adopted by companies including Qunar, JD.com, Geely Auto, Founder Securities, Haier, Focus Technology, Zhongrun Sifang, Huaxin Cement, Hikvision, and others, with new companies joining all the time; community discussion and contributions are active, and user feedback is very positive.

    See: Dubbo's known users

    What are Dubbo's plans for distributed transactions and multi-language support?

    Distributed transactions will probably not be supported for now: supporting only simple XA/JTA two-phase commit would have limited practical value. Users can implement business compensation events or more sophisticated distributed transactions themselves; Dubbo has many extension points for such integration.

    On multi-language support, a C++ version of Dubbo exists, but it is used in very few places internally and has not been strongly validated; C++ development resources are tight, so there is no capacity to prepare a C++ open source release.

    Which open source license does Dubbo use? What should commercial users be aware of?

    Dubbo is released under the Apache License 2.0, a business-friendly license: you may use it free of charge in closed-source commercial software.

    You may modify and redistribute it, provided you retain Alibaba's copyright notice and keep the original license statement when redistributing.

    See: Dubbo's open source license

    Tell us about the Dubbo development team.

    Dubbo has six developers involved in development and testing. Every developer is experienced, the team collaborates smoothly, the development process has a steady rhythm, and there is a complete quality assurance process. The team:

    • 梁飛 (developer / product management)
    • 劉昊旻 (developer / process management)
    • 劉超 (developer / user support)
    • 李鼎 (developer / user support)
    • 陳雷 (developer / quality assurance)
    • 閭剛 (developer / open source operations)

    [Team photo] From left to right: 劉超, 梁飛, 閭剛, 陳雷, 劉昊旻, 李鼎

    See: Dubbo team members

    How can other developers get involved? What can they work on?

    Developers can fork the project on GitHub and push their changes back; after review and testing, we merge them into the trunk.

    GitHub: https://github.com/alibaba/dubbo

    Developers can claim small bug fixes on JIRA, or pick up larger feature modules from the developer guide page.

    JIRA: http://code.alibabatech.com/jira/browse/DUBBO (currently unavailable)

    Developer guide: http://alibaba.github.io/dubbo-doc-static/Developer+Guide-zh.htm

    What are Dubbo's future development plans?

    Dubbo's RPC framework is essentially stable; future work will focus on service governance, including architecture analysis, monitoring and statistics, degradation control, process collaboration, and so on.

    See: http://alibaba.github.io/dubbo-doc-static/Roadmap-zh.htm
    posted @ 2016-03-24 13:21 小馬歌
     