

Understanding Modern Service Discovery with Docker (translation)

_Zhao / 1023 reads


A rough translation. [Warning] Reading on may cause discomfort.

You can never step in the same puddle twice, so this article will be revised continuously.


Understanding Modern Service Discovery with Docker

Over the next few posts, I'm going to be exploring the concepts of service discovery in modern service-oriented architectures, specifically around Docker. Many people aren't familiar with service discovery, so I have to start from the beginning. In this post I'm going to be explaining the problem and providing some historical context around solutions so far in this domain.


Ultimately, we're trying to get Docker containers to easily communicate across hosts. This is seen by some as one of the next big challenges in the Docker ecosystem. Some are waiting for software-defined networking (SDN) to come and save the day. I'm also excited by SDN, but I believe that well executed service discovery is the right answer today, and will continue to be useful in a world with cheap and easy software networking.


What is service discovery?

Service discovery tools manage how processes and services in a cluster can find and talk to one another. It involves a directory of services, registering services in that directory, and then being able to lookup and connect to services in that directory.


At its core, service discovery is about knowing when any process in the cluster is listening on a TCP or UDP port, and being able to look up and connect to that port by name.

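That core idea can be sketched with a toy in-memory directory. The service names and addresses below are made up for illustration; a real system would back the table with a replicated, consistent store:

```python
import socket

# Hypothetical in-memory service directory: name -> (host, port).
# In practice this mapping would live in a highly available store.
directory = {
    "web":   ("10.0.1.5", 8080),
    "redis": ("10.0.1.7", 6379),
}

def lookup(name):
    """Resolve a service name to a concrete (host, port) endpoint."""
    return directory[name]

def connect(name, timeout=1.0):
    """Look up a service by name and open a TCP connection to it."""
    host, port = lookup(name)
    return socket.create_connection((host, port), timeout=timeout)

print(lookup("web"))  # ('10.0.1.5', 8080)
```

The point is that callers deal only in names; where the service actually listens, and on which port, is the directory's problem.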

Service discovery is a general idea, not specific to Docker, but is increasingly gaining mindshare in mainstream system architecture. Traditionally associated with zero-configuration networking, its more modern use can be summarized as facilitating connections to dynamic, sometimes ephemeral services.


This is particularly relevant today not just because of service-oriented architecture and microservices, but our increasingly dynamic compute environments to support these architectures. Already dynamic VM-based platforms like EC2 are slowly giving way to even more dynamic higher-level compute frameworks like Mesos. Docker is only contributing to this trend.

Name Resolution and DNS

You might think, "Looking up by name? Sounds like DNS." Yes, name resolution is a big part of service discovery, but DNS alone is insufficient for a number of reasons.


A key reason is that DNS was originally not optimized for closed systems with real-time changes in name resolution. You can get away with setting TTLs to 0 in a closed environment, but this also means you need to serve and manage your own internal DNS. What highly available DNS datastore will you use? What creates and destroys DNS records for your services? Are you prepared for the archaic world of DNS RFCs and server implementations?


Actually, one of the biggest drawbacks of DNS for service discovery is that DNS was designed for a world in which we used standard ports for our services. HTTP is on port 80, SSH is on port 22, and so on. In that world, all you need is the IP of the host for the service, which is what an A record gives you. Today, even with private NATs and in some cases with IPv6, our services will listen on completely non-standard, sometimes random ports. Especially with Docker, we have many applications running on the same host.


You may be familiar with SRV records, or "service" records, which were designed to address this problem by providing the port as well as the IP in query responses. At least in terms of a data model, this brings DNS closer to addressing modern service discovery.

Unfortunately, SRV records alone are basically dead on arrival. Have you ever used a library or API to create a socket connection that didn't ask for the port? Where do you tell it to do an SRV record lookup? You don't. You can't. It's too late. Either software explicitly supports SRV records, or DNS is effectively just a tool for resolving names to host IPs.
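The dead-on-arrival point can be made concrete: standard socket APIs demand an explicit port, so any SRV lookup has to be bolted on by the application itself. A sketch, with a hypothetical in-memory table standing in for real SRV responses:

```python
import socket

# Stand-in for SRV-style DNS answers: service name -> (host, port).
# A real SRV query (e.g. for _api._tcp.example.internal) would return
# both pieces; this table just mimics such a response.
SRV_RECORDS = {
    "_api._tcp.example.internal": ("10.0.2.9", 31337),
}

def connect_srv(service):
    """Resolve host AND port from an SRV-style record, then connect.
    Ordinary code never gets here automatically: create_connection
    requires an explicit (host, port) pair, so SRV support is always
    an explicit opt-in by the application."""
    host, port = SRV_RECORDS[service]
    return socket.create_connection((host, port))

# The standard path leaves nowhere to hang an SRV lookup:
# socket.create_connection(("example.internal", ???))  # port is required
```

Unless software ships with something like `connect_srv` built in, the port in the SRV record never reaches the socket call.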

Despite all this, DNS is still a marvel of engineering, and even SRV records will be useful to us yet. But for all these reasons, on top of the demands of building distributed systems, most large tech companies went down a different path.

Rise of the Lock Service

In 2006, Google released a paper describing Chubby, their distributed lock service. It implemented distributed consensus based on Paxos to provide a consistent, partition-tolerant (CP in CAP theorem) key-value store that could be used for coordinating leader elections, resource locking, and reliable low-volume storage. They began to use this for internal name resolution instead of DNS.


Eventually, the paper inspired an open source equivalent of Chubby called Zookeeper that spun out of the Hadoop Apache project. This became the de facto standard lock server in the open source world, mainly because there were no alternatives with the same properties of high availability and reliability over performance. The Paxos consensus algorithm was also non-trivial to implement.


Zookeeper provides similar semantics as Chubby for coordinating distributed systems, and being a consistent and highly available key-value store makes it an ideal cluster configuration store and directory of services. It's become a dependency to many major projects that require distributed coordination, including Hadoop, Storm, Mesos, Kafka, and others. Not surprisingly, it's used mostly in other Apache projects, often deployed in larger tech companies. It is quite heavyweight and not terribly accessible to "everyday" developers.


About a year ago, a simpler alternative to the Paxos algorithm was published called Raft. This set the stage for a real Zookeeper alternative and, sure enough, etcd was soon introduced by CoreOS. Besides being based on a simpler consensus algorithm, etcd is overall simpler. It's written in Go and lets you use HTTP to interact with it. I was extremely excited by etcd and used it in the initial architecture for Flynn.
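As a rough illustration of that HTTP interface, here is a sketch that builds a service-registration request against etcd's v2 key-value API. It only constructs the request; the endpoint address, the `/services/` key layout, and the TTL value are assumptions for this example, not anything from the article:

```python
from urllib.parse import urlencode

# Assumed local etcd endpoint (4001 was a common default at the time).
ETCD = "http://127.0.0.1:4001"

def register_request(name, host, port, ttl=60):
    """Build the PUT request that would register a service endpoint
    under /v2/keys, expiring after `ttl` seconds unless refreshed.
    The key layout under /services/ is an illustrative convention."""
    url = f"{ETCD}/v2/keys/services/{name}"
    body = urlencode({"value": f"{host}:{port}", "ttl": ttl})
    return url, body

url, body = register_request("web", "10.0.1.5", 8080)
print(url)   # http://127.0.0.1:4001/v2/keys/services/web
print(body)  # value=10.0.1.5%3A8080&ttl=60
```

A registering service would PUT this periodically so the TTL keeps getting refreshed, which is exactly the heartbeat pattern the lock-service lineage makes possible.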


Today there's also Consul by Hashicorp, which builds on the ideas of etcd. I specifically explore Consul and lock servers more in my next post.


Service Discovery Solutions

Both Consul and etcd advertise themselves as service discovery solutions. Unfortunately, that's not entirely true. They're great service directories. But this is just part of a service discovery solution. So what's missing?

We're missing exactly how to get all our software, whether custom services or off-the-shelf software, to integrate with and use the service directory. This is particularly interesting to the Docker community, which ideally has portable solutions for anything that can run in a container.

A comprehensive solution to service discovery will have three legs:

A consistent (ideally), highly available service directory

A mechanism to register services and monitor service health

A mechanism to lookup and connect to services

We've got good technology for the first leg, but the remaining legs, despite how they sound, aren't exactly trivial. Especially when ideally you want them to be automatic and "non-invasive." In other words, they work with non-cooperating software, not designed for a service discovery system. Luckily, Docker has both increased the demand for these properties and makes them easier to solve.
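The three legs can be sketched together in a single toy process, with re-registration doubling as a crude TTL-based health check. Everything here is illustrative; leg one would really be a consistent store like Zookeeper, etcd, or Consul:

```python
import time

class Directory:
    """Leg 1: the service directory, here just a dict in one process."""

    def __init__(self, ttl=10.0):
        self.ttl = ttl
        self.entries = {}  # name -> (host, port, last_seen)

    # Leg 2: registration plus crude health monitoring. A service must
    # re-register within `ttl` seconds or its entry is considered dead.
    def register(self, name, host, port, now=None):
        self.entries[name] = (host, port, now if now is not None else time.time())

    # Leg 3: lookup by name, refusing registrations whose TTL lapsed.
    def lookup(self, name, now=None):
        host, port, seen = self.entries[name]
        if (now if now is not None else time.time()) - seen > self.ttl:
            raise KeyError(f"{name}: registration expired")
        return host, port

d = Directory(ttl=10.0)
d.register("web", "10.0.1.5", 8080, now=100.0)
print(d.lookup("web", now=105.0))  # ('10.0.1.5', 8080)
```

The hard parts the following posts deal with are making legs two and three automatic for software that has never heard of the directory.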

In a world where you have lots of services coming and going across many hosts, service discovery is extremely valuable, if not necessary. Even in smaller systems, a solid service discovery system should reduce the effort in configuring and connecting services together to nearly nothing. Adding the responsibility of service discovery to configuration management tools, or using a centralized message queue for everything, are all-too-common alternatives that we know just don't scale.

My goal with these posts is to help you understand and arrive at a good idea of what a service discovery system should actually encompass. The next few posts will take a deeper look at each of the above mentioned legs, touching on various approaches, and ultimately explaining what I ended up doing for my soon-to-be-released project, Consulate.

Copyright belongs to the author. Do not reproduce without permission. If this article violates any rules, you may contact an administrator to have it removed.

When reproducing, please credit the source: http://specialneedsforspecialkids.com/yun/26365.html

