Solving Bitbucket Pipelines Memory Issues
Since our team is using Bitbucket Pipelines more and more, one issue we run into more often is Bitbucket Pipelines failing with the error Container 'docker' exceeded memory limit. In this article I will show you how to fix this issue. The scope of this article is solving memory issues while using the Docker service container within Bitbucket Pipelines, but specific parts can definitely be applied even if you are not using this particular service container.
[Image: the 'Container docker exceeded memory limit' error in Bitbucket Pipelines]

TL;DR: use size (per step, or globally under options) and definitions.services.docker.memory in your bitbucket-pipelines.yml to increase the memory limit in your pipeline and service container.
...
definitions:
  services:
    docker:
      memory: 7128

pipelines:
  branches:
    master:
      - step:
          size: 2x
          name: 'PIPELINE NAME'
          script:
            - ...
...

We are using Bitbucket Pipelines for several tasks: automated tests, linting code, versioning (increasing version numbers and creating/updating release branches), and building and deploying artifacts. The latter can be, for example:
- Building Docker images (Python) and uploading them to AWS ECR, then triggering a 'force new deployment' of a service on ECS.
- Building Python libraries and publishing them to our private PyPI registry.
- Building Angular applications and uploading the build files to S3, then triggering a CloudFront invalidation.

Building Docker containers within Bitbucket Pipelines is what causes our problems most of the time. We are a data science lab and are no strangers to large machine learning and deep learning packages. Sometimes installing these packages within a Docker container can take ages, and there is nothing more frustrating than waiting a long time for something that will eventually fail.
Let us assume the following: we have a bitbucket-pipelines.yml with the content below. This file defines the actions taken in the pipeline. We also have a Dockerfile in the root of our repository containing a process that requires a lot of memory; that Dockerfile will actually be built in the pipeline.
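For illustration, such a Dockerfile could look like the sketch below. The base image, the requirements file and the entrypoint are placeholders, not our actual setup; the point is simply that installing large data science packages during docker build is where the memory pressure tends to come from.

# Hypothetical memory-hungry Dockerfile; all names are placeholders.
FROM python:3.8-slim

WORKDIR /app

# Installing heavy ML/DS packages (tensorflow, fbprophet, pytorch, ...)
# at build time is what tends to exhaust the Docker service
# container's memory in the pipeline.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "main.py"]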
bitbucket-pipelines.yml
pipelines:
  branches:
    master:
      - step:
          name: 'Deploy'
          script:
            - curl -O https://bootstrap.pypa.io/get-pip.py && python get-pip.py && pip install awscli
            - docker build -t $PROJECT_NAME .
            - docker tag $PROJECT_NAME:latest $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME:latest
            - docker push $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME
          services:
            - docker

What this pipeline does:
- Install pip
- Install awscli to communicate with AWS (we have set repository variables containing AWS credentials)
- Build the image
- Tag the image
- Push the image to AWS ECR

The services: - docker lines allow us to use Docker in this pipeline.
Note: we have actually built our own image containing all the requirements we need in our pipelines (awscli, linting tools et cetera), so the first line in our script (curl -O ...) is actually phased out in our current setup.
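As a sketch of that approach: a prebuilt image can be set as the build environment with the top-level image key, so the script no longer has to install tooling on every run. The image name below is a placeholder, not our actual registry path.

# Hypothetical prebuilt pipeline image with awscli, linting tools
# et cetera already installed; the name is a placeholder.
image: your-registry/pipeline-tools:latest

pipelines:
  branches:
    master:
      - step:
          name: 'Deploy'
          script:
            # No curl/get-pip bootstrap needed; awscli is baked in.
            - docker build -t $PROJECT_NAME .
          services:
            - docker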
The Docker-in-Docker daemon used for Docker operations in Pipelines is treated as a service container, and so has a default memory limit of 1024 MB. This can also be adjusted to any value between 128 MB and 3072/7128 MB by changing the memory setting on the built-in docker service in the definitions section.
Source: support.atlassian.com
The default available memory per step is 4096 MB. As you can see in the quote above, Docker is treated as a service container, which limits this service container to 1024 MB by default. The first thing we can do is try to increase this limit to 3072 MB. This is the remaining memory you can use within the pipeline; the other 1024 MB is already reserved for overhead. If you are not using a service container, you can skip this part, since you will already have the maximum resources available in your step.
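Before raising any limits, it can help to verify how much memory the build actually uses. A minimal sketch, assuming the step runs on a Linux build container with cgroup v1 (the paths may differ in your environment): start a background loop at the top of the script that prints current memory usage while the rest of the build runs.

script:
  # Hedged debugging sketch: print the container's memory usage in MB
  # every 30 seconds in the background (assumes cgroup v1 paths).
  - while true; do echo "Memory usage in MB" $(($(cat /sys/fs/cgroup/memory/memory.usage_in_bytes) / 1048576)); sleep 30; done &
  - docker build -t $PROJECT_NAME .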
Increasing the memory without tuning the size value will not affect the build minutes. You can increase the memory limit of the Docker service container with the following addition to the existing bitbucket-pipelines.yml file.
definitions:
  services:
    docker:
      memory: 3072

pipelines:
  branches:
    master:
      - step:
          name: 'Deploy'
          script:
            - curl -O https://bootstrap.pypa.io/get-pip.py && python get-pip.py && pip install awscli
            - docker build -t $PROJECT_NAME .
            - docker tag $PROJECT_NAME:latest $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME:latest
            - docker push $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME
          services:
            - docker

When tweaking the memory of the Docker service container is not enough and your pipelines still fail due to memory issues, you can double the resources within your pipeline (to a total of 8192 MB) with size. Please note that this will actually double the build minutes, so do not increase the size by default. You might want to look into other ways to reduce the required memory within your pipelines first.
You can set size for the whole pipeline, or you can set it per step. If you have a lot of steps in your pipeline and just one step failing due to memory issues, I recommend doubling the resources only for the step that requires them, so you will only be charged twice the build minutes for that particular step. If you double the resources for the whole pipeline, you will be charged twice the build minutes for the whole pipeline, and that may not be necessary. Below is an example of increasing the memory for a single step.
definitions:
  services:
    docker:
      memory: 7128

pipelines:
  branches:
    master:
      - step:
          size: 2x
          name: 'Deploy'
          script:
            - curl -O https://bootstrap.pypa.io/get-pip.py && python get-pip.py && pip install awscli
            - docker build -t $PROJECT_NAME .
            - docker tag $PROJECT_NAME:latest $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME:latest
            - docker push $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME
          services:
            - docker

As you can see, we have added a new property within the step, called size, and gave it the value 2x. You can choose between 1x and 2x for now, where 1x is the default. You may also have noticed that I have increased the memory of definitions.services.docker to 7128 MB (8192 MB total - 1024 MB reserved). Like I mentioned before, if you are not using a service container, setting size to 2x is enough to double the resources in a step. Also note that if you are using the Docker service container in another step, you need to increase the size of that step too.
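To illustrate that last point, here is a hedged sketch with two steps that both use the Docker service; both need size: 2x, because the 7128 MB service limit only fits inside a 2x step. The step names and the second step's script are made up for illustration (details such as registry authentication are elided).

definitions:
  services:
    docker:
      memory: 7128

pipelines:
  branches:
    master:
      - step:
          size: 2x
          name: 'Build and push image'   # hypothetical step name
          script:
            - docker build -t $PROJECT_NAME .
            - docker push $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME
          services:
            - docker
      - step:
          size: 2x                       # also required here, not just on the first step
          name: 'Smoke test image'       # hypothetical step name
          script:
            - docker run --rm $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME python -c 'print("ok")'
          services:
            - docker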
If you want to increase the resources for your whole pipeline, you can simply do the following. Like I said before, you will be charged twice the build minutes for the whole pipeline.
options:
  size: 2x

definitions:
  services:
    docker:
      memory: 7128

pipelines:
  branches:
    master:
      - step:
          name: 'Deploy'
          script:
            - curl -O https://bootstrap.pypa.io/get-pip.py && python get-pip.py && pip install awscli
            - docker build -t $PROJECT_NAME .
            - docker tag $PROJECT_NAME:latest $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME:latest
            - docker push $ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/$PROJECT_NAME
          services:
            - docker

When you are using a lot of images that require a specific package, like tensorflow, fbprophet or pytorch, it may also be interesting to create base images that already contain specific combinations, like version X of Python in combination with version X of fbprophet. If you do this, you will not have to install these packages over and over again in the pipeline of a specific project, which will decrease the build minutes for that project. Remember to update these base images when needed, and to update every Dockerfile that uses the base image.
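A minimal sketch of that idea; the versions and registry path below are placeholders, not our actual setup.

# Hypothetical base image, built once and pushed to your own registry
# (e.g. your-registry/python-fbprophet:3.8-0.7). In practice fbprophet
# also needs build dependencies (a compiler, pystan) installed first;
# that is elided here for brevity.
FROM python:3.8-slim
RUN pip install --no-cache-dir fbprophet==0.7.1

# A project Dockerfile then starts from that base image and only
# installs the project itself, skipping the expensive package build:
# FROM your-registry/python-fbprophet:3.8-0.7
# COPY . /app
# RUN pip install --no-cache-dir /app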
If you have any questions, suggestions or feedback regarding this article, please let me know!
Translated from: https://levelup.gitconnected.com/solving-bitbucket-pipelines-memory-issues-62c5a236ef96