Jenkins node (agent) on a shared web server - what are the correct permissions?



I have a shared web server hosting multiple sites, each with a dedicated user as the owner and apache as the group.

I want to install a Jenkins node on the server, but how would it be able to change files and run commands like git pull? I think using runuser would miss the point.

The scripts on that node will basically run git pull, drush commands (they are Drupal sites), rsync from a remote server, and so on. I also need to run chmod and chown at the end, but maybe for those commands I'll just use the sudoers file (?).
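For the final chmod/chown step, a narrowly scoped sudoers entry is indeed a reasonable fit. A hypothetical sketch (the file name, user name, and script path are placeholders, not from the question); note that bare wildcards in sudoers command arguments are easy to abuse, so delegating to a single audited wrapper script is generally safer than whitelisting chmod/chown with wildcard arguments:

```
# /etc/sudoers.d/jenkins-deploy -- edit with: visudo -f /etc/sudoers.d/jenkins-deploy
# Let the jenkins user run one fixed wrapper script as root, and nothing else.
# The wrapper itself validates its arguments and only touches the web roots.
jenkins ALL=(root) NOPASSWD: /usr/local/bin/fix-site-perms
</antml>```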

Thanks.


1 Answer:

A less polished, but very simple and effective solution without security implications (no credential exchange or authentication needed) would be a file-based "message" exchange scheme built around a "mailbox" - a well-known filesystem location, set up once and owned by root, with 2 directories:

  • one owned by the jenkins user, in which it creates request files containing the deployment request information, one per site
  • one owned by the apache group, in which each site's dedicated user creates its own response files, containing the handling information for the deployment requests targeting its site

When the jenkins processing reaches the deployment stage for a particular site, it creates the corresponding request file with the necessary information inside.

Each site user periodically (e.g. cron-driven) checks for request files related to its site, handles the requests according to its own site policies, and provides status updates in its respective response files, which the jenkins user periodically checks.

When the request handling is complete, the jenkins user deletes the request file, indicating it received the "message"; the site user's periodic job can then delete the corresponding response file.

The request/response file names can be used to encode the particular site and the request identification, so the periodic checks don't have to peek inside multiple files.
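For instance, the filename encoding/decoding could be as simple as this sketch (hypothetical helper names; the full example further down uses the same `username.id.yaml` convention):

```python
import re

FORMAT = '%s.%s.yaml'  # username.request_id.yaml
PATTERN = re.compile(r'^(?P<user>[^.]+)\.(?P<request_id>[^.]+)\.yaml$')

def encode(user, request_id):
    """Build the message filename for a given site user and request."""
    return FORMAT % (user, request_id)

def decode(filename):
    """Return (user, request_id), or None if not a valid message filename."""
    m = PATTERN.match(filename)
    return (m.group('user'), m.group('request_id')) if m else None
```

This lets a site user's periodic job select its own requests by filename alone, e.g. `decode('dancorn.20.yaml')` returns `('dancorn', '20')` while unrelated files yield `None`.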

The scheme can easily work across machines as well (e.g. if some sites are migrated to other servers), simply by placing the "mailbox" on a shared filesystem accessible from all the machines involved.

OK, an example as requested. Just a basic skeleton, in Python, hopefully self-documenting.

Prerequisites:

sudo mkdir -p /var/message_box/requests
sudo chown jenkins /var/message_box/requests
sudo chmod go-w /var/message_box/requests
sudo mkdir /var/message_box/responses
sudo chgrp apache /var/message_box/responses
sudo chmod g+w /var/message_box/responses

The mailbox.py file:

#!/usr/bin/python2.7 -u

import logging, os, re, getpass, sys, time, yaml

class Mailbox(object):

    base_dir = '/var/message_box'
    request_filename_format = '%s.%s.yaml'  # username.id.yaml

    def __init__(self):
        pass

    @property
    def request_dir(self):
        return os.path.join(self.base_dir, 'requests')

    @property
    def response_dir(self):
        return os.path.join(self.base_dir, 'responses')

    def msg_filename(self, user, request_id):
        return self.request_filename_format % (user, request_id)

    def request_file(self, user, request_id):
        return os.path.join(self.request_dir, self.msg_filename(user, request_id))

    def response_file(self, user, request_id):
        return os.path.join(self.response_dir, self.msg_filename(user, request_id))

    def create_msg_file(self, user, request_id, data, is_response=False):
        assert user and request_id and data and isinstance(data, dict)
        msg_file = self.response_file(user, request_id) if is_response else \
                   self.request_file(user, request_id)
        with open(msg_file, 'w') as fd:
            fd.write(yaml.dump(data))

    def msg_file_data(self, user, request_id, is_response=False):
        msg_file = self.response_file(user, request_id) if is_response else \
                   self.request_file(user, request_id)
        if os.path.exists(msg_file):
            with open(msg_file) as fd:
                data = yaml.safe_load(fd)  # safe_load: never execute arbitrary YAML tags
            if data and isinstance(data, dict):  # expected data format
                return data
        return None

    def create_request(self, user, request_id, data):
        self.create_msg_file(user, request_id, data)
        logging.info('created request %s for %s' % (request_id, user))

    def create_response(self, request_id, status, response_data=None):
        assert status
        user = getpass.getuser()
        self.create_msg_file(user, request_id, {'status': status, 'data': response_data}, is_response=True)
        logging.info('created response %s with status %s for %s' % (request_id, status, user))


    def handle_requests(self):
        user = getpass.getuser()
        while True:  # keep handling requests indefinitely
            time.sleep(1)  # new request polling rate, in seconds
            for filename in os.listdir(self.request_dir):
                m = re.match(r'(.*)\.(.*)\.yaml$', filename)
                if not m:  # not a valid request filename
                    continue
                [username, request_id] = m.groups()
                if username != user:  # not a request for this user
                    continue
                if os.path.exists(self.response_file(user, request_id)):
                    # request handling already started
                    # you may add here recovery code for request handling interrupted for whatever reason
                    continue
                msg_data = self.msg_file_data(user, request_id)
                if not msg_data:  # unexpected data format
                    continue

                logging.info('received request %s: %s' % (request_id, msg_data))

                # mark the request handling start
                self.create_response(request_id, 'in_progress')

                time.sleep(5)  # mock-up, replace with whatever request handling means

                # mark the request handling done
                self.create_response(request_id, 'done')  # you can add response data to the dict if needed

                logging.info('handled request %s, waiting for confirmation' % request_id)

                while True:  # wait for confirmation receipt before cleaning up
                    time.sleep(1)  # confirmation receipt polling rate, in seconds
                    if not os.path.exists(self.request_file(user, request_id)):
                        # the deletion of the request file is the confirmation receipt
                        logging.info('confirmation for request %s received, cleaning up' % request_id)
                        os.unlink(self.response_file(user, request_id))  # cleanup response file
                        break


    def execute_deployment(self, deployment_id, deployment_user, deployment_data):

        self.create_request(deployment_user, deployment_id, deployment_data)
        started = False
        while True:  # wait until it's done
            time.sleep(1)  # polling rate, in seconds
            msg_data = self.msg_file_data(deployment_user, deployment_id, is_response=True)
            if msg_data:
                status = msg_data.get('status')
                if status:
                    if not started:
                        logging.info('request %s handling started' % deployment_id)
                        started = True
                    if status == 'done':  # job completed
                        logging.info('request %s handling completed, cleaning up' % deployment_id)
                        # cleanup request file, which is the confirmation receipt
                        os.unlink(self.request_file(deployment_user, deployment_id))
                        break

def usage(err_msg, option_parser):
    if err_msg:
        logging.error('%s\n\n%s\n' % (err_msg, option_parser.format_help()))
    sys.exit(-1)


if __name__ == "__main__":
    import optparse

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)5s  %(asctime)s %(filename)s:%(lineno)d] %(message)s")
    os.umask(022)

    p = optparse.OptionParser()
    p.add_option('-c', '--command', action='store', dest='command', choices=['deploy', 'handler'],
                 help='command/mode, mandatory', default=None)
    p.add_option('-i', '--ID', action='store', dest='id',
                 help='deployment ID, mandatory for deploy command', default=None)
    p.add_option('-u', '--user', action='store', dest='user',
                 help='deployment user, mandatory for deploy command', default=None)
    p.add_option('-a', '--artifact', action='store', dest='artifact',
                 help='deployment artifact, mandatory for deploy command', default=None)

    opts, _ = p.parse_args()

    if not opts.command:
        usage('command is mandatory', p)

    mailbox = Mailbox()

    if opts.command == 'deploy':
        if not opts.id or not opts.user or not opts.artifact:
            usage('ID, user and artifact must be specified for deploy command', p)
        data = {'artifact': opts.artifact}
        mailbox.execute_deployment(opts.id, opts.user, data)

    elif opts.command == 'handler':
        mailbox.handle_requests()

The jenkins user drives the deployments; the deployment info is hacked down to a single string for this example - 'artifact':

$ ./mailbox.py -c deploy -i 20 -u dancorn -a artifact
 INFO  2017-10-17 13:32:34,663 mailbox.py:49] created request 20 for dancorn
 INFO  2017-10-17 13:32:35,666 mailbox.py:109] request 20 handling started
 INFO  2017-10-17 13:32:40,678 mailbox.py:112] request 20 handling completed, cleaning up
$ ./mailbox.py -c deploy -i 123 -u dancorn -a artifact
 INFO  2017-10-17 13:33:32,359 mailbox.py:49] created request 123 for dancorn
 INFO  2017-10-17 13:33:33,362 mailbox.py:109] request 123 handling started
 INFO  2017-10-17 13:33:38,375 mailbox.py:112] request 123 handling completed, cleaning up
$

The apache-group user starts the handler, which in this example keeps running (it could be converted into a daemon or a cron-driven approach):

$ ./mailbox.py -c handler
 INFO  2017-10-17 13:32:34,819 mailbox.py:77] received request 20: {'artifact': 'artifact'}
 INFO  2017-10-17 13:32:34,821 mailbox.py:55] created response 20 with status in_progress for dancorn
 INFO  2017-10-17 13:32:39,827 mailbox.py:55] created response 20 with status done for dancorn
 INFO  2017-10-17 13:32:39,827 mailbox.py:87] handled request 20, waiting for confirmation
 INFO  2017-10-17 13:32:40,828 mailbox.py:93] confirmation for request 20 received, cleaning up
 INFO  2017-10-17 13:33:32,888 mailbox.py:77] received request 123: {'artifact': 'artifact'}
 INFO  2017-10-17 13:33:32,889 mailbox.py:55] created response 123 with status in_progress for dancorn
 INFO  2017-10-17 13:33:37,891 mailbox.py:55] created response 123 with status done for dancorn
 INFO  2017-10-17 13:33:37,891 mailbox.py:87] handled request 123, waiting for confirmation
 INFO  2017-10-17 13:33:38,893 mailbox.py:93] confirmation for request 123 received, cleaning up
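For the cron-driven variant mentioned above, a hypothetical crontab entry for a site's dedicated user could look like this (the lock file and script path are placeholders); `flock -n` ensures at most one handler instance runs, so the handler's infinite polling loop is harmless even when the job is re-launched every minute:

```
# crontab -e, run as the site's dedicated user
* * * * * /usr/bin/flock -n /tmp/mailbox-handler.lock /opt/deploy/mailbox.py -c handler
</antml>```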