How to use the monitoring_data method in yandex-tank

Best Python code snippets using yandex-tank
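The snippets below all build a plain monitoring_data dictionary and hand it off to a sink (PostgreSQL, SQLite, Logstash). Inside yandex-tank itself, monitoring points are instead delivered to listener objects. Here is a minimal sketch of that hook, assuming the MonitoringDataListener interface from yandextank.common.interfaces (verify the import path against your yandex-tank version):

# Assumption: yandextank.common.interfaces exposes MonitoringDataListener,
# whose monitoring_data() hook receives batches of monitoring points.
from yandextank.common.interfaces import MonitoringDataListener


class PrintingListener(MonitoringDataListener):
    """Toy listener that dumps every monitoring point to stdout."""

    def monitoring_data(self, data_list):
        # data_list is a list of dicts produced by the monitoring agent
        for point in data_list:
            print(point)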

monitor.py

Source: monitor.py Github


#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from time import sleep
from datetime import datetime

import psycopg2
import argparse

from metrics import *

parser = argparse.ArgumentParser(description='node monitoring script')
parser.add_argument('-n', '--host', type=str, default='127.0.0.1', help='hostname of the database service')
parser.add_argument('-p', '--port', type=int, default=5432, help='port behind which the database is listening')
parser.add_argument('-U', '--user', type=str, required=False, default='postgres', help='database user')
parser.add_argument('-P', '--password', type=str, required=True, help='database password')
parser.add_argument('-t', '--table', type=str, required=True, help='name of database table to insert data into')
parser.add_argument('-d', '--duration', type=int, required=False, help='how long to continue running (in seconds, default: indefinitely)')
parser.add_argument('-i', '--interval', type=float, required=False, default=1, help='how frequently to poll for data and insert into database (in seconds)')
args = parser.parse_args()


def indefinitely():
    while True:
        yield


def main():
    """
    Monitors sensors and parameters from a Linux machine, mainly with
    psutil (https://github.com/giampaolo/psutil).
    """
    conn = None
    cursor = None
    try:
        conn = psycopg2.connect(user=args.user,
                                password=args.password,
                                host=args.host,
                                port=args.port)
        cursor = conn.cursor()
        duration = range(args.duration) if args.duration else indefinitely()
        for _ in duration:
            memory = get_memory()
            swap = get_swap()
            cpu = get_cpu()
            cpu_load = get_cpu_load(percpu=True, percent=True)
            cpu_freq = get_cpu_freq(percpu=True)
            cpu_temp = get_temperature()
            disk = get_disk()
            network = get_network()
            entropy = get_entropy()
            up_time = get_uptime()
            pressure_stall = get_pressure_stall()
            monitoring_data = {
                'time': str(datetime.utcnow()),
                **memory,
                **swap,
                **cpu,
                **cpu_load,
                **cpu_freq,
                **cpu_temp,
                **disk,
                **network,
                'entropy': entropy,
                **up_time,
                **pressure_stall,
            }
            # Sample of the pressure-stall part of monitoring_data:
            # {'psi_cpu_avg10': 24.18, 'psi_cpu_avg60': 26.22, 'psi_cpu_avg300': 23.26, 'psi_cpu_total': 272216609, 'psi_io_avg10': 0.0, 'psi_io_avg60': 0.14, 'psi_io_avg300': 2.05, 'psi_io_total': 154722139, 'psi_memory_avg10': 0.0, 'psi_memory_avg60': 0.0, 'psi_memory_avg300': 0.0, 'psi_memory_total': 0}
            insert_query = f"""INSERT INTO {args.table}
                (
                    time,
                    memory_used, memory_free, memory_available,
                    swap_used, swap_free,
                    ctx_switches, interrupts, cpu_soft_interrupts,
                    cpu0_user, cpu0_nice, cpu0_system, cpu0_idle, cpu0_iowait, cpu0_irq, cpu0_softirq, cpu0_steal,
                    cpu1_user, cpu1_nice, cpu1_system, cpu1_idle, cpu1_iowait, cpu1_irq, cpu1_softirq, cpu1_steal,
                    cpu2_user, cpu2_nice, cpu2_system, cpu2_idle, cpu2_iowait, cpu2_irq, cpu2_softirq, cpu2_steal,
                    cpu3_user, cpu3_nice, cpu3_system, cpu3_idle, cpu3_iowait, cpu3_irq, cpu3_softirq, cpu3_steal,
                    cpu0_freq,
                    cpu_temp,
                    disk_total, disk_free, disk_used,
                    disk_read_bytes, disk_read_count, disk_read_merged_count, disk_read_time,
                    disk_write_bytes, disk_write_count, disk_write_merged_count, disk_write_time,
                    bytes_recv, bytes_sent, dropin, dropout, errin, errout, packets_recv, packets_sent,
                    entropy,
                    up_time,
                    psi_cpu_avg10, psi_cpu_avg60, psi_cpu_avg300, psi_cpu_total,
                    psi_io_avg10, psi_io_avg60, psi_io_avg300, psi_io_total,
                    psi_memory_avg10, psi_memory_avg60, psi_memory_avg300, psi_memory_total
                )
                VALUES
                (
                    '{monitoring_data['time']}',
                    {monitoring_data['memory_used']}, {monitoring_data['memory_free']}, {monitoring_data['memory_available']},
                    {monitoring_data['swap_used']}, {monitoring_data['swap_free']},
                    {monitoring_data['ctx_switches']}, {monitoring_data['interrupts']}, {monitoring_data['cpu_soft_interrupts']},
                    {monitoring_data['cpu0_user']}, {monitoring_data['cpu0_nice']}, {monitoring_data['cpu0_system']}, {monitoring_data['cpu0_idle']}, {monitoring_data['cpu0_iowait']}, {monitoring_data['cpu0_irq']}, {monitoring_data['cpu0_softirq']}, {monitoring_data['cpu0_steal']},
                    {monitoring_data['cpu1_user']}, {monitoring_data['cpu1_nice']}, {monitoring_data['cpu1_system']}, {monitoring_data['cpu1_idle']}, {monitoring_data['cpu1_iowait']}, {monitoring_data['cpu1_irq']}, {monitoring_data['cpu1_softirq']}, {monitoring_data['cpu1_steal']},
                    {monitoring_data['cpu2_user']}, {monitoring_data['cpu2_nice']}, {monitoring_data['cpu2_system']}, {monitoring_data['cpu2_idle']}, {monitoring_data['cpu2_iowait']}, {monitoring_data['cpu2_irq']}, {monitoring_data['cpu2_softirq']}, {monitoring_data['cpu2_steal']},
                    {monitoring_data['cpu3_user']}, {monitoring_data['cpu3_nice']}, {monitoring_data['cpu3_system']}, {monitoring_data['cpu3_idle']}, {monitoring_data['cpu3_iowait']}, {monitoring_data['cpu3_irq']}, {monitoring_data['cpu3_softirq']}, {monitoring_data['cpu3_steal']},
                    {monitoring_data['cpu0_freq']},
                    {monitoring_data['cpu_temp']},
                    {monitoring_data['disk_total']}, {monitoring_data['disk_free']}, {monitoring_data['disk_used']},
                    {monitoring_data['disk_read_bytes']}, {monitoring_data['disk_read_count']}, {monitoring_data['disk_read_merged_count']}, {monitoring_data['disk_read_time']},
                    {monitoring_data['disk_write_bytes']}, {monitoring_data['disk_write_count']}, {monitoring_data['disk_write_merged_count']}, {monitoring_data['disk_write_time']},
                    {monitoring_data['bytes_recv']}, {monitoring_data['bytes_sent']}, {monitoring_data['dropin']}, {monitoring_data['dropout']}, {monitoring_data['errin']}, {monitoring_data['errout']}, {monitoring_data['packets_recv']}, {monitoring_data['packets_sent']},
                    {monitoring_data['entropy']},
                    {monitoring_data['up_time']},
                    {monitoring_data['psi_cpu_avg10']}, {monitoring_data['psi_cpu_avg60']}, {monitoring_data['psi_cpu_avg300']}, {monitoring_data['psi_cpu_total']},
                    {monitoring_data['psi_io_avg10']}, {monitoring_data['psi_io_avg60']}, {monitoring_data['psi_io_avg300']}, {monitoring_data['psi_io_total']},
                    {monitoring_data['psi_memory_avg10']}, {monitoring_data['psi_memory_avg60']}, {monitoring_data['psi_memory_avg300']}, {monitoring_data['psi_memory_total']}
                )"""
            cursor.execute(insert_query)
            conn.commit()
            sleep(args.interval)
    except (Exception, psycopg2.Error) as error:
        print(error)
        return
    finally:
        if cursor is not None:
            cursor.close()
        if conn is not None:
            conn.close()


if __name__ == "__main__":
    main()
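The query above splices every value into the SQL text with an f-string, which breaks on quotes and NULLs and is an injection risk. A safer sketch, assuming the same flat monitoring_data dict, lets psycopg2 bind the values instead (the table name cannot be bound as a parameter, so it must remain a trusted identifier):

import psycopg2


def insert_sample(conn, table, sample):
    """Insert one monitoring_data sample using bound parameters.

    `sample` is a flat dict like monitoring_data above; `table` is assumed
    to be a trusted identifier (argparse input, not user-facing).
    """
    columns = sorted(sample)
    placeholders = ', '.join(['%s'] * len(columns))
    query = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    with conn.cursor() as cursor:
        # psycopg2 quotes and escapes each bound value, including None -> NULL
        cursor.execute(query, [sample[c] for c in columns])
    conn.commit()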


monitoring.py

Source: monitoring.py Github


import datetime
import time
import logging
import sys
import argparse
import json
import traceback
import os
from collections import namedtuple
from concurrent.futures import ThreadPoolExecutor

import requests
from xlrd import open_workbook
from sqlalchemy.sql import exists
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Table, Column, Integer, String, DateTime, Float
from sqlalchemy import create_engine, MetaData
from sqlalchemy.orm import sessionmaker

import settings

logger = logging.getLogger(__name__)
logging.basicConfig(filename=settings.PATH_TO_LOG_FILE, level=logging.DEBUG)

MonitoringData = namedtuple('MonitoringData', ['url', 'label', 'fetch'])
Base = declarative_base()
metadata = MetaData()
executor = ThreadPoolExecutor(max_workers=settings.COUNT_THREAD)


class Monitoring(Base):
    __tablename__ = 'monitoring'
    ts = Column(DateTime, default=datetime.datetime.utcnow)
    url = Column(String(250), primary_key=True)
    label = Column(String(250))
    response_time = Column(Float)
    status_code = Column(Integer, default=None)
    content_lenght = Column(Integer, default=None)

    def __repr__(self):
        print_data = self.url, self.label, str(self.status_code)
        return "<Monitoring('%s', '%s', '%s')>" % print_data


def create_table():
    if settings.DROP_ALL_DB:
        if os.path.exists(settings.PATH_TO_DB_FILE):
            os.remove(settings.PATH_TO_DB_FILE)
    engine = create_engine('sqlite:///' + settings.PATH_TO_DB_FILE)
    Base.metadata.create_all(engine)
    Session = sessionmaker()
    Session.configure(bind=engine)
    session = Session()
    session.commit()
    return session


def createParser():
    parser = argparse.ArgumentParser()
    parser.add_argument('-p', '--path', default='test.xlsx')
    return parser


def add_data_to_json_file(data, exc_type, exc_value, exc_traceback):
    error = {"timestamp": str(data.ts),
             "url": data.url,
             "error": {"exception_type": str(exc_type),
                       "exception_value": str(exc_value),
                       "stack": str(traceback.format_stack())}}
    with open(settings.PATH_TO_DUMP_FILE, 'w+') as outfile:
        json.dump(error, outfile)


def data_from_exel(filename_exel, session):
    try:
        book = open_workbook(filename_exel, on_demand=True)
    except Exception:
        logger.info('File %s does not exist', filename_exel)
        return []
    monitoring_datas = []
    for name in book.sheet_names():
        logger.info('starting search data from bookname %s', name)
        sheet = book.sheet_by_name(name)
        for num in range(1, sheet.nrows):  # skip the header row
            monitoring_data = MonitoringData(url=sheet.row(num)[0].value,
                                             label=sheet.row(num)[1].value,
                                             fetch=sheet.row(num)[2].value)
            monitoring_datas.append(monitoring_data)
    return monitoring_datas


def update_fields(data, content_lenght, status_code):
    data.response_time = time.time()
    data.content_lenght = content_lenght
    data.status_code = status_code
    return data


def on_success(res, monitoring_data, monitoring, session):
    status_code = res.status_code
    if status_code == 200:
        content_lenght = len(res.content)
    else:
        content_lenght = None
    monitoring = update_fields(monitoring, content_lenght, status_code)
    is_eq_url = (Monitoring.url == monitoring_data.url)
    if session.query(exists().where(is_eq_url)).scalar():
        logger.info('data with this url %s already exists', monitoring_data.url)
        query_set = session.query(Monitoring)
        query_set_data = query_set.filter_by(url=monitoring_data.url)
        data_line = query_set_data.first()
        data_line = update_fields(data_line, content_lenght, status_code)
        session.add(data_line)
    else:
        session.add(monitoring)
        logger.info('write data to table %s', monitoring)
    session.commit()


def get_http_request(monitoring_datas, session):
    with requests.Session() as requests_session:
        for monitoring_data in monitoring_datas:
            if bool(monitoring_data.fetch):
                monitoring = Monitoring(url=monitoring_data.url,
                                        label=monitoring_data.label)
                try:
                    future = executor.submit(requests_session.get,
                                             monitoring_data.url,
                                             timeout=settings.TIMEOUT)
                    res = future.result()
                except Exception:
                    add_data_to_json_file(monitoring, *sys.exc_info())
                else:
                    on_success(res, monitoring_data, monitoring, session)


def main():
    parser = createParser()
    namespace = parser.parse_args()
    payload = {'filename_excel': namespace.path}
    session = create_table()
    monitoring_datas = data_from_exel(payload['filename_excel'], session)
    get_http_request(monitoring_datas, session)


if __name__ == '__main__':
    main()
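A short usage sketch for reading back what the script stored. It assumes the snippet above is saved as monitoring.py, so that its Monitoring model and settings module are importable:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from monitoring import Monitoring, settings  # names from the snippet above

engine = create_engine('sqlite:///' + settings.PATH_TO_DB_FILE)
session = sessionmaker(bind=engine)()

# Every URL whose last recorded check did not return HTTP 200.
for row in session.query(Monitoring).filter(Monitoring.status_code != 200):
    print(row.ts, row.url, row.label, row.status_code)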


swift_metric.py

Source: swift_metric.py Github


from threading import Thread
import datetime
import json
import socket

from abstract_metric import Metric
from metrics_parser import SwiftMetricsParse


class SwiftMetric(Metric):
    # Actor-framework metadata: which methods run synchronously,
    # asynchronously, by reference, or in parallel.
    _sync = {}
    _async = ['get_value', 'attach', 'detach', 'notify', 'start_consuming',
              'stop_consuming', 'init_consum', 'stop_actor']
    _ref = ['attach', 'detach']
    _parallel = []

    def __init__(self, exchange, metric_id, routing_key):
        Metric.__init__(self)
        self.queue = metric_id
        self.routing_key = routing_key
        self.name = metric_id
        self.exchange = exchange
        self.parser_instance = SwiftMetricsParse()
        self.logstash_server = (self.logstash_host, self.logstash_port)
        self.last_metrics = dict()
        self.th = None

    def notify(self, body):
        """Handle one message from the queue, e.g.:

        {"0.0.0.0:8080": {"AUTH_bd34c4073b65426894545b36f0d8dcce": 3}}
        """
        data = json.loads(body)
        # Ship the metrics off the actor thread so notify() returns quickly.
        Thread(target=self._send_data_to_logstash, args=(data,)).start()
        # try:
        #     for observer in self._observers[body_parsed.target]:
        #         observer.update(self.name, body_parsed)
        # except Exception:
        #     # print("fail", body_parsed)
        #     pass

    def get_value(self):
        return self.value

    def _send_data_to_logstash(self, data):
        try:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            for source_ip in data:
                for key, value in data[source_ip].items():
                    # Build each datagram from scratch so stale fields from
                    # the previous metric never leak into the next one.
                    monitoring_data = {
                        'metric_name': self.queue,
                        'source_ip': source_ip.replace('.', '-'),
                        'metric_target': key.replace('AUTH_', ''),
                    }
                    # If the metric was idle (or never seen), first emit a
                    # zero point one second in the past so the series ramps
                    # up from zero instead of interpolating from the last
                    # burst.
                    if key not in self.last_metrics or self.last_metrics[key]['value'] == 0:
                        monitoring_data['value'] = 0
                        date = datetime.datetime.now() - datetime.timedelta(seconds=1)
                        monitoring_data['@timestamp'] = str(date.isoformat())
                        message = (json.dumps(monitoring_data) + '\n').encode('utf-8')
                        sock.sendto(message, self.logstash_server)
                    # Then send the real sample without an explicit timestamp.
                    monitoring_data['value'] = value
                    monitoring_data.pop('@timestamp', None)
                    message = (json.dumps(monitoring_data) + '\n').encode('utf-8')
                    sock.sendto(message, self.logstash_server)
                    self.last_metrics[key] = monitoring_data
        except Exception:
            print("Error sending monitoring data to logstash")
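The delivery mechanism in _send_data_to_logstash is not Swift-specific: each metric becomes one newline-terminated JSON datagram pushed over UDP to a Logstash udp input. A standalone sketch of that pattern (the host and port are placeholders; match them to your Logstash configuration):

import json
import socket


def send_to_logstash(event, host='127.0.0.1', port=5959):
    """Send one event as a newline-terminated JSON datagram.

    UDP is fire-and-forget: no delivery guarantee, but no blocking either,
    which is why the class above ships metrics from a background thread.
    """
    message = (json.dumps(event) + '\n').encode('utf-8')
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(message, (host, port))
    finally:
        sock.close()


send_to_logstash({'metric_name': 'get_ops', 'value': 3})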


Blogs

Check out the latest blogs from LambdaTest on this topic:

Test Managers in Agile – Creating the Right Culture for Your SQA Team

I was once asked at a testing summit, “How do you manage a QA team using scrum?” After some consideration, I realized it would make a good article, so here I am. Understand that the idea behind developing software in a scrum environment is for development teams to self-organize.

A Step-By-Step Guide To Cypress API Testing

An API (Application Programming Interface) is a set of definitions and protocols for building and integrating applications. It's occasionally referred to as a contract between an information provider and an information user, establishing the content required from the consumer and the content needed by the producer.

Considering Agile Principles from a different angle

In addition to the four values, the Agile Manifesto contains twelve principles that are used as guides for all methodologies included under the Agile movement, such as XP, Scrum, and Kanban.

Different Ways To Style CSS Box Shadow Effects

Have you ever visited a website that only has plain text and images? Most probably, no. It’s because such websites do not exist now. But there was a time when websites only had plain text and images with almost no styling. For the longest time, websites did not focus on user experience. For instance, this is how eBay’s homepage looked in 1999.

Continuous delivery and continuous deployment offer testers opportunities for growth

Development practices are constantly changing and as testers, we need to embrace change. One of the changes that we can experience is the move from monthly or quarterly releases to continuous delivery or continuous deployment. This move to continuous delivery or deployment offers testers the chance to learn new skills.


