How to use _c method in Molotov

Best Python code snippet using molotov_python

default.py

Source: default.py (GitHub)


```python
#!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from typing import List, Optional, Union

import numpy as np

from habitat import get_config as get_task_config
from habitat.config import Config as CN

DEFAULT_CONFIG_DIR = "configs/"
CONFIG_FILE_SEPARATOR = ","
# -----------------------------------------------------------------------------
# EXPERIMENT CONFIG
# -----------------------------------------------------------------------------
_C = CN()
_C.BASE_TASK_CONFIG_PATH = "configs/tasks/pointnav.yaml"
_C.TASK_CONFIG = CN()  # task_config will be stored as a config node
_C.CMD_TRAILING_OPTS = []  # store command line options as list of strings
_C.TRAINER_NAME = "ppo"
_C.ENV_NAME = "NavRLEnv"
_C.SIMULATOR_GPU_ID = 0
_C.TORCH_GPU_ID = 0
_C.VIDEO_OPTION = ["disk", "tensorboard"]
_C.TENSORBOARD_DIR = "tb"
_C.VIDEO_DIR = "video_dir"
_C.TEST_EPISODE_COUNT = 2
_C.EVAL_CKPT_PATH_DIR = "data/checkpoints"  # path to ckpt or path to ckpts dir
_C.NUM_PROCESSES = 16
_C.SENSORS = ["RGB_SENSOR", "DEPTH_SENSOR"]
_C.CHECKPOINT_FOLDER = "data/checkpoints"
_C.NUM_UPDATES = 10000
_C.LOG_INTERVAL = 10
_C.LOG_FILE = "train.log"
_C.CHECKPOINT_INTERVAL = 50
_C.VIS_INTERVAL = 200
# -----------------------------------------------------------------------------
# EVAL CONFIG
# -----------------------------------------------------------------------------
_C.EVAL = CN()
# The split to evaluate on
_C.EVAL.SPLIT = "val"
_C.EVAL.USE_CKPT_CONFIG = True
# -----------------------------------------------------------------------------
# REINFORCEMENT LEARNING (RL) ENVIRONMENT CONFIG
# -----------------------------------------------------------------------------
_C.RL = CN()
_C.RL.REWARD_MEASURE = "distance_to_goal"
_C.RL.SUCCESS_MEASURE = "spl"
_C.RL.SUCCESS_REWARD = 10.0
_C.RL.SLACK_REWARD = -0.01
# -----------------------------------------------------------------------------
# PROXIMAL POLICY OPTIMIZATION (PPO)
# -----------------------------------------------------------------------------
_C.RL.PPO = CN()
_C.RL.PPO.clip_param = 0.2
_C.RL.PPO.ppo_epoch = 4
_C.RL.PPO.num_mini_batch = 16
_C.RL.PPO.value_loss_coef = 0.5
_C.RL.PPO.entropy_coef = 0.01
_C.RL.PPO.lr = 7e-4
_C.RL.PPO.eps = 1e-5
_C.RL.PPO.max_grad_norm = 0.5
_C.RL.PPO.num_steps = 5
_C.RL.PPO.use_gae = True
_C.RL.PPO.use_linear_lr_decay = False
_C.RL.PPO.use_linear_clip_decay = False
_C.RL.PPO.gamma = 0.99
_C.RL.PPO.tau = 0.95
_C.RL.PPO.reward_window_size = 50
_C.RL.PPO.use_normalized_advantage = True
_C.RL.PPO.hidden_size = 512
# -----------------------------------------------------------------------------
# DECENTRALIZED DISTRIBUTED PROXIMAL POLICY OPTIMIZATION (DD-PPO)
# -----------------------------------------------------------------------------
_C.RL.DDPPO = CN()
_C.RL.DDPPO.sync_frac = 0.6
_C.RL.DDPPO.distrib_backend = "GLOO"
_C.RL.DDPPO.rnn_type = "LSTM"
_C.RL.DDPPO.num_recurrent_layers = 2
_C.RL.DDPPO.backbone = "resnet50"
_C.RL.DDPPO.pretrained_weights = "data/ddppo-models/gibson-2plus-resnet50.pth"
# Loads pretrained weights
_C.RL.DDPPO.pretrained = False
# Loads just the visual encoder backbone weights
_C.RL.DDPPO.pretrained_encoder = False
# Whether or not the visual encoder backbone will be trained
_C.RL.DDPPO.train_encoder = True
# Whether or not to reset the critic linear layer
_C.RL.DDPPO.reset_critic = True
# -----------------------------------------------------------------------------
# ORBSLAM2 BASELINE
# -----------------------------------------------------------------------------
_C.ORBSLAM2 = CN()
_C.ORBSLAM2.SLAM_VOCAB_PATH = "habitat_baselines/slambased/data/ORBvoc.txt"
_C.ORBSLAM2.SLAM_SETTINGS_PATH = (
    "habitat_baselines/slambased/data/mp3d3_small1k.yaml"
)
_C.ORBSLAM2.MAP_CELL_SIZE = 0.1
_C.ORBSLAM2.MAP_SIZE = 40
_C.ORBSLAM2.CAMERA_HEIGHT = get_task_config().SIMULATOR.DEPTH_SENSOR.POSITION[
    1
]
_C.ORBSLAM2.BETA = 100
_C.ORBSLAM2.H_OBSTACLE_MIN = 0.3 * _C.ORBSLAM2.CAMERA_HEIGHT
_C.ORBSLAM2.H_OBSTACLE_MAX = 1.0 * _C.ORBSLAM2.CAMERA_HEIGHT
_C.ORBSLAM2.D_OBSTACLE_MIN = 0.1
_C.ORBSLAM2.D_OBSTACLE_MAX = 4.0
_C.ORBSLAM2.PREPROCESS_MAP = True
_C.ORBSLAM2.MIN_PTS_IN_OBSTACLE = (
    get_task_config().SIMULATOR.DEPTH_SENSOR.WIDTH / 2.0
)
_C.ORBSLAM2.ANGLE_TH = float(np.deg2rad(15))
_C.ORBSLAM2.DIST_REACHED_TH = 0.15
_C.ORBSLAM2.NEXT_WAYPOINT_TH = 0.5
_C.ORBSLAM2.NUM_ACTIONS = 3
_C.ORBSLAM2.DIST_TO_STOP = 0.05
_C.ORBSLAM2.PLANNER_MAX_STEPS = 500
_C.ORBSLAM2.DEPTH_DENORM = get_task_config().SIMULATOR.DEPTH_SENSOR.MAX_DEPTH
_C.attention = CN()
_C.attention.n_head = 4
_C.attention.d_model = 512 + 32 + 1
_C.attention.d_k = 512 + 32 + 1
_C.attention.d_v = 512 + 32 + 1
_C.attention.dropout = 0.1
_C.attention.lsh = CN()
_C.attention.lsh.bucket_size = 10
_C.attention.lsh.n_hashes = 4
_C.attention.lsh.add_local_attn_hash = False
_C.attention.lsh.causal = True
_C.attention.lsh.attn_chunks = 8
_C.attention.lsh.random_rotations_per_head = False
_C.attention.lsh.attend_across_buckets = True
_C.attention.lsh.allow_duplicate_attention = True
_C.attention.lsh.num_mem_kv = 0
_C.attention.lsh.one_value_head = False
_C.attention.lsh.full_attn_thres = 'none'
_C.attention.lsh.return_attn = False
_C.attention.lsh.post_attn_dropout = 0.1
_C.attention.lsh.dropout = 0.1
_C.attention.lsh.use_full_attn = False
_C.memory = CN()
_C.memory.embedding_size = 512 + 32
_C.memory.memory_size = 100
_C.memory.pose_dim = 5


def get_config(
    config_paths: Optional[Union[List[str], str]] = None,
    opts: Optional[list] = None,
) -> CN:
    r"""Create a unified config with default values overwritten by values from
    `config_paths` and overwritten by options from `opts`.
    Args:
        config_paths: List of config paths or string that contains comma
            separated list of config paths.
        opts: Config options (keys, values) in a list (e.g., passed from the
            command line) into the config. For example, `opts = ['FOO.BAR',
            0.5]`. Argument can be used for parameter sweeping or quick tests.
    """
    config = _C.clone()
    if config_paths:
        if isinstance(config_paths, str):
            if CONFIG_FILE_SEPARATOR in config_paths:
                config_paths = config_paths.split(CONFIG_FILE_SEPARATOR)
            else:
                config_paths = [config_paths]
        for config_path in config_paths:
            config.merge_from_file(config_path)
    config.TASK_CONFIG = get_task_config(config.BASE_TASK_CONFIG_PATH)
    if opts:
        config.CMD_TRAILING_OPTS = opts
        config.merge_from_list(opts)
    config.freeze()
    return config
```


defaults.py

Source: defaults.py (GitHub)


```python
from yacs.config import CfgNode as CN

_C = CN()
_C.MODEL = CN()
_C.MODEL.META_ARCHITECTURE = 'SSDDetector'
_C.MODEL.DEVICE = "cuda"
# match default boxes to any ground truth with jaccard overlap higher than a threshold (0.5)
_C.MODEL.THRESHOLD = 0.5
_C.MODEL.NUM_CLASSES = 21
# Hard negative mining
_C.MODEL.NEG_POS_RATIO = 3  # negative : positive = 3:1
_C.MODEL.CENTER_VARIANCE = 0.1
_C.MODEL.SIZE_VARIANCE = 0.2
# ---------------------------------------------------------------------------- #
# Backbone
# ---------------------------------------------------------------------------- #
_C.MODEL.BACKBONE = CN()
_C.MODEL.BACKBONE.NAME = 'vgg'
_C.MODEL.BACKBONE.OUT_CHANNELS = (512, 1024, 512, 256, 256, 256)
_C.MODEL.BACKBONE.PRETRAINED = True
_C.MODEL.BACKBONE.ISFREEZE = False
_C.MODEL.BACKBONE.RFBTYPE = 'NONE'  # NONE or Advanced or Basic or Original
# for hrnet
_C.MODEL.BACKBONE.C = 32
_C.MODEL.BACKBONE.BN_MOMENTUM = 0.1
# ---------------------------------------------------------------------------- #
# Neck
# ---------------------------------------------------------------------------- #
_C.MODEL.NECK = CN()
_C.MODEL.NECK.NAME = 'NONE'
# -----------------------------------------------------------------------------
# PRIORS
# -----------------------------------------------------------------------------
_C.MODEL.PRIORS = CN()
_C.MODEL.PRIORS.FEATURE_MAPS = [38, 19, 10, 5, 3, 1]  # tune to match the network's feature map sizes
_C.MODEL.PRIORS.STRIDES = [8, 16, 32, 64, 100, 300]
# MIN_SIZES and MAX_SIZES are computed from parameters such as the ratio, but
# are derived from the dataset's scale distribution; they determine the
# prior-box edge lengths on each feature map
_C.MODEL.PRIORS.MIN_SIZES = [30, 60, 111, 162, 213, 264]  # these parameters are for the voc0712 dataset
_C.MODEL.PRIORS.MAX_SIZES = [60, 111, 162, 213, 264, 315]
_C.MODEL.PRIORS.ASPECT_RATIOS = [[2], [2, 3], [2, 3], [2, 3], [2], [2]]
# With 1 aspect ratio, every location has 4 boxes; with 2 ratios, 6 boxes.
# #boxes = 2 + #ratio * 2
_C.MODEL.PRIORS.BOXES_PER_LOCATION = [4, 6, 6, 6, 4, 4]  # number of boxes per feature map location
_C.MODEL.PRIORS.CLIP = True
# -----------------------------------------------------------------------------
# Box Head
# -----------------------------------------------------------------------------
_C.MODEL.BOX_HEAD = CN()
_C.MODEL.BOX_HEAD.NAME = 'SSDBoxHead'
_C.MODEL.BOX_HEAD.PREDICTOR = 'SSDBoxPredictor'
# -----------------------------------------------------------------------------
# INPUT
# -----------------------------------------------------------------------------
_C.INPUT = CN()
# Image size
_C.INPUT.IMAGE_SIZE = 300
# Values to be used for image normalization, RGB layout
_C.INPUT.PIXEL_MEAN = [123, 117, 104]
# -----------------------------------------------------------------------------
# Dataset
# -----------------------------------------------------------------------------
_C.DATASETS = CN()
# List of the dataset names for training, as present in paths_catalog.py
_C.DATASETS.TRAIN = ()
# List of the dataset names for testing, as present in paths_catalog.py
_C.DATASETS.TEST = ()
# -----------------------------------------------------------------------------
# DataLoader
# -----------------------------------------------------------------------------
_C.DATA_LOADER = CN()
# Number of data loading threads
_C.DATA_LOADER.NUM_WORKERS = 8
_C.DATA_LOADER.PIN_MEMORY = True
# ---------------------------------------------------------------------------- #
# Solver
# ---------------------------------------------------------------------------- #
_C.SOLVER = CN()
# train configs
_C.SOLVER.MAX_ITER = 120000
_C.SOLVER.LR_STEPS = [80000, 100000]
_C.SOLVER.GAMMA = 0.1
_C.SOLVER.BATCH_SIZE = 32
_C.SOLVER.LR = 1e-3  # original 1e-3
_C.SOLVER.MOMENTUM = 0.9
_C.SOLVER.WEIGHT_DECAY = 5e-4
_C.SOLVER.WARMUP_FACTOR = 1.0 / 3  # original 1.0 / 3
_C.SOLVER.WARMUP_ITERS = 500  # original 500
# ---------------------------------------------------------------------------- #
# Specific test options
# ---------------------------------------------------------------------------- #
_C.TEST = CN()
_C.TEST.NMS_THRESHOLD = 0.45
_C.TEST.CONFIDENCE_THRESHOLD = 0.01
_C.TEST.MAX_PER_CLASS = -1
_C.TEST.MAX_PER_IMAGE = 100
_C.TEST.BATCH_SIZE = 10
```
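The PRIORS block above fully determines the SSD anchor layout. The in-code comment `#boxes = 2 + #ratio * 2` yields `BOXES_PER_LOCATION` from `ASPECT_RATIOS`, and summing those box counts over every cell of each square feature map gives SSD300's well-known total of 8732 default boxes. A quick sanity check in plain Python, with the values copied from the snippet:

```python
# Values copied from the defaults above (SSD300 / VOC settings)
FEATURE_MAPS = [38, 19, 10, 5, 3, 1]
ASPECT_RATIOS = [[2], [2, 3], [2, 3], [2, 3], [2], [2]]

# Rule from the snippet's comment: #boxes = 2 + #ratio * 2
boxes_per_location = [2 + len(r) * 2 for r in ASPECT_RATIOS]
print(boxes_per_location)  # [4, 6, 6, 6, 4, 4] -- matches BOXES_PER_LOCATION

# Total priors: every cell of each (square) feature map emits its box count
total = sum(f * f * b for f, b in zip(FEATURE_MAPS, boxes_per_location))
print(total)  # 8732, the canonical SSD300 anchor count
```

If you change `FEATURE_MAPS` to match a different input resolution, recomputing this sum is an easy way to confirm that `BOXES_PER_LOCATION` and the anchor-generation code still agree.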


Blogs

Check out the latest blogs from LambdaTest on this topic:

Why Agile Teams Have to Understand How to Analyze and Make Adjustments

How do we acquire knowledge? This is one of the seemingly basic but critical questions you and your team members must ask and consider. We are experts; therefore, we understand why we study and what we should learn. However, many of us do not give enough thought to how we learn.

How to increase and maintain team motivation

The best agile teams are built from people who work together as one unit, where each team member has both the technical and the personal skills to allow the team to become self-organized, cross-functional, and self-motivated. These are all big words that I hear in almost every agile project. Still, the criteria to make a fantastic agile team are practically impossible to achieve without one major factor: motivation towards a common goal.

Dec’22 Updates: The All-New LT Browser 2.0, XCUI App Automation with HyperExecute, And More!

Greetings folks! With the new year finally upon us, we’re excited to announce a collection of brand-new product updates. At LambdaTest, we strive to provide you with a comprehensive test orchestration and execution platform to ensure the ultimate web and mobile experience.

How To Get Started With Cypress Debugging

One of the most important skills of a software developer is not just writing code fast; it is the ability to find the cause of errors and bugs whenever you encounter one, and to fix them quickly.

How to Recognize and Hire Top QA / DevOps Engineers

With the rising demand for new services and technologies in the IT, manufacturing, healthcare, and financial sectors, QA/DevOps engineering has become a critical function in software companies. Below is a list of characteristics to look for when interviewing a potential candidate.

Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub, right from setting up the prerequisites for your first automation test to following best practices and diving deeper into advanced test scenarios. The LambdaTest Learning Hub compiles step-by-step guides to help you become proficient with different test automation frameworks, such as Selenium, Cypress, and TestNG.


YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run Molotov automation tests on LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

Try LambdaTest Now!

Get 100 minutes of automation testing FREE!

Next-Gen App & Browser Testing Cloud
