How to use the extract_file_paths method in gabbi

Best Python code snippets using gabbi_python
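All of the snippets on this page call extract_file_paths to collect dataset files from a directory, optionally filtered by filename. The helper's definition is not shown here, so the following is only a minimal sketch of what the call sites imply; the real signature and behavior may differ:

import os

def extract_file_paths(directory, filename_filter=None):
    # Hedged stand-in for the project's helper, inferred from the call
    # sites below: walk `directory` and return the paths of files whose
    # name contains `filename_filter` (every file when no filter is given).
    file_paths = []
    if not os.path.isdir(directory):
        return file_paths
    for root, _, files in os.walk(directory):
        for name in files:
            if filename_filter is None or filename_filter in name:
                file_paths.append(os.path.join(root, name))
    return file_paths

# Hypothetical layout: collect every file under one subject's T1 folder.
t1_paths = extract_file_paths("/data/iSEG/Training/1/T1")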

csv_pipeline2.py

Source: csv_pipeline2.py on GitHub


...
        source_paths_t2 = list()
        source_paths_t1 = list()
        target_paths = list()
        for subject in sorted(os.listdir(os.path.join(self._source_dir))):
            source_paths_t1.append(extract_file_paths(os.path.join(self._source_dir, subject, "T1")))
            source_paths_t2.append(extract_file_paths(os.path.join(self._source_dir, subject, "T2")))
            target_paths.append(extract_file_paths(os.path.join(self._source_dir, subject, "Labels")))
        subjects = np.arange(1, 11)
        source_paths_t1 = natural_sort([item for sublist in source_paths_t1 for item in sublist])
        source_paths_t2 = natural_sort([item for sublist in source_paths_t2 for item in sublist])
        target_paths = natural_sort([item for sublist in target_paths for item in sublist])
        with open(os.path.join(self._output_dir, output_filename), mode='a+') as output_file:
            writer = csv.writer(output_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            writer.writerow(
                ["T1", "T2", "labels", "subject", "T1_min", "T1_max", "T1_mean", "T1_std", "T2_min", "T2_max",
                 "T2_mean", "T2_std"])
            for source_path, source_path_t2, target_path, subject in zip(source_paths_t1, source_paths_t2,
                                                                         target_paths, subjects):
                self.LOGGER.info("Processing file {}".format(source_path))
                t1 = ToNumpyArray()(source_path)
                t2 = ToNumpyArray()(source_path_t2)
                csv_data = np.vstack((source_path, source_path_t2, target_path, subject, str(t1.min()), str(t1.max()),
                                      str(t1.mean()), str(t1.std()), str(t2.min()), str(t2.max()), str(t2.mean()),
                                      str(t2.std())))
                for item in range(csv_data.shape[1]):
                    writer.writerow(
                        [csv_data[0][item], csv_data[1][item], csv_data[2][item], csv_data[3][item], csv_data[4][item],
                         csv_data[5][item], csv_data[6][item], csv_data[7][item], csv_data[8][item], csv_data[9][item],
                         csv_data[10][item], csv_data[11][item]])
        # Note: the original called output_file.close() here, which is
        # redundant inside a `with` block and has been dropped.


class ToCSVMRBrainSPipeline(object):
    LOGGER = logging.getLogger("MRBrainSPipeline")

    def __init__(self, root_dir: str, output_dir: str):
        self._source_dir = root_dir
        self._output_dir = output_dir
        self._transforms = Compose([ToNumpyArray()])

    def run(self, output_filename: str):
        source_paths_t1_1mm = list()
        source_paths_t2 = list()
        source_paths_t1 = list()
        source_paths_t1_ir = list()
        target_paths = list()
        target_paths_training = list()
        for subject in sorted(os.listdir(os.path.join(self._source_dir))):
            source_paths_t2.append(extract_file_paths(os.path.join(self._source_dir, subject, "T2_FLAIR")))
            source_paths_t1_ir.append(extract_file_paths(os.path.join(self._source_dir, subject, "T1_IR")))
            source_paths_t1_1mm.append(extract_file_paths(os.path.join(self._source_dir, subject, "T1_1mm")))
            source_paths_t1.append(extract_file_paths(os.path.join(self._source_dir, subject, "T1")))
            target_paths.append(extract_file_paths(os.path.join(self._source_dir, subject, "LabelsForTesting")))
            target_paths_training.append(
                extract_file_paths(os.path.join(self._source_dir, subject, "LabelsForTraining")))
        subjects = np.arange(1, 6)
        with open(os.path.join(self._output_dir, output_filename), mode='a+') as output_file:
            writer = csv.writer(output_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            writer.writerow(
                ["T1_1mm", "T1", "T1_IR", "T2_FLAIR", "LabelsForTesting", "LabelsForTraining", "subject", "T1_min",
                 "T1_max", "T1_mean", "T1_std", "T2_min", "T2_max", "T2_mean", "T2_std"])
            for source_path_t2, source_path_t1_ir, source_path_t1_1mm, source_path_t1, target_path, \
                    target_path_training, subject in zip(
                    source_paths_t2, source_paths_t1_ir, source_paths_t1_1mm, source_paths_t1, target_paths,
                    target_paths_training, subjects):
                self.LOGGER.info("Processing file {}".format(source_path_t1))
                t1 = ToNumpyArray()(source_path_t1[0])
                t2 = ToNumpyArray()(source_path_t2[0])
                csv_data = np.vstack((
                    source_path_t1_1mm, source_path_t1, source_path_t1_ir, source_path_t2, target_path,
                    target_path_training, subject, str(t1.min()), str(t1.max()), str(t1.mean()), str(t1.std()),
                    str(t2.min()), str(t2.max()), str(t2.mean()), str(t2.std())))
                for item in range(csv_data.shape[1]):
                    writer.writerow(
                        [csv_data[0][item], csv_data[1][item], csv_data[2][item], csv_data[3][item], csv_data[4][item],
                         csv_data[5][item], csv_data[6][item], csv_data[7][item], csv_data[8][item], csv_data[9][item],
                         csv_data[10][item], csv_data[11][item], csv_data[12][item], csv_data[13][item],
                         csv_data[14][item]])


class ToCSVABIDEPipeline(object):
    LOGGER = logging.getLogger("ABIDEPipeline")

    def __init__(self, root_dir: str, output_dir: str):
        self._source_dir = root_dir
        self._output_dir = output_dir
        self._transforms = Compose([ToNumpyArray()])

    def run(self, output_filename: str):
        source_paths = list()
        target_paths = list()
        subjects = list()
        sites = list()
        for dir in sorted(os.listdir(self._source_dir)):
            source_paths_ = extract_file_paths(os.path.join(self._source_dir, dir, "mri", "T1"), "T1.nii.gz")
            target_paths_ = extract_file_paths(os.path.join(self._source_dir, dir, "mri", "Labels"), "Labels.nii.gz")
            subject_ = dir
            source_paths.append(source_paths_)
            target_paths.append(target_paths_)
            if len(source_paths_) != 0:  # fixed: was `is not 0`, an identity check rather than a value check
                match = re.search('(?P<site>.*)_(?P<patient_id>[0-9]*)', str(dir))
                site_ = match.group("site")
                sites.append(site_)
                subjects.append(subject_)
        source_paths = list(filter(None, source_paths))
        target_paths = list(filter(None, target_paths))
        with open(os.path.join(self._output_dir, output_filename), mode='a+') as output_file:
            writer = csv.writer(output_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            writer.writerow(["T1", "labels", "subject", "site", "min", "max", "mean", "std"])
            for source_path, target_path, subject, site in zip(source_paths, target_paths, subjects, sites):
...
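Because extract_file_paths returns a list per directory, the pipelines above accumulate a list of lists and flatten it before sorting. A self-contained illustration of that idiom, with a simplified stand-in for the project's natural_sort helper:

import re

def natural_sort(paths):
    # Simplified stand-in for the project's natural_sort: compare numeric
    # runs by value so "subject-2" sorts before "subject-10".
    def key(s):
        return [int(t) if t.isdigit() else t.lower() for t in re.split(r'(\d+)', s)]
    return sorted(paths, key=key)

# extract_file_paths returns one list per subject; flatten, then sort.
nested = [["sub-10/T1.nii"], ["sub-2/T1.nii"]]
flat = natural_sort([item for sublist in nested for item in sublist])
# flat == ['sub-2/T1.nii', 'sub-10/T1.nii']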


image_slicer_test.py

Source: image_slicer_test.py on GitHub


...
    PATH = "/mnt/md0/Data/Preprocessed/iSEG/Training/Patches/Aligned/Full/T1/10"
    # TARGET_PATH = "/mnt/md0/Data/Preprocessed/iSEG/Patches/Aligned/label/6"
    FULL_IMAGE_PATH = "/mnt/md0/Data/Preprocessed/iSEG/Training/Aligned/T1/subject-10-T1.nii"

    def setUp(self) -> None:
        paths = extract_file_paths(self.PATH)
        self._dataset = iSEGSegmentationFactory.create(natural_sort(paths), None, modalities=Modality.T1,
                                                       dataset_id=0)
        self._reconstructor = ImageReconstructor([128, 160, 128], [32, 32, 32], [8, 8, 8])
        transforms = Compose([ToNumpyArray(), PadToPatchShape([1, 32, 32, 32], [1, 8, 8, 8])])
        self._full_image = transforms(self.FULL_IMAGE_PATH)

    def test_should_output_reconstructed_image(self):
        all_patches = []
        all_labels = []
        for current_batch, input in enumerate(self._dataset):
            all_patches.append(input.x)
            all_labels.append(input.y)
        img = self._reconstructor.reconstruct_from_patches_3d(all_patches)
        plt.imshow(img[64, :, :], cmap="gray")
        plt.show()
        np.testing.assert_array_almost_equal(img, self._full_image.squeeze(0), 6)


class ImageReconstructorMRBrainSTest(unittest.TestCase):
    PATH = "/mnt/md0/Data/Preprocessed_4/MRBrainS/DataNii/TrainingData/1/T1"
    # TARGET_PATH = "/mnt/md0/Data/Preprocessed/iSEG/Patches/Aligned/label/6"
    FULL_IMAGE_PATH = "/mnt/md0/Data/Preprocessed/MRBrainS/DataNii/TrainingData/1/T1/T1.nii.gz"

    def setUp(self) -> None:
        paths = extract_file_paths(self.PATH)
        self._dataset = MRBrainSSegmentationFactory.create(natural_sort(paths), None, modalities=Modality.T1,
                                                           dataset_id=0)
        self._reconstructor = ImageReconstructor([256, 256, 192], [1, 32, 32, 32], [1, 8, 8, 8])
        transforms = Compose([ToNumpyArray(), PadToPatchShape([1, 32, 32, 32], [1, 8, 8, 8])])
        self._full_image = transforms(self.FULL_IMAGE_PATH)

    def test_should_output_reconstructed_image(self):
        all_patches = []
        all_labels = []
        for current_batch, input in enumerate(self._dataset):
            all_patches.append(input.x)
            all_labels.append(input.y)
        img = self._reconstructor.reconstruct_from_patches_3d(all_patches)
        plt.imshow(img[64, :, :], cmap="gray")
        plt.show()
        np.testing.assert_array_almost_equal(img, self._full_image.squeeze(0), 6)


class ImageReconstructorABIDETest(unittest.TestCase):
    PATH = "/home/pierre-luc-delisle/ABIDE/5.1/Stanford_0051160/mri/patches/image"
    # TARGET_PATH = "/home/pierre-luc-delisle/ABIDE/5.1/Stanford_0051160/mri/patches/labels"
    FULL_IMAGE_PATH = "/home/pierre-luc-delisle/ABIDE/5.1/Stanford_0051160/mri/real_brainmask.nii.gz"

    def setUp(self) -> None:
        paths = extract_file_paths(self.PATH)
        self._dataset = ABIDESegmentationFactory.create(natural_sort(paths), None, modalities=Modality.T1,
                                                        dataset_id=0)
        self._reconstructor = ImageReconstructor([224, 224, 192], [1, 32, 32, 32], [1, 8, 8, 8])
        transforms = Compose([ToNumpyArray(), PadToPatchShape([1, 32, 32, 32], [1, 8, 8, 8])])
        self._full_image = transforms(self.FULL_IMAGE_PATH)

    def test_should_output_reconstructed_image(self):
        all_patches = []
        all_labels = []
        for current_batch, input in enumerate(self._dataset):
            all_patches.append(input.x)
            all_labels.append(input.y)
        img = self._reconstructor.reconstruct_from_patches_3d(all_patches)
        plt.imshow(img[112, :, :], cmap="gray")
        plt.show()
...
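All three fixtures follow the same shape: list the patch files with extract_file_paths, natural-sort them, and hand the sorted list to a project-specific dataset factory before reconstructing the full volume. Note that plt.show() blocks under interactive matplotlib backends, so the final assertion only runs once the plot window is closed. A stripped-down skeleton of the setUp pattern, using the stand-in helpers sketched earlier and a hypothetical path:

import unittest

class PatchDiscoveryTest(unittest.TestCase):
    # Hypothetical directory; the real tests point at preprocessed patch folders.
    PATH = "/tmp/patches/T1"

    def setUp(self) -> None:
        # Same first two steps as the fixtures above; the factory call
        # (e.g. iSEGSegmentationFactory.create) is project-specific and omitted.
        self.paths = natural_sort(extract_file_paths(self.PATH))

    def test_discovers_patch_files(self):
        # Reconstruction needs at least one patch file to work with.
        self.assertGreater(len(self.paths), 0)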


json_to_csv.py

Source: json_to_csv.py on GitHub


...
class processing:
    def __init__(self):
        pass

    def extract_file_paths(self):
        file_path = fp
        self.snomed_json_loc = fp.run("snomed_json")
        self.data_loc = fp.run("initial_data")

    def create_dataframe(self):
        dataframe = pd.read_json(self.snomed_json_loc)
        self.df = dataframe

    def column_to_delete(self):  # fixed: was misspelled `cloumn_to_delete`
        self.to_remove = "system"

    def delete_idx(self):
        self.df = self.df.drop(self.to_remove, axis=1)

    def converter(self):
        self.df.to_csv(self.data_loc, index=None)


class preprocess:
    def extract_file_paths(self):
        file_path = fp
        self.file_name = file_path.run("data")

    def filechecker(self):
        check = path.exists(self.file_name)
        return check


class postprocess:
    def __init__(self):
        pass

    def runall_pre(self):
        self.pre_process = preprocess()
        self.pre_process.extract_file_paths()
        self.file_checker = self.pre_process.filechecker()
        return self.file_checker

    def runall_processing(self):
        process = processing()
        process.extract_file_paths()
        process.create_dataframe()
        process.column_to_delete()
        process.delete_idx()
        process.converter()


def runall():
    post_process = postprocess()
...
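Note that extract_file_paths here is an unrelated instance method that resolves configured file locations through an fp helper (imported in the elided header), not the directory-walking function used in the earlier snippets. A hedged sketch of how the classes above chain together, assuming fp.run(key) maps configuration keys to paths:

# Hypothetical driver for the pipeline above.
post_process = postprocess()
if post_process.runall_pre():         # resolve the data path and check it exists
    post_process.runall_processing()  # JSON -> DataFrame -> drop "system" -> CSV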



