Python PIL.ImageEnhance.Contrast() Examples
The following are 30 code examples of PIL.ImageEnhance.Contrast(). You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. You may also want to check out all available functions/classes of the module PIL.ImageEnhance, or try the search function.
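Before the examples, here is a minimal sketch of the core API they all build on: ImageEnhance.Contrast(image) returns an enhancer object whose enhance(factor) method produces a new image, where a factor of 0.0 yields a solid grey image, 1.0 returns the original, and values above 1.0 increase contrast. The file names below are placeholders, not from any of the projects listed here.

from PIL import Image, ImageEnhance

# Open an image (placeholder path), build a Contrast enhancer and apply factors.
img = Image.open('example.jpg')          # hypothetical input file
enhancer = ImageEnhance.Contrast(img)
low = enhancer.enhance(0.5)              # washed out, lower contrast
high = enhancer.enhance(2.0)             # punchier, higher contrast
high.save('example_contrast.jpg')        # hypothetical output file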
Example #1
Source File: functional.py From Global-Second-order-Pooling-Convolutional-Networks with MIT License | 10 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
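This helper appears to be vendored from torchvision's functional transforms, and it recurs nearly verbatim in many of the examples below. A hedged usage sketch is therefore to call the equivalent function from torchvision itself (assuming torchvision is installed; the file name is a placeholder), which behaves the same as the copy above:

from PIL import Image
from torchvision.transforms import functional as F

img = Image.open('photo.jpg')             # hypothetical input file
boosted = F.adjust_contrast(img, 1.5)     # more contrast
faded = F.adjust_contrast(img, 0.5)       # less contrast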
Example #2
Source File: functional.py From Facial-Expression-Recognition.Pytorch with MIT License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #3
Source File: opencv_functional.py From deep-smoke-machine with BSD 3-Clause "New" or "Revised" License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an image.

    Args:
        img (numpy ndarray): numpy ndarray to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        numpy ndarray: Contrast adjusted image.
    """
    # Much faster to use the LUT construction than anything else I've tried;
    # it's because you have to change dtypes multiple times otherwise.
    if not _is_numpy_image(img):
        raise TypeError('img should be numpy Image. Got {}'.format(type(img)))
    table = np.array([(i - 74) * contrast_factor + 74 for i in range(0, 256)]).clip(0, 255).astype('uint8')
    # enhancer = ImageEnhance.Contrast(img)
    # img = enhancer.enhance(contrast_factor)
    if img.shape[2] == 1:
        return cv2.LUT(img, table)[:, :, np.newaxis]
    else:
        return cv2.LUT(img, table)
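The lookup table remaps every 8-bit intensity i to (i - 74) * contrast_factor + 74, i.e. it scales values about a fixed pivot of 74 rather than about the image's own mean grey level the way PIL's Contrast enhancer does, and cv2.LUT then applies that mapping in a single vectorised pass. A small self-contained sketch of the same idea, using a synthetic array rather than a real image:

import cv2
import numpy as np

contrast_factor = 1.5
pivot = 74  # fixed grey pivot used by the example above
table = np.clip((np.arange(256) - pivot) * contrast_factor + pivot, 0, 255).astype('uint8')

img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)  # synthetic BGR image
out = cv2.LUT(img, table)  # per-pixel remap via the lookup table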
Example #4
Source File: functional.py From Deep-Exemplar-based-Colorization with MIT License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #5
Source File: transforms.py From ACAN with MIT License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #6
Source File: image_functions.py From MAX-Framework with Apache License 2.0 | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #7
Source File: deepfry.py From FlameCogs with MIT License | 6 votes |
def _fry(img):
    e = ImageEnhance.Sharpness(img)
    img = e.enhance(100)
    e = ImageEnhance.Contrast(img)
    img = e.enhance(100)
    e = ImageEnhance.Brightness(img)
    img = e.enhance(.27)
    r, b, g = img.split()
    e = ImageEnhance.Brightness(r)
    r = e.enhance(4)
    e = ImageEnhance.Brightness(g)
    g = e.enhance(1.75)
    e = ImageEnhance.Brightness(b)
    b = e.enhance(.6)
    img = Image.merge('RGB', (r, g, b))
    e = ImageEnhance.Brightness(img)
    img = e.enhance(1.5)
    temp = BytesIO()
    temp.name = 'deepfried.png'
    img.save(temp)
    temp.seek(0)
    return temp
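The helper crushes sharpness, contrast and brightness, rebalances the colour bands, and returns an in-memory PNG. A hedged usage sketch, assuming _fry above is in scope; the input path is a placeholder, and the image must be in RGB mode so that split() yields exactly three bands:

from PIL import Image

img = Image.open('avatar.png').convert('RGB')   # hypothetical input file
fried = _fry(img)                               # BytesIO holding the deep-fried PNG
with open('deepfried.png', 'wb') as fh:
    fh.write(fried.read())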
Example #8
Source File: multimedia.py From chepy with GNU General Public License v3.0 | 6 votes |
def image_contrast(self, factor: int, extension: str = "png"):
    """Change image contrast

    Args:
        factor (int): Factor to increase the contrast by
        extension (str, optional): File extension of loaded image. Defaults to "png"

    Returns:
        Chepy: The Chepy object.
    """
    image = Image.open(self._load_as_file())
    image = self._force_rgb(image)
    fh = io.BytesIO()
    enhanced = ImageEnhance.Contrast(image).enhance(factor)
    enhanced.save(fh, extension)
    self.state = fh.getvalue()
    return self
Example #9
Source File: generator.py From VerifAI with BSD 3-Clause "New" or "Revised" License | 6 votes |
def modifyImageBscc(imageData, brightness, sharpness, contrast, color):
    """Update with brightness, sharpness, contrast and color."""
    brightnessMod = ImageEnhance.Brightness(imageData)
    imageData = brightnessMod.enhance(brightness)

    sharpnessMod = ImageEnhance.Sharpness(imageData)
    imageData = sharpnessMod.enhance(sharpness)

    contrastMod = ImageEnhance.Contrast(imageData)
    imageData = contrastMod.enhance(contrast)

    colorMod = ImageEnhance.Color(imageData)
    imageData = colorMod.enhance(color)
    return imageData
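The function simply chains all four PIL enhancers in sequence, so passing 1.0 for every factor leaves the image unchanged. A hedged usage sketch, assuming modifyImageBscc above is in scope; the file names are placeholders:

from PIL import Image

frame = Image.open('frame_0001.png')   # hypothetical input file
tweaked = modifyImageBscc(frame, brightness=1.2, sharpness=0.8, contrast=1.4, color=1.0)
tweaked.save('frame_0001_bscc.png')    # hypothetical output file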
Example #10
Source File: datasets.py From ICIAR2018 with MIT License | 6 votes |
def __getitem__(self, index):
    im, xpatch, ypatch, rotation, flip, enhance = np.unravel_index(index, self.shape)

    with Image.open(self.names[im]) as img:
        extractor = PatchExtractor(img=img, patch_size=PATCH_SIZE, stride=self.stride)
        patch = extractor.extract_patch((xpatch, ypatch))

        if rotation != 0:
            patch = patch.rotate(rotation * 90)

        if flip != 0:
            patch = patch.transpose(Image.FLIP_LEFT_RIGHT)

        if enhance != 0:
            factors = np.random.uniform(.5, 1.5, 3)
            patch = ImageEnhance.Color(patch).enhance(factors[0])
            patch = ImageEnhance.Contrast(patch).enhance(factors[1])
            patch = ImageEnhance.Brightness(patch).enhance(factors[2])

        label = self.labels[self.names[im]]
        return transforms.ToTensor()(patch), label
Example #11
Source File: transforms_tools.py From deep-image-retrieval with BSD 3-Clause "New" or "Revised" License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #12
Source File: common_qr.py From NanoWalletBot with BSD 3-Clause "New" or "Revised" License | 6 votes |
def account_by_qr(qr_file):
    qr = qrtools.QR()
    qr.decode(qr_file)
    # Try to increase contrast if not recognized
    if ('xrb_' not in qr.data):
        image = Image.open(qr_file)
        contrast = ImageEnhance.Contrast(image)
        image = contrast.enhance(7)
        image.save('{0}'.format(qr_file.replace('.jpg', '_.jpg')), 'JPEG')
        qr2 = qrtools.QR()
        qr2.decode('{0}'.format(qr_file.replace('.jpg', '_.jpg')))
        #print(qr2.data)
        qr = qr2
    returning = qr.data.replace('nano:', '').replace('xrb:', '').replace('nano://', '').replace('raiblocks://', '').replace('raiblocks:', '').split('?')
    # parsing amount
    if (len(returning) > 1):
        if ('amount=' in returning[1]):
            returning[1] = returning[1].replace('amount=', '')
            # don't use empty
            if (len(returning[1]) == 0):
                returning.pop()
        else:
            returning.pop()
    return returning
Example #13
Source File: functional.py From ACoL with MIT License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #14
Source File: CuteR.py From CuteR with GNU General Public License v3.0 | 6 votes |
def produce(txt, img, ver=5, err_crt=qrcode.constants.ERROR_CORRECT_H, bri=1.0, cont=1.0,
            colourful=False, rgba=(0, 0, 0, 255), pixelate=False):
    """Produce QR code

    :txt: QR text
    :img: Image path / Image object
    :ver: QR version
    :err_crt: QR error correct
    :bri: Brightness enhance
    :cont: Contrast enhance
    :colourful: If colourful mode
    :rgba: color to replace black
    :pixelate: pixelate
    :returns: list of produced image
    """
    if type(img) is Image.Image:
        pass
    elif type(img) is str:
        img = Image.open(img)
    else:
        return []
    frames = [produce_impl(txt, frame.copy(), ver, err_crt, bri, cont, colourful, rgba, pixelate)
              for frame in ImageSequence.Iterator(img)]
    return frames
Example #15
Source File: functional.py From SPG with MIT License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #16
Source File: transform.py From SegAN with MIT License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL.Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL.Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #17
Source File: util.py From sfcn-opi with MIT License | 6 votes |
def img_test(i, type):
    """
    Visualize a certain image by showing all corresponding images.
    :param i: which image
    :param type: train, test or validation
    """
    img = Image.open(os.path.join(p, 'cls_and_det', type, 'img{}'.format(i), 'img{}.bmp'.format(i)))
    imgd = Image.open(os.path.join(p, 'cls_and_det', type, 'img{}'.format(i), 'img{}_detection.bmp'.format(i)))
    imgc = Image.open(os.path.join(p, 'cls_and_det', type, 'img{}'.format(i), 'img{}_classification.bmp'.format(i)))
    imgv = Image.open(os.path.join(p, 'cls_and_det', type, 'img{}'.format(i), 'img{}_verifiy_classification.bmp'.format(i)))
    imgz = Image.open(os.path.join(p, 'cls_and_det', type, 'img{}'.format(i), 'img{}_verifiy_detection.bmp'.format(i)))
    contrast = ImageEnhance.Contrast(imgd)
    contrast2 = ImageEnhance.Contrast(imgc)
    img.show(img)
    imgv.show(imgv)
    imgz.show(imgz)
    contrast.enhance(20).show(imgd)
    contrast2.enhance(20).show(imgc)
Example #18
Source File: transforms.py From fast-depth with MIT License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #19
Source File: autocaptcha.py From sjtu-automata with GNU General Public License v3.0 | 6 votes |
def autocaptcha(path):
    """Auto identify captcha in path.

    Use pytesseract to identify captcha.

    Args:
        path: string, image path.

    Returns:
        string, OCR identified code.
    """
    im = Image.open(path)
    im = im.convert('L')
    im = ImageEnhance.Contrast(im)
    im = im.enhance(3)
    img2 = Image.new('RGB', (150, 60), (255, 255, 255))
    img2.paste(im.copy(), (25, 10))
    # TODO: add auto environment detect
    return pytesseract.image_to_string(img2)
Example #20
Source File: transforms.py From self-supervised-depth-completion with MIT License | 6 votes |
def adjust_contrast(img, contrast_factor):
    """Adjust contrast of an Image.

    Args:
        img (PIL Image): PIL Image to be adjusted.
        contrast_factor (float): How much to adjust the contrast. Can be any
            non negative number. 0 gives a solid gray image, 1 gives the
            original image while 2 increases the contrast by a factor of 2.

    Returns:
        PIL Image: Contrast adjusted image.
    """
    if not _is_pil_image(img):
        raise TypeError('img should be PIL Image. Got {}'.format(type(img)))

    enhancer = ImageEnhance.Contrast(img)
    img = enhancer.enhance(contrast_factor)
    return img
Example #21
Source File: auto_augment.py From pytorch-auto-augment with MIT License | 5 votes |
def __init__(self):
    self.policies = [
        ['Invert', 0.1, 7, 'Contrast', 0.2, 6],
        ['Rotate', 0.7, 2, 'TranslateX', 0.3, 9],
        ['Sharpness', 0.8, 1, 'Sharpness', 0.9, 3],
        ['ShearY', 0.5, 8, 'TranslateY', 0.7, 9],
        ['AutoContrast', 0.5, 8, 'Equalize', 0.9, 2],
        ['ShearY', 0.2, 7, 'Posterize', 0.3, 7],
        ['Color', 0.4, 3, 'Brightness', 0.6, 7],
        ['Sharpness', 0.3, 9, 'Brightness', 0.7, 9],
        ['Equalize', 0.6, 5, 'Equalize', 0.5, 1],
        ['Contrast', 0.6, 7, 'Sharpness', 0.6, 5],
        ['Color', 0.7, 7, 'TranslateX', 0.5, 8],
        ['Equalize', 0.3, 7, 'AutoContrast', 0.4, 8],
        ['TranslateY', 0.4, 3, 'Sharpness', 0.2, 6],
        ['Brightness', 0.9, 6, 'Color', 0.2, 8],
        ['Solarize', 0.5, 2, 'Invert', 0.0, 3],
        ['Equalize', 0.2, 0, 'AutoContrast', 0.6, 0],
        ['Equalize', 0.2, 8, 'Equalize', 0.6, 4],
        ['Color', 0.9, 9, 'Equalize', 0.6, 6],
        ['AutoContrast', 0.8, 4, 'Solarize', 0.2, 8],
        ['Brightness', 0.1, 3, 'Color', 0.7, 0],
        ['Solarize', 0.4, 5, 'AutoContrast', 0.9, 3],
        ['TranslateY', 0.9, 9, 'TranslateY', 0.7, 9],
        ['AutoContrast', 0.9, 2, 'Solarize', 0.8, 3],
        ['Equalize', 0.8, 8, 'Invert', 0.1, 3],
        ['TranslateY', 0.7, 9, 'AutoContrast', 0.9, 1],
    ]
Example #22
Source File: transform.py From face-parsing.PyTorch with MIT License | 5 votes |
def __call__(self, im_lb):
    im = im_lb['im']
    lb = im_lb['lb']
    r_brightness = random.uniform(self.brightness[0], self.brightness[1])
    r_contrast = random.uniform(self.contrast[0], self.contrast[1])
    r_saturation = random.uniform(self.saturation[0], self.saturation[1])
    im = ImageEnhance.Brightness(im).enhance(r_brightness)
    im = ImageEnhance.Contrast(im).enhance(r_contrast)
    im = ImageEnhance.Color(im).enhance(r_saturation)
    return dict(im=im, lb=lb)
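The method above is only the __call__ of a colour-jitter transform; the brightness, contrast and saturation ranges it samples from are set up elsewhere in the class. A minimal self-contained sketch of how such a class could be wired, where the __init__ signature and the symmetric-range construction are assumptions for illustration rather than the repo's exact code:

import random
from PIL import Image, ImageEnhance

class ColorJitter(object):
    def __init__(self, brightness=0.5, contrast=0.5, saturation=0.5):
        # Turn a scalar jitter amount into a (low, high) range around 1.0.
        self.brightness = (max(0, 1 - brightness), 1 + brightness)
        self.contrast = (max(0, 1 - contrast), 1 + contrast)
        self.saturation = (max(0, 1 - saturation), 1 + saturation)

    def __call__(self, im_lb):
        im, lb = im_lb['im'], im_lb['lb']
        im = ImageEnhance.Brightness(im).enhance(random.uniform(*self.brightness))
        im = ImageEnhance.Contrast(im).enhance(random.uniform(*self.contrast))
        im = ImageEnhance.Color(im).enhance(random.uniform(*self.saturation))
        return dict(im=im, lb=lb)

# Jitter the image while leaving the segmentation label untouched.
jitter = ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5)
sample = {'im': Image.new('RGB', (64, 64), (128, 90, 60)), 'lb': Image.new('L', (64, 64))}
sample = jitter(sample)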
Example #23
Source File: textart.py From minqlx-plugins with GNU General Public License v3.0 | 5 votes |
def image_to_unicode(self, image, font_data, width=None, height=None):
    img = Image.open(image)
    if width and not height:
        ratio = width / img.size[0]
        img = img.resize((width, round(img.size[1] * ratio * 0.5)), Image.BILINEAR)
    elif not width and height:
        ratio = height / img.size[1]
        img = img.resize((round(img.size[0] * ratio), round(height * 0.5)), Image.BILINEAR)
    else:
        img = img.resize((width, round(height * 0.5)), Image.BILINEAR)
    img = img.convert("L")

    # Enhance!
    #contrast = ImageEnhance.Contrast(img)
    #img = contrast.enhance(0.7)
    #sharpen = ImageEnhance.Sharpness(img)
    #img = sharpen.enhance(1.5)

    # Process data.
    keys = sorted(list(font_data.keys()))
    out = ""
    for y in range(img.size[1]):
        for x in range(img.size[0]):
            lum = img.getpixel((x, y))
            index = bisect.bisect(keys, lum) - 1
            out += chr(random.choice(font_data[keys[index]]))
        out += "\n"
    return out
Example #24
Source File: mydataset.py From ocr.pytorch with MIT License | 5 votes |
def randomColor(image):
    """
    Apply random colour jitter to an image.
    :param image: PIL image
    :return: colour-jittered PIL image
    """
    random_factor = np.random.randint(0, 31) / 10.  # random factor
    color_image = ImageEnhance.Color(image).enhance(random_factor)  # adjust saturation
    random_factor = np.random.randint(10, 21) / 10.  # random factor
    brightness_image = ImageEnhance.Brightness(color_image).enhance(random_factor)  # adjust brightness
    random_factor = np.random.randint(10, 21) / 10.  # random factor
    contrast_image = ImageEnhance.Contrast(brightness_image).enhance(random_factor)  # adjust contrast
    random_factor = np.random.randint(0, 31) / 10.  # random factor
    return ImageEnhance.Sharpness(contrast_image).enhance(random_factor)  # adjust sharpness
Example #25
Source File: deepfry.py From BotHub with Apache License 2.0 | 5 votes |
def deepfry(img: Image) -> Image:
    colours = (
        (randint(50, 200), randint(40, 170), randint(40, 190)),
        (randint(190, 255), randint(170, 240), randint(180, 250))
    )
    img = img.copy().convert("RGB")

    # Crush image to hell and back
    img = img.convert("RGB")
    width, height = img.width, img.height
    img = img.resize((int(width ** uniform(0.8, 0.9)), int(height ** uniform(0.8, 0.9))), resample=Image.LANCZOS)
    img = img.resize((int(width ** uniform(0.85, 0.95)), int(height ** uniform(0.85, 0.95))), resample=Image.BILINEAR)
    img = img.resize((int(width ** uniform(0.89, 0.98)), int(height ** uniform(0.89, 0.98))), resample=Image.BICUBIC)
    img = img.resize((width, height), resample=Image.BICUBIC)
    img = ImageOps.posterize(img, randint(3, 7))

    # Generate colour overlay
    overlay = img.split()[0]
    overlay = ImageEnhance.Contrast(overlay).enhance(uniform(1.0, 2.0))
    overlay = ImageEnhance.Brightness(overlay).enhance(uniform(1.0, 2.0))
    overlay = ImageOps.colorize(overlay, colours[0], colours[1])

    # Overlay red and yellow onto main image and sharpen the hell out of it
    img = Image.blend(img, overlay, uniform(0.1, 0.4))
    img = ImageEnhance.Sharpness(img).enhance(randint(5, 300))
    return img
Example #26
Source File: augmentations.py From augmix with Apache License 2.0 | 5 votes |
def contrast(pil_img, level):
    level = float_parameter(sample_level(level), 1.8) + 0.1
    return ImageEnhance.Contrast(pil_img).enhance(level)


# operation that overlaps with ImageNet-C's test set
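The snippet relies on two helpers defined elsewhere in the AugMix augmentations module. The sketch below is an approximation of those helpers (treat the exact bodies and the PARAMETER_MAX constant as assumptions): they turn an integer severity into a continuous level, so with severity 3 the enhance factor lands roughly between 0.1 and 0.65.

import numpy as np
from PIL import Image, ImageEnhance

PARAMETER_MAX = 10  # assumed maximum severity used by the AugMix-style helpers


def float_parameter(level, maxval):
    # Map a level in [0, PARAMETER_MAX] onto [0, maxval].
    return float(level) * maxval / PARAMETER_MAX


def sample_level(n):
    # Sample a continuous level in [0.1, n] so repeated calls vary.
    return np.random.uniform(low=0.1, high=n)


out = contrast(Image.new('RGB', (32, 32), (120, 60, 200)), 3)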
Example #27
Source File: macintoshplus.py From NotSoBot with MIT License | 5 votes |
def contrast(im, k=3):
    enhancer = ImageEnhance.Contrast(im)
    return enhancer.enhance(k)
Example #28
Source File: Transform.py From VideoSuperResolution with MIT License | 5 votes |
def call(self, img: Image.Image):
    contrast = self.value
    return ImageEnhance.Contrast(img).enhance(contrast)
Example #29
Source File: test_imageenhance.py From python3_ios with BSD 3-Clause "New" or "Revised" License | 5 votes |
def test_sanity(self):
    # FIXME: assert_image
    # Implicit asserts no exception:
    ImageEnhance.Color(hopper()).enhance(0.5)
    ImageEnhance.Contrast(hopper()).enhance(0.5)
    ImageEnhance.Brightness(hopper()).enhance(0.5)
    ImageEnhance.Sharpness(hopper()).enhance(0.5)
Example #30
Source File: CVTransforms.py From ext_portrait_segmentation with MIT License | 5 votes |
def __call__(self, image, label):
    if random.random() < set_ratio:
        return image, label

    image = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    random_factor = np.random.randint(4, 17) / 10.
    color_image = ImageEnhance.Color(image).enhance(random_factor)
    random_factor = np.random.randint(4, 17) / 10.
    brightness_image = ImageEnhance.Brightness(color_image).enhance(random_factor)
    random_factor = np.random.randint(6, 15) / 10.
    contrast_image = ImageEnhance.Contrast(brightness_image).enhance(random_factor)
    random_factor = np.random.randint(8, 13) / 10.
    image = ImageEnhance.Sharpness(contrast_image).enhance(random_factor)
    return np.uint8(np.array(image)[:, :, ::-1]), label
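This transform converts an OpenCV BGR array to a PIL image, jitters colour, brightness, contrast and sharpness, and converts back to BGR while passing the label through untouched; set_ratio (presumably supplied by the transform's constructor) is the probability of skipping the jitter. A self-contained sketch of the same pattern, where the class name, constructor and the loop over enhancers are illustrative assumptions rather than the repo's code:

import random
import cv2
import numpy as np
from PIL import Image, ImageEnhance

class RandomColorJitterBGR(object):
    """Hypothetical wrapper around the __call__ above; skips jitter set_ratio of the time."""
    def __init__(self, set_ratio=0.5):
        self.set_ratio = set_ratio

    def __call__(self, image, label):
        if random.random() < self.set_ratio:
            return image, label
        pil = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        for enhancer, lo, hi in ((ImageEnhance.Color, 4, 17), (ImageEnhance.Brightness, 4, 17),
                                 (ImageEnhance.Contrast, 6, 15), (ImageEnhance.Sharpness, 8, 13)):
            pil = enhancer(pil).enhance(np.random.randint(lo, hi) / 10.)
        return np.uint8(np.array(pil)[:, :, ::-1]), label

# Apply to a dummy BGR frame and its segmentation mask.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=np.uint8)
frame, mask = RandomColorJitterBGR(set_ratio=0.5)(frame, mask)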