
Can we predict what a baby will look like as an adult?

2012-10-21

Let's use the power of technology to predict what a baby will look like!

This case study takes one frontal photo each of the father and the mother, generates a photo of their child, and lets you adjust parameters to see what the child would look like at different ages and as either gender.

For the best results, upload photos of the parents in which the facial features are clearly visible, ideally against a light-colored background.

This case study is for learning and exchange only; please do not use it for any other purpose.

Also, because the technique is imperfect, the generated child photos may be distorted. If that happens, try different photos of the parents and regenerate until you are satisfied with the result.

Let's walk through the case study step by step.

1. Install the required modules

This step takes about 4 minutes.

!pip install imutils moviepy dlib
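BabyGAN is built on StyleGAN, which runs on TensorFlow 1.x, and the face alignment below relies on dlib. If later steps fail with graph or session errors, check the runtime first; a minimal sanity check (assuming the notebook's preinstalled TensorFlow):

import tensorflow as tf
import dlib

print(tf.__version__)              # StyleGAN-era code expects TensorFlow 1.x
print(tf.test.is_gpu_available())  # the encoding in step 7 is much faster on a GPU
print(dlib.DLIB_USE_CUDA)          # True if dlib was built with CUDA support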

2. Download the code and model files

import os
import moxing as mox  # moxing is the ModelArts SDK for transferring files to/from OBS

root_dir = '/home/ma-user/work/ma_share/'
code_dir = os.path.join(root_dir, 'BabyGAN')
if not os.path.exists(os.path.join(root_dir, 'BabyGAN.zip')):
    mox.file.copy('obs://arthur-1/BabyGAN/BabyGAN.zip', os.path.join(root_dir, 'BabyGAN.zip'))
    os.system('cd %s; unzip BabyGAN.zip' % root_dir)
os.chdir(code_dir)

3. Load the modules and models

import cv2
import math
import pickle
import imageio
import warnings
import PIL.Image
import numpy as np
from glob import glob
from PIL import Image
import tensorflow as tf
from random import randrange
import moviepy.editor as mpy
import matplotlib.pyplot as plt
from IPython.display import clear_output
from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter

import config
import dnnlib
import dnnlib.tflib as tflib
from encoder.generator_model import Generator

%matplotlib inline
warnings.filterwarnings("ignore")

Load the model file. This code block can be run only once; if it raises an error, restart the kernel and rerun all of the code.

tflib.init_tf()

# Load the pre-trained StyleGAN (FFHQ, 1024x1024) networks
URL_FFHQ = "./karras2019stylegan-ffhq-1024x1024.pkl"
with dnnlib.util.open_url(URL_FFHQ, cache_dir=config.cache_dir) as f:
    generator_network, discriminator_network, Gs_network = pickle.load(f)

generator = Generator(Gs_network, batch_size=1, randomize_noise=False)
model_scale = int(2 * (math.log(1024, 2) - 1))  # 18 style layers at 1024x1024

# Pre-computed latent directions for semantic edits
age_direction = np.load('./ffhq_dataset/latent_directions/age.npy')
horizontal_direction = np.load('./ffhq_dataset/latent_directions/angle_horizontal.npy')
vertical_direction = np.load('./ffhq_dataset/latent_directions/angle_vertical.npy')
eyes_open_direction = np.load('./ffhq_dataset/latent_directions/eyes_open.npy')
gender_direction = np.load('./ffhq_dataset/latent_directions/gender.npy')
smile_direction = np.load('./ffhq_dataset/latent_directions/smile.npy')

def get_watermarked(pil_image: Image) -> Image:
    # Overlay the logo watermark in the bottom-left corner; fall back to the original on failure
    try:
        image = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2BGR)
        (h, w) = image.shape[:2]
        image = np.dstack([image, np.ones((h, w), dtype="uint8") * 255])
        pct = 0.08
        full_watermark = cv2.imread('./media/logo.png', cv2.IMREAD_UNCHANGED)
        (fwH, fwW) = full_watermark.shape[:2]
        wH = int(pct * h * 2)
        wW = int((wH * fwW) / fwH * 0.1)
        watermark = cv2.resize(full_watermark, (wH, wW), interpolation=cv2.INTER_AREA)
        overlay = np.zeros((h, w, 4), dtype="uint8")
        (wH, wW) = watermark.shape[:2]
        overlay[h - wH - 10: h - 10, 10: 10 + wW] = watermark
        output = image.copy()
        cv2.addWeighted(overlay, 0.5, output, 1.0, 0, output)
        rgb_image = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
        return Image.fromarray(rgb_image)
    except:
        return pil_image

def generate_final_images(latent_vector, direction, coeffs, i):
    # Apply the direction edit to the coarse layers, render, and save one animation frame
    new_latent_vector = latent_vector.copy()
    new_latent_vector[:8] = (latent_vector + coeffs * direction)[:8]
    new_latent_vector = new_latent_vector.reshape((1, 18, 512))
    generator.set_dlatents(new_latent_vector)
    img_array = generator.generate_images()[0]
    img = PIL.Image.fromarray(img_array, 'RGB')
    if size[0] >= 512:
        img = get_watermarked(img)
    img_path = "./for_animation/" + str(i) + ".png"
    img.thumbnail(animation_size, PIL.Image.ANTIALIAS)
    img.save(img_path)
    face_img.append(imageio.imread(img_path))
    clear_output()
    return img

def generate_final_image(latent_vector, direction, coeffs):
    # Same edit as above, but render a single image and save it as face.png
    new_latent_vector = latent_vector.copy()
    new_latent_vector[:8] = (latent_vector + coeffs * direction)[:8]
    new_latent_vector = new_latent_vector.reshape((1, 18, 512))
    generator.set_dlatents(new_latent_vector)
    img_array = generator.generate_images()[0]
    img = PIL.Image.fromarray(img_array, 'RGB')
    if size[0] >= 512:
        img = get_watermarked(img)
    img.thumbnail(size, PIL.Image.ANTIALIAS)
    img.save("face.png")
    if download_image == True:  # note: `files` is Colab-only and not imported here; keep download_image = False
        files.download("face.png")
    return img

def plot_three_images(imgB, fs=10):
    # Show father / child / mother side by side
    f, axarr = plt.subplots(1, 3, figsize=(fs, fs))
    axarr[0].imshow(Image.open('./aligned_images/father_01.png'))
    axarr[0].title.set_text("Father's photo")
    axarr[1].imshow(imgB)
    axarr[1].title.set_text("Child's photo")
    axarr[2].imshow(Image.open('./aligned_images/mother_01.png'))
    axarr[2].title.set_text("Mother's photo")
    plt.setp(plt.gcf().get_axes(), xticks=[], yticks=[])
    plt.show()
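For orientation, both generator helpers above perform the same latent edit: a scaled direction vector is added to the first 8 of the 18 style layers, the coarse layers that control attributes such as age and pose. A minimal sketch of that edit in isolation, assuming the first_face latent produced in step 7 is available:

coeff = 1.5
edited = first_face.copy()
edited[:8] = (first_face + coeff * smile_direction)[:8]  # edit coarse layers only
generator.set_dlatents(edited.reshape((1, 18, 512)))
smiling = PIL.Image.fromarray(generator.generate_images()[0], 'RGB')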

4. Prepare photos of the father and mother

A default photo of each parent is already provided. In the file browser on the left, open the ma_share/BabyGAN directory, then the father_image or mother_image directory, to view them.

If you want to use different photos of the parents, see Section 11, "Replace the photos of the father and mother".

if len(glob(os.path.join('./father_image', '*.jpg'))) != 1 or (not os.path.exists('./father_image/father.jpg')):
    raise Exception('Please put exactly one photo of the father, named father.jpg, in the ma_share/BabyGAN/father_image directory')
if len(glob(os.path.join('./mother_image', '*.jpg'))) != 1 or (not os.path.exists('./mother_image/mother.jpg')):
    raise Exception('Please put exactly one photo of the mother, named mother.jpg, in the ma_share/BabyGAN/mother_image directory')
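Beyond the filename checks above, you can also verify that each photo contains exactly one detectable face before running the alignment steps. A sketch using dlib (installed in step 1); count_faces is a hypothetical helper, not part of the BabyGAN code:

import dlib

def count_faces(path):
    # Hypothetical helper: count frontal faces dlib finds in the image
    detector = dlib.get_frontal_face_detector()
    image = dlib.load_rgb_image(path)
    return len(detector(image, 1))

print(count_faces('./father_image/father.jpg'))  # expect 1
print(count_faces('./mother_image/mother.jpg'))  # expect 1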

5. Extract and align the father's face

!python align_images.py ./father_image ./aligned_images

View the father's face:

if os.path.isfile('./aligned_images/father_01.png'):
    pil_father = Image.open('./aligned_images/father_01.png')
    (fat_width, fat_height) = pil_father.size
    resize_fat = max(fat_width, fat_height) / 256
    display(pil_father.resize((int(fat_width / resize_fat), int(fat_height / resize_fat))))
else:
    raise ValueError('No face was found or there is more than one in the photo.')

6. Extract and align the mother's face

!python align_images.py ./mother_image ./aligned_images

View the mother's face:

if os.path.isfile('./aligned_images/mother_01.png'):
    pil_mother = Image.open('./aligned_images/mother_01.png')
    (mot_width, mot_height) = pil_mother.size
    resize_mot = max(mot_width, mot_height) / 256
    display(pil_mother.resize((int(mot_width / resize_mot), int(mot_height / resize_mot))))
else:
    raise ValueError('No face was found or there is more than one in the photo.')

7. Extract the facial features

This step takes about 3 minutes.

!python encode_images.py \
    --early_stopping False \
    --lr=0.25 \
    --batch_size=2 \
    --iterations=100 \
    --output_video=False \
    ./aligned_images \
    ./generated_images \
    ./latent_representations

if len(glob(os.path.join('./generated_images', '*.png'))) == 2:
    first_face = np.load('./latent_representations/father_01.npy')
    second_face = np.load('./latent_representations/mother_01.npy')
    print("Generation of latent representation is complete! Now comes the fun part.")
else:
    raise ValueError('Something went wrong. It may be impossible to read the faces in the photos. Upload other photos and try again.')
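Optionally, you can sanity-check the recovered latent codes; each should be an 18 x 512 matrix, one 512-dimensional vector per StyleGAN style layer (matching model_scale = 18 from step 3):

print(first_face.shape)   # expected: (18, 512)
print(second_face.shape)  # expected: (18, 512)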

8. Generate a photo of the family of three

Adjust the genes_influence and person_age parameters in the code below:

genes_influence: parental-influence factor, range [0.01, 0.99]. The closer the value is to 0, the more the child resembles the father; the closer to 1, the more the child resembles the mother.

person_age: age factor, range [10, 50]. The child is rendered at the age you set.

After each change, rerun the code block below to generate a new photo of the child.

genes_influence = 0.8  # parental-influence factor, range [0.01, 0.99]; closer to 0 favors the father, closer to 1 the mother
person_age = 10  # age factor, range [10, 50]; the child is rendered at this age
style = "Default"

if style == "Father's photo":
    lr = ((np.arange(1, model_scale + 1) / model_scale) ** genes_influence).reshape((model_scale, 1))
    rl = 1 - lr
    hybrid_face = (lr * first_face) + (rl * second_face)
elif style == "Mother's photo":
    lr = ((np.arange(1, model_scale + 1) / model_scale) ** (1 - genes_influence)).reshape((model_scale, 1))
    rl = 1 - lr
    hybrid_face = (rl * first_face) + (lr * second_face)
else:
    hybrid_face = ((1 - genes_influence) * first_face) + (genes_influence * second_face)

intensity = -((person_age / 5) - 6)  # map age in [10, 50] to a coefficient in [4, -4]

resolution = "512"
size = int(resolution), int(resolution)
download_image = False
face = generate_final_image(hybrid_face, age_direction, intensity)
plot_three_images(face, fs=15)
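For reference, the intensity line above maps person_age linearly onto the coefficient applied along age_direction: age 10 gives +4.0, age 30 gives 0.0, and age 50 gives -4.0. A quick check of the formula:

for person_age in (10, 30, 50):
    print(person_age, -((person_age / 5) - 6))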

9. View the child's appearance at different ages

Adjust the gender_influence parameter in the code below. It is the parental-influence factor, range [0.01, 0.99]: the closer the value is to 0, the more the child resembles the father; the closer to 1, the mother.

After each change, rerun the code block below.

gender_influence = 0.8  # parental-influence factor, range [0.01, 0.99]; closer to 0 favors the father, closer to 1 the mother

!rm -rf ./for_animation
!mkdir ./for_animation

face_img = []
hybrid_face = ((1 - gender_influence) * first_face) + (gender_influence * second_face)
animation_resolution = "512"
animation_size = int(animation_resolution), int(animation_resolution)
frames_number = 50
download_image = False

for i in range(0, frames_number, 1):
    intensity = (8 * (i / (frames_number - 1))) - 4  # sweep the age coefficient from -4 to +4
    generate_final_images(hybrid_face, age_direction, intensity, i)
    clear_output()
    print(str(i) + " of {} photos generated".format(str(frames_number)))

# Append the frames again in reverse order so the animation loops smoothly
for j in reversed(face_img):
    face_img.append(j)

automatic_download = False
if gender_influence <= 0.3:
    animation_name = "boy.mp4"
elif gender_influence >= 0.7:
    animation_name = "girl.mp4"
else:
    animation_name = "animation.mp4"
imageio.mimsave('./for_animation/' + animation_name, face_img)
clear_output()
display(mpy.ipython_display('./for_animation/' + animation_name, height=400, autoplay=1, loop=1))
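For reference, the loop above sweeps the age coefficient linearly from -4 to +4, which, inverting the mapping from step 8 (age = 5 * (6 - intensity)), runs from roughly age 50 down to age 10. A quick check of the endpoints:

frames_number = 50
for i in (0, frames_number - 1):
    intensity = (8 * (i / (frames_number - 1))) - 4
    print('frame %d: coefficient %.1f, implied age %.0f' % (i, intensity, 5 * (6 - intensity)))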

10. View the child's appearance as different genders

Adjust the person_age parameter in the code below. It is the age factor, range [10, 50]; the child is rendered at the age you set.

After each change, rerun the code block below.

person_age = 10  # the child's age, range [10, 50]; the child is rendered at this age

!rm -rf ./for_animation
!mkdir ./for_animation

face_img = []
intensity = -((person_age / 5) - 6)
animation_resolution = "512"
animation_size = int(animation_resolution), int(animation_resolution)
frames_number = 50  # number of frames in the morph, range [10, 50]
download_image = False

for i in range(1, frames_number):
    gender_influence = i / frames_number  # sweep parental influence from father toward mother
    hybrid_face = ((1 - gender_influence) * first_face) + (gender_influence * second_face)
    face = generate_final_images(hybrid_face, age_direction, intensity, i)
    clear_output()
    print(str(i) + " of {} photos generated".format(str(frames_number)))

# Append the frames again in reverse order so the animation loops smoothly
for j in reversed(face_img):
    face_img.append(j)

animation_name = str(person_age) + "_years.mp4"
imageio.mimsave('./for_animation/' + animation_name, face_img)
clear_output()
display(mpy.ipython_display('./for_animation/' + animation_name, height=400, autoplay=1, loop=1))

11. Replace the photos of the father and mother

Next, you can upload photos of parents you are interested in to the father_image and mother_image directories and rerun the code to generate a new child photo.

Follow these rules and steps (a pre-flight check is sketched after the list):

1、參考下圖的操作,進入到 ma_share/BabyGAN 目錄;

2. Prepare a photo of the father and upload it to the father_image directory; it must be named father.jpg. (If you are not sure how to upload files to JupyterLab, see this document.)

3. Prepare a photo of the mother and upload it to the mother_image directory; it must be named mother.jpg;

4. The father_image and mother_image directories must each contain exactly one photo;

5. Rerun the code in steps 4 through 10.
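Before rerunning, a quick sketch (paths and naming rules as in step 4) to confirm the directories are set up correctly:

import os
from glob import glob

for folder, name in [('./father_image', 'father.jpg'), ('./mother_image', 'mother.jpg')]:
    photos = glob(os.path.join(folder, '*.jpg'))
    assert len(photos) == 1 and os.path.exists(os.path.join(folder, name)), \
        'Expected exactly one photo named %s in %s' % (name, folder)
print('Photos look good; rerun steps 4 through 10.')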

This article is shared from the Huawei Cloud community post "BabyGAN: Generating a child's photo from the parents' photos", by 山海之光.
