Deepfake technology has gained immense attention in recent years, making it a topic of interest for creators and technologists. If you're curious about how to create a deepfake, this guide breaks down the process into manageable steps. Whether you’re preparing for a demo or simply exploring this fascinating technology, here’s everything you need to know.
Before diving into the steps, ensure you have the right setup:
Hardware: a machine with a capable NVIDIA GPU (the DeepFaceLive build used below is the NVIDIA build) and enough disk space for the extracted frames.
Software: DeepFaceLab for dataset preparation and model training, and DeepFaceLive for the real-time swap, both referenced in the steps below.
To help with the data extraction, we created a Google Drive containing the scripts referenced in the steps below. To request access to the Drive, send us a message at deepfake@truly.ws.
Your dataset is the foundation of the deepfake process. Follow these tips to collect and prepare high-quality material:
Edit and combine your source videos to focus solely on the subject:
Save the edited video to the workdir/src directory (in this example, the subject is Ted Pick, Morgan Stanley's CEO). Then run the "1_extract_images_from_video" script to extract frames from the video. These frames serve as the building blocks for your deepfake.
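If you want a sense of what the extraction step does under the hood, here is a minimal sketch using OpenCV. The file and directory names are placeholders, and the actual script in the Drive may rely on ffmpeg with different naming and quality settings.

```python
# Minimal sketch of the frame-extraction step, assuming OpenCV is installed.
# Paths are hypothetical; the real "1_extract_images_from_video" script may
# use ffmpeg and its own naming scheme.
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str) -> int:
    """Dump every frame of the source video as a numbered JPEG."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(f"{out_dir}/{count:06d}.jpg", frame)
        count += 1
    cap.release()
    return count

if __name__ == "__main__":
    n = extract_frames("workdir/src/source.mp4", "workdir/src/frames")
    print(f"Extracted {n} frames")
```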
Use the "2_faceset_extract" script to extract faces.
For the parameters, set image_size to 256 and quality to 100.
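To make those two parameters concrete, here is a rough stand-in using OpenCV's bundled Haar cascade. The real "2_faceset_extract" script uses its own detector and alignment, so treat this only as an illustration of what image_size=256 and quality=100 control.

```python
# Illustration of the faceset-extraction parameters, using OpenCV's Haar
# cascade as a stand-in detector (not the detector the real script uses).
import cv2
from pathlib import Path

IMAGE_SIZE = 256    # image_size parameter from the guide
JPEG_QUALITY = 100  # quality parameter from the guide

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(frames_dir: str, out_dir: str) -> None:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for frame_path in sorted(Path(frames_dir).glob("*.jpg")):
        img = cv2.imread(str(frame_path))
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        for i, (x, y, w, h) in enumerate(detector.detectMultiScale(gray, 1.1, 5)):
            # Crop the detected face, resize it to image_size, and save it
            # as a JPEG at the requested quality.
            face = cv2.resize(img[y:y + h, x:x + w], (IMAGE_SIZE, IMAGE_SIZE))
            cv2.imwrite(str(Path(out_dir) / f"{frame_path.stem}_{i}.jpg"),
                        face, [cv2.IMWRITE_JPEG_QUALITY, JPEG_QUALITY])
```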
Apply XSeg masks to the dataset using "3_XSeg_mask_apply".
Use the "4_1_data_src_util_faceset_pack" script to pack the entire aligned faceset into a single file for fast loading.
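The packing step simply bundles thousands of small aligned images into one file so training can read them quickly. The real pack format is DeepFaceLab-specific; the sketch below shows the same idea with a plain zip archive, and the paths are placeholders.

```python
# Conceptual sketch of faceset packing: bundle many small aligned images into
# one file. The real pack format is DeepFaceLab-specific; this uses a plain
# uncompressed zip archive just to illustrate the idea.
import zipfile
from pathlib import Path

def pack_faceset(aligned_dir: str, out_file: str) -> None:
    with zipfile.ZipFile(out_file, "w", compression=zipfile.ZIP_STORED) as zf:
        for img in sorted(Path(aligned_dir).glob("*.jpg")):
            zf.write(img, arcname=img.name)

pack_faceset("workdir/src/aligned", "workdir/src/faceset_packed.zip")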
This is where the deepfake magic happens.
Once training is complete, export the trained model to DFM format (.dfm). Install DeepFaceLive with the installer provided in our Drive, “DeepFaceLive_NVIDIA_build_07_09_2023”.
Go into the newly created "deepfacelive" directory, then into userdata/dfm_models, and place your renamed DFM file in this folder.
Under Camera Source, select your camera via device_index and choose the resolution you want to stream at; we recommend 640x480.
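If you are unsure which device_index maps to which camera, a quick standalone check with OpenCV (not part of DeepFaceLive, just a convenience assumed here) can confirm the index and the 640x480 resolution before you configure the Camera Source:

```python
# Quick check of which device_index corresponds to your camera and whether it
# delivers the recommended 640x480 frames. DeepFaceLive exposes the same
# settings in its Camera Source panel; this is only a standalone sanity test.
import cv2

DEVICE_INDEX = 0          # try 0, 1, 2, ... until the right camera opens
WIDTH, HEIGHT = 640, 480  # recommended streaming resolution

cap = cv2.VideoCapture(DEVICE_INDEX)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, WIDTH)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, HEIGHT)

ok, frame = cap.read()
if ok:
    print(f"device {DEVICE_INDEX} delivers {frame.shape[1]}x{frame.shape[0]} frames")
else:
    print(f"device {DEVICE_INDEX} could not be opened")
cap.release()
```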
Choose the following settings for the Face Detector:
In the Face Aligner, choose “from points” under Mode:
Use the Google FaceMesh marker with your GPU device.
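For the curious, the marker is Google's FaceMesh model, which returns a dense set of facial landmarks that the aligner uses. The short sketch below (using the mediapipe package on the CPU and a hypothetical test image, rather than DeepFaceLive's GPU path) shows the kind of points it produces:

```python
# Standalone peek at the landmarks Google FaceMesh produces.
# Requires the mediapipe package; "face.jpg" is a hypothetical test image.
import cv2
import mediapipe as mp

mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)

img = cv2.imread("face.jpg")
results = mesh.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    print(f"FaceMesh returned {len(landmarks)} landmarks")  # 468 points
    # Each landmark carries normalized x, y (and relative z) coordinates.
    print(landmarks[1].x, landmarks[1].y)
```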
Now go to Face Swap DFM and choose the model you placed in the dfm_models directory.
In the “Face Merger” section, select the color transfer option as “rct.”
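In DeepFaceLab/DeepFaceLive terminology, “rct” refers to Reinhard color transfer: the swapped face is shifted so its per-channel mean and standard deviation in LAB color space match the surrounding frame, which keeps skin tones consistent. Below is a minimal sketch of the idea, not DeepFaceLive's exact implementation:

```python
# Minimal sketch of Reinhard color transfer ("rct"): match the source face's
# LAB mean and standard deviation to those of the target frame.
import cv2
import numpy as np

def reinhard_color_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Make `source` adopt the color statistics of `target` (both BGR uint8)."""
    src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    out = (src - src_mean) / src_std * tgt_std + tgt_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```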
Stream the output by choosing “Stream output” with “Merged frame” as the source and setting “Target delay” to 0.
Click “Window” to view the output on a larger screen.
And that’s it - you’re all set!
By following these steps, you'll be equipped to create and demonstrate deepfake technology with confidence. While the process requires attention to detail and powerful hardware, the results are a fascinating exploration of AI and creativity.
Remember: With great power comes great responsibility. Always use deepfake technology ethically and transparently.