This article is based on using the Insta360 ONE X2. Let’s say you want to make a video tour of your house. You’re not a pro, you don’t want to spend a lot of money, you don’t really know what you’re doing, but you do want a video tour of each room of your house for marketing purposes, for example. Here’s an example of a house tour but only showing one room (area) – the backyard.
Here’s one easy way to do it using the Insta360 ONE X2.
Put the camera on a tripod in a room
For this example, I put the camera in the backyard as shown below.
Start recording and leave the room
In post editing, we’ll trim the beginning of the video so you’re not in it.
Wait a while, e.g. 40 seconds
It’s up to you how long you want to wait; in the example backyard video above, the duration is 37 seconds. If you mess up and the video is too short, you can slow it down in post editing to two or four times the duration.
Go back into the room and stop recording
In post editing, we’ll trim the end of the video so you’re not in it.
Transfer the video to your computer
I just use a USB-C cable to transfer the video. Note that each video has 3 files because the video is unstitched and has the proprietary .insv (Insta360 Video) file extension.
Edit the video in Insta360 Studio
Open the video (you can just open one of the 3 insv files) and start editing.
Enable Flowstate Stabilization (although maybe that’s not necessary since the camera is static on a tripod)
Move the left trim marker to where you want the video to begin (the point after you’ve left the room)
Move the right trim marker to where you want the video to end (the point before you reenter the room)
Set the aspect ratio to 16:9 (standard TV screen size)
Add 5 keyframes (indicated in yellow circles) on the timeline with the following specs
Keyframe 1
Timestamp = beginning of the video in the timeline
Pan angle = 0°
View = Natural view
Keyframe 2
Timestamp = 25% of the video duration from the beginning
Pan angle = 90°
View = Natural view
Keyframe 3
Timestamp = 50% of the video duration from the beginning
Pan angle = 180°
View = Natural view
Keyframe 4
Timestamp = 75% of the video duration from the beginning
Pan angle = 270°
View = Natural view
Keyframe 5
Timestamp = end of the video in the timeline
Pan angle = 360°
View = Natural view
Note that Insta360 Studio displays a pan angle of 360° as 0°, since they are the same direction.
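The five keyframes are just an even rotation: the pan angle grows linearly from 0° to 360° across the clip. As a sketch of the arithmetic (using the 37-second backyard clip as the assumed duration):

```python
# Compute the five keyframe timestamps and pan angles for one full
# 360-degree rotation spread evenly over the length of the clip.
duration = 37.0  # clip length in seconds (the example backyard video)

keyframes = []
for i in range(5):
    fraction = i / 4  # 0%, 25%, 50%, 75%, 100% of the clip
    keyframes.append({
        "timestamp_s": round(duration * fraction, 2),
        "pan_angle_deg": 360 * fraction,  # 0, 90, 180, 270, 360
        "view": "Natural view",
    })

for kf in keyframes:
    print(kf)
```

The same formula works for any number of keyframes if you want a smoother rotation, e.g. nine keyframes at 45° steps.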
Choose a transition
Between each keyframe, you can click on the yellow line to pick a transition type. The default is “Smooth Dissolve” which is recommended.
Adjust video speed
If the video is too short or too long, you can slow it down (2x or 4x) or speed it up (2x, 4x, 6x, 8x, 16x, 32x, 64x). Just click the lightning icon and drag from the beginning to the end of the clip. Then click on the pink bar to change the speed.
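A speed change scales the clip duration inversely, so you can predict the resulting length before committing. This is plain arithmetic, not tied to the app:

```python
def new_duration(original_s, speed):
    """Duration after a speed change: 2x halves it, 0.5x (half speed) doubles it."""
    return original_s / speed

print(new_duration(18, 0.5))  # slow an 18 s clip to half speed -> 36.0 s
print(new_duration(37, 4))    # speed a 37 s clip up 4x -> 9.25 s
```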
Export the video
In the dialog box, you can choose either H.264 or H.265.
H.265 produces a smaller file size but takes longer to render compared to H.264 for the same level of video quality.
For example, for an 18-second video:
H.264 – 85 MB
H.265 – 55 MB
If you’re just going to upload the video to YouTube, then you might want to pick H.264, since the extra time to upload the larger file may be less than the extra time to render in H.265. Rendering will stitch the video together and produce an MP4 file, which can be opened and viewed in most video applications.
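Whether the smaller H.265 file is worth the longer render depends on your upload speed. Here is a back-of-the-envelope comparison; the upload speed and render times below are made-up assumptions for illustration, not measurements:

```python
def total_time_s(file_mb, render_s, upload_mbps):
    """Render time plus upload time for a given file size and link speed."""
    upload_s = file_mb * 8 / upload_mbps  # MB -> megabits, then divide by Mbps
    return render_s + upload_s

upload_mbps = 20  # assumed upload bandwidth

# Assumed render times: H.265 takes roughly 3x as long for the same quality.
h264 = total_time_s(85, render_s=60, upload_mbps=upload_mbps)
h265 = total_time_s(55, render_s=180, upload_mbps=upload_mbps)
print(h264, h265)  # pick whichever total is smaller
```

With these numbers, H.264 wins despite the larger upload; on a very slow connection the balance can flip.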
Repeat for each room or area of the house
Once you’ve edited the videos for each room and exported them as MP4s, you can use any regular non-360 video editor to combine the clips, add background music, text overlays, etc.
Here’s an example of the master bathroom with additional rotation to show the ceiling.
Again, the camera was just placed in the center of the room.
Have you ever wanted to take a picture of something but weren’t sure if it was allowed or felt unsure if people would be offended if they saw you point your camera at them? If so, one way around this is by taking pictures using a 360° camera like the Insta360 ONE X2.
Since the camera can take a picture of everything around you, you can point the camera at a 90° angle or 180° away from the object you want to take a picture of. Then, in post editing, just drag the picture around to face the object of interest and click the snapshot button to export a regular, flat picture. By pointing your camera away and looking away from your object of interest, no one would know that you’re actually taking a picture of something else.
This is my usual workflow when filming and editing 360 videos using the Insta360 ONE X2/X3 and Corel VideoStudio Pro.
Hold or Mount the Camera
Selfie Stick in Hand
When I’m walking, I like to hold the selfie stick in my left (or right) hand with the stick extended such that the camera lens is at face height. Optionally, the point of interest could be behind me. In post-production reframing, I could rotate the view to sometimes face me and sometimes face some other direction.
Selfie Stick on Tripod
In certain situations, I’ll turn the selfie stick into a tripod and place it on the floor/ground or on a table.
Chest Mount (Body Cam)
When I don’t want to hold the camera for a long time or when I don’t want people to know that I’m filming them, e.g. when boarding an airplane, when standing in a subway car or bus, when buying something, etc, I’ll mount my camera to my chest.
This mode lets a single press of the shutter button do two things at once:
Turn on the camera and start recording
Turn off the camera and stop recording
While traveling, this can be really handy since you won’t have to waste time clicking two different buttons and waiting in between.
The recording mode will be the last mode used. If you switched the mode to timelapse and then turned off the camera, then the shutter button will turn on the camera and start recording in timelapse mode, not regular video mode.
Enable Standard Video Mode
This mode is for regular video shooting. Other modes are HDR, Timelapse, Timeshift and Bullet Time. HDR is only good for when you are filming on a tripod and not moving.
Disable Prompt Sound
By default, when you turn on and off the camera or start and stop recording, you will hear an annoying beep. Disable this “Prompt Sound”. To know whether the camera is on, off, or recording, just look at the light status.
Light off = camera off
Light solid blue = camera on
Light slowly flashing red = filming in progress
Press The Shutter Button to Start Filming
Fingerprints on the lenses can result in blurry videos. Always wipe the lenses with a clean lens cloth before filming.
When you are out and about traveling and you want to start filming, just press the shutter button once. Since Quick Capture will be enabled, you won’t need to turn the camera on first. Note that there is a bit of a delay before the camera actually starts recording.
Press The Shutter Button to Stop Filming
When you are done filming, just press the shutter button once again. Since Quick Capture will be enabled, you won’t need to turn the camera off as it will turn off automatically (and save battery).
Note: The latest Windows version of the Insta360 Studio app is version 4.3.0, which came out on 2022-05-24. The previous version is 4.2.2. I have found version 4.3.0 to have significant performance issues so I have reverted back to version 4.2.2 which performs just fine. See Insta360 Studio versions
Connect a USB cable between the camera and your computer. On Windows, it auto-detects the camera as an external drive. You can then copy and paste the .insv files. Note that each video contains 3 .insv files. For example, below there are actually only 2 videos.
Note that transferring video files from the Insta360 ONE X2 to your computer can be really slow. For example, when I transfer files from the Insta360 ONE X2 to my laptop over USB C, it transfers at about 30 MB/s.
Video files are huge. You may not have enough space on your laptop to store video files. And even if you did, you should have a backup, e.g. on an external SSD. I like the SanDisk Extreme 1TB Portable External SSD Flash Storage Drive, which claims to have a data transfer rate of up to 1050MB/s.
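At those transfer rates, you can estimate copy times with simple arithmetic (using the quoted 30 MB/s over USB-C and the SSD’s advertised 1050 MB/s):

```python
def transfer_minutes(size_gb, speed_mb_per_s):
    """Minutes to copy a file of size_gb gigabytes at speed_mb_per_s."""
    return size_gb * 1000 / speed_mb_per_s / 60

print(round(transfer_minutes(10, 30), 1))    # 10 GB from the X2 over USB-C -> ~5.6 min
print(round(transfer_minutes(10, 1050), 1))  # 10 GB to the SSD at its rated speed -> ~0.2 min
```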
Import the .insv Files into Insta360 Studio
In the left pane, you will see thumbnails for all uploaded videos. .insv files are unstitched 360-degree files that can only be opened in the Insta360 Studio or mobile app.
Reframe the 360-Degree Video
In the right panel, you will have many options. Enable FlowState Stabilization so that the video isn’t jerky. If you want the video view to always face one direction (front), then enable Direction Lock.
At the top, you’ll see two icons. One for edit mode and one for view mode. Ensure edit mode is on. In the video preview window, you’ll see an option to change the video aspect ratio. The most common is 16:9, e.g. for TVs and YouTube.
To reframe the 360 video, you’ll need to first add keyframes at each timestamp where you want to change the angle and lens of the video. Drag the white vertical playback bar to the very beginning of the timeline and click the + icon.
This will add a yellow circle at that timestamp, indicating that a keyframe is there. Also, the + icon turns into an x icon in case you want to delete that keyframe. Clicking on the keyframe shows options where you can choose the lens type, e.g. fisheye, etc. Choose “Natural View”. Then, drag the video preview in any direction you want so that the video beginning at that keyframe will point in that direction.
When you click “Natural View”, the Field of View (FOV) value will change to the default value of 90 degrees.
Following is what the FOV looks like at 90 degrees.
You can increase the field of view (like zooming out) and decrease the field of view (like zooming in). Since the ONE X2 is a 360 camera with two 180-degree lenses, you can increase the FOV to a max of 180 degrees (actually 179 degrees). You’ll end up getting a circle like the one below.
The smallest FOV is 1 degree, which results in a view so zoomed in that it’s just blurry, like below.
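As a rough illustration of why shrinking the FOV acts like zooming in: for an idealized rectilinear projection, magnification relative to the 90-degree default scales with the tangent of half the FOV. This is a simplification, not necessarily how Insta360 Studio computes its views:

```python
import math

def relative_zoom(fov_deg, reference_deg=90):
    """Approximate magnification vs. the 90-degree default FOV
    for an idealized rectilinear (pinhole) projection."""
    return math.tan(math.radians(reference_deg) / 2) / math.tan(math.radians(fov_deg) / 2)

print(round(relative_zoom(90), 2))   # 1.0 at the default FOV
print(round(relative_zoom(45), 2))   # narrower FOV -> zoomed in
print(round(relative_zoom(179), 2))  # near-180 FOV -> zoomed far out
```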
Drag the playback bar to another timestamp and repeat the steps above. You will see a yellow line connecting the two keyframes.
Clicking on the yellow line will allow you to choose a transition between the two keyframes. “Smooth Dissolve” is a good transition. If you choose “None”, for example, and the camera angle is facing the sky in keyframe 1 and facing the ground in keyframe 2, then at the beginning of keyframe 2, the video will jump from facing the sky to facing the ground. With the “Smooth Dissolve” transition, the video will transition slowly from facing the sky to the ground.
Click the lightning icon to enable Timeshift.
Then drag in the timeline where you want the timeshift to occur and choose a speed from slowing down at 1/4x speed to speeding up to 64x speed. Sections that are timelapses will have their audio muted.
Note: Creating a timelapse in a video editing tool like Corel VideoStudio is easier and comes with more options. Also, it preserves the audio.
When you are done adding keyframes, setting camera angles, and adding timeshifts, adjust the picture color, if necessary. Click the media processing icon and compare the color when Color Plus or AquaVision 2.0 is enabled.
Below, you can see the difference in color between the default color and with Color Plus or AquaVision 2.0 enabled.
The AquaVision 2.0 setting is for taking underwater pictures. It produces a brighter picture. The Color Plus setting produces vivid, more saturated colors. It especially helps improve skin tone when the subject is in a shade.
Note: Adjusting color in Corel VideoStudio is better as you can tweak the color settings.
When you’re done adjusting for color, click the yellow Export button on the right of the timeline.
This will give you many options. Choose “Reframed Video” and leave the default as H.264. If you choose H.265, the video won’t open in certain programs like Corel VideoStudio Pro.
When exporting videos in the Insta360 Studio app, I tested various bitrate settings from 1 Mbps to 200 Mbps (the max). I saw no difference in quality between 25 Mbps and 200 Mbps. I also did not see a difference in file size or quality between H.264 and H.265. So, I choose H.264 at 25 Mbps.
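At a fixed bitrate, the codec choice barely matters for size, since file size is roughly bitrate times duration. A quick estimate (ignoring audio and container overhead):

```python
def estimated_size_mb(bitrate_mbps, duration_s):
    """Rough video file size: megabits per second x seconds, converted to megabytes."""
    return bitrate_mbps * duration_s / 8

print(estimated_size_mb(25, 60))   # one minute at 25 Mbps -> 187.5 MB
print(estimated_size_mb(200, 60))  # one minute at the 200 Mbps max -> 1500.0 MB
```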
Then, click the Start Export button.
Tweak video color in Corel VideoStudio
In VideoStudio 2018 Pro/Ultimate, double-click on a video clip in the timeline. In the Correction tab, slide the Gamma slider to the right to lighten the video. This may throw the white balance off. To fix the white balance, check its checkbox, click “Pick Color,” and click on a pixel in the video that should be pure white, e.g. a white napkin.
In VideoStudio 2018 Pro/Ultimate, double-click on a video clip in the timeline. In the Color tab, choose Tone Curve and drag the curve. Usually, dragging towards the top left produces a brighter picture.
Edit video in Corel VideoStudio 2022
Note: Corel VideoStudio 2018 used to work fine but, at least for me, it now hangs even after installing and running it on a brand new computer. Corel VideoStudio 2022, however, works fine.
This step assumes you have already reframed and exported the videos.
Under Settings > Smart Proxy Manager, ensure “Enable Smart Proxy” is checked. This will cause VideoStudio to create a small proxy video for videos that are large, which helps with video editing performance.
In the Edit tab, create a project, e.g. Korea, and drag all assets to it (photos, videos, audio, etc).
If the assets are in the order you want them inserted in the timeline, then select multiple assets, right-click, and choose “Insert To > Video Track”. This will make it quick and easy to insert multiple assets at once.
Go to Settings > Smart Proxy Manager > Smart Proxy Queue Manager. You may see a window like the one below. If you see video files in the list, that means VideoStudio is in the process of creating proxy video files. When it’s done, the list will be empty. Until it’s done, leave VideoStudio alone since editing before it’s done could be slow and possibly crash the program.
If you’ve added a song to the Music track, and you want to lower the volume of the song for a section of a video clip, e.g. when someone is talking, then do the following:
Click the sound mixer icon
Then tracks will change like below.
The white line is the audio volume line. Notice how on the music track, the volume line goes down at one point and then up at another. This was done to lower the volume of the song during that time range only. To lower the volume, move the playback marker to the point along the orange line where you want to change the volume. Then, in the sound mixer, change the dB value for the track whose volume you want to change. In the screenshot below, the music track volume was lowered to -20 dB.
VideoStudio will gradually change the volume from one point to another. If you want the volume to change immediately, then you’ll need to add another marker next to the first one.
If you want to speed up a video clip, right-click on the clip and choose Speed > Speed/Time-lapse…
This will open a dialog like the one below. Change the duration of the new clip, e.g. from 18 seconds to 8 seconds, and then click the Preview button. If the preview looks good, click OK to apply the timelapse.
Before you export the video from Corel VideoStudio, note that the video clips exported from Insta360 Studio have a frame rate of 29.97 frames (still images) per second (fps).
Corel VideoStudio may default to choosing export settings of 30p (30 fps). If you choose this setting, the audio will not be in sync with the video.
Corel VideoStudio doesn’t have a preset for 29.97p (29.97 fps).
So, you’ll need to create your own profile preset and choose a frame rate of 29.97 fps.
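The sync issue comes from the small frame-rate mismatch: roughly speaking, if 29.97 fps footage is reinterpreted at 30 fps, video and audio drift apart by about 0.1%, which adds up over a long project. A quick calculation:

```python
def drift_seconds(duration_s, source_fps=29.97, playback_fps=30):
    """How far video drifts from audio when source frames are
    reinterpreted at a different playback frame rate."""
    frames = duration_s * source_fps
    return duration_s - frames / playback_fps

# Over a 30-minute video, the drift is almost two seconds.
print(round(drift_seconds(30 * 60), 2))
```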
Workflow Summary
Take 360 videos
Transfer videos to SanDisk external SSD drive
Open 360 videos in Insta360 Studio. 360 videos have huge file sizes.
Take snapshots, if you’re going to make a video of pictures.
Convert them to natural view mp4s which are much smaller. For each video,
Enable ColorPlus, if necessary
Set start and end positions, if necessary
At start position, set the field of view (FOV) to “Natural View”
Add additional FOV points to change viewing angle, as necessary
Click the Export button, choose “Reframed Video”, set the bitrate to 25 Mbps (25000 kbps) and the codec to H.264 (not H.265), and click “Add to Queue”
When done editing each video, click “Export All”
When the batch of 360 videos has been converted to mp4s, select them all in Insta360 Studio, right-click, and select “Delete Original File” to delete the large 360 videos.
If you want to create a video of an animated character that moves its head and lips as you move your head and speak, you can do so easily using Adobe Character Animator. Here’s how.
1. Import a puppet
In Adobe Character Animator, click File > Import and select the puppet file.
2. Import a green screen
Since we’ll want to overlay the exported character animation on other elements in a video editing program, we’ll want to add a green screen so we can key it out. Create a solid green image (RGB = 0, 255, 0) the size of the scene, e.g. 1920 x 1080. Then, import it and drag the imported item to the lowest layer in the Timeline panel.
3. Enable Puppet Track Behaviors
We can tell Adobe Character Animator which parts of our face and body to track as we move and talk in the camera. Click on the puppet layer to reveal the Puppet Track Behaviors panel.
A red dot indicates that the particular item will be tracked when you move in front of the camera. For example, the Face item, when expanded, will show a red dot by “Camera Input”, meaning that if you move your face in front of the camera, your facial gestures will be tracked and the puppet’s face will move accordingly.
For the lip sync item, the red dot is by “Audio Input” so if you speak, the microphone will capture your voice and convert it into lip movements on your puppet.
4. Set the rest pose
For Adobe Character Animator to track your head and lip movements, you need to enable your camera and microphone. You’ll see a circle where your face should be centered in your resting position. Once centered, click the “Set Rest Pose” button. You’ll then see a bunch of red dots on your face indicating points where Adobe Character Animator will track your facial gestures.
5. Start recording
Click the red record button. A 3 second countdown timer will begin. Start talking naturally and when you are done, click the red button again to stop recording.
You’ll then see some layers added to the timeline including your voice audio layer.
If some of the layers are longer than the audio layer, e.g. the puppet, Visemes and green screen layers in the screenshot above, trim the scene so the duration of the scene is the duration of the audio. Drag the right end of the gray Work Area bar to the right end of the audio track. Then, right click on that bar and click on “Trim Scene to Work Area”.
Now, your scene duration will just be the duration of the Work Area, in this case 5:20.
6. Preview and export the result
Click the play button to preview the recording. If you are happy with it, you can export it by clicking File > Export > Video via Adobe Media Encoder. This will open Adobe Media Encoder. In the Queue panel, choose a format (H.264) and a preset (Match Source – High bitrate or YouTube 1080p Full HD). Then, click the green play button to start encoding.
You will see the encoding progress in the Encoding panel. You’ll also see the video duration of 5:21, as that is the length of the scene in this example.
So, in my 2-story house, my internet modem is in the family room at the back of the house. The internet comes over coaxial cable from Comcast Xfinity at 1 Gbps. There is a security camera at the front of the house facing the driveway, and every now and then the security camera would go offline.

To spread wifi all over the house, I have the TP-Link Deco M9 Plus AC2200 mesh wifi router (3 wifi access points). The backhaul between access points is wifi, unfortunately. I can’t have a wired ethernet backhaul between access points because running ethernet cable would require opening up walls, which is a lot of work. Fortunately, however, there is existing coaxial cabling throughout the house, so I can use MoCA (Multimedia over Coax Alliance) adapters to bridge ethernet over coax and get a wired coax backhaul between access points. This allows the wifi signal at each access point to be much stronger than with a wifi backhaul.

There are many diagrams and tutorials online, but none that I found were clear enough, hence this blog post. Below is my setup with a diagram which should make it clear what goes where.
The continuity tester doesn’t work through splitters. Once you’ve tested all cables, you can label them in your junction box like I did below. As you can see, there is a 1 – 2 splitter where the one input is the coax cable from xfinity. The two outputs each go to the master bedroom and family room.
I added a new coax cable to go to the garage but it’s not connected in the picture because I need to add another splitter or replace the existing splitter with a 1-3 (or more) splitter. For MoCA to work, you need a splitter that
is not amplified
goes up to at least 1.5 GHz (1500 MHz)
Before and After
Powerline Adapters
You can also bridge ethernet over your home’s existing electrical wiring using Powerline adapters, e.g.
However, these adapters don’t work if there’s a surge suppressor. Also, there’s a lot more activity in your home electrical wiring that could interfere with the signal, e.g. from the refrigerator, hair dryers, air conditioners, washing machines, and other appliances.
It’s pretty clear now that mesh networks produce stronger wifi signals throughout larger spaces when compared to regular wifi routers even with range extenders. But, many mesh networks only instruct users to connect each router over wifi. While this may be fine in some situations, e.g. where you can’t run ethernet between a main router and a satellite, having a wired backhaul produces a much better wifi signal coming out of the downstream satellite router. Here are instructions to set this up using the TP-LINK Deco AX1800 X20 (W3600 if you bought it from Walmart).
Steps
Restart the modem
Follow the instructions to set up the main router
Follow instructions to set up the satellite router over wifi (default)
When the light on the satellite turns green, then you know the satellite router is connected to the main router. At this point, since you haven’t connected an ethernet cable between the two routers, the connection is over wifi. You can verify this by opening the Deco app and clicking the satellite router. You should see the “Signal Source” value show a wifi symbol followed by 2.4GHz/5GHz.
If you have a laptop or smartphone that is connected to the satellite router, you can run a speed test. In this example, I have a laptop that is connected over wifi to the satellite router. After running a speed test, connect an ethernet cable between the two routers. The light on the satellite router will turn red temporarily and then turn green when connected. Similarly, the satellite router’s status in the app will appear disconnected. Click the refresh button and you should see the “Signal Source” value change to “Ethernet”, as shown in the screenshot below. This confirms that you are using a wired backhaul.
Now, run a wifi speed test from the satellite router. In my case, my laptop was still connected to the satellite router. The wifi speed test results were clearly much faster when the satellite was connected over ethernet rather than wifi. This setup is very useful when you need a strong wifi signal far away from your main router and you can run a long ethernet cable between routers.
Note: I got this 2 router TP-Link AX1800 mesh wifi system from Walmart for $129. At Walmart, the model number is W3600 whereas on Amazon it’s X20.
If you’re on Google Fi and you’ve landed in another country, turned off Airplane mode, and get no data, try the following (with wifi off so you don’t get confused).
Open the Google Fi App
The app should welcome you to the country you have just landed in. However, it may also say “You’re offline”.
Update Settings
Go to Settings > Network & internet > Mobile network >
Ensure “Mobile data” is enabled
Ensure “Roaming” is enabled
Click “Advanced” and disable “Automatically select network”. You will then see a list of local cellular networks. Some may be G, 3G and LTE. Pick one that works, preferably a faster one.
At this point, you should be able to connect to the internet on your phone.
Pros:
Modular and therefore can add modules that offer different / better features
HDR (high dynamic range) for better image quality
More advanced desktop editing software
Cons:
Modular and therefore can be a hassle to have to switch modules, especially quickly in order to capture a moving target
GoPro Max
Pros:
Easy to use without having to assemble modular parts
Cons:
No HDR (high dynamic range)
Desktop editing software not as powerful as the Insta360 Studio
Insta360 One X2
Pros:
Small
HDR (high dynamic range) for better image quality
Ricoh Theta SC2
After testing the GoPro Max, Insta360 One X2, and the Ricoh Theta SC2, it’s clear that the Insta360 One X2 is the better camera.
Virtual Reality / 3D Panorama Software
Marzipano
Marzipano is free and open source. You can use the Marzipano tool to quickly upload 360 photos and then download a complete website with all code to host yourself. However, you can only zoom out so much as shown in the screenshot below.
Kuula
Kuula lets you upload 360 photos and embed a 360 viewer of your photos on your website. You can also zoom out much more than with Marzipano as shown in the screenshot below.
You can then take a screenshot of the zoomed-out 360 photo, which doesn’t show very warped or curved lines.
Metareal
Metareal is a great alternative to MatterPort. You can create floorplans as well and pay a nominal fee to have Metareal convert your 360 photos into virtual tours for you.
Photoshop
In Adobe Photoshop, you can import a 3D panorama photo.
In the lower left corner, when you have the white grid enabled, you will see orbit, pan and dolly buttons to move the image around.
Under Properties, you can adjust the Vertical FOV (Field of View) to zoom in and out.
GoPro Player Desktop App
The GoPro Player desktop app will also open 360 photos and let you rotate and zoom in and out. But, unlike Photoshop and Kuula, you’ll get a fisheye view as shown below.
Google Photos Mobile App
The Google Photos mobile app has a Panorama feature, but you have to move your camera horizontally or vertically to create the panorama. It’s not a full 360-degree panorama, but it does support scrolling in Google Photos.
Insta360 Studio
The Insta360 Studio desktop app is definitely better than the GoPro Player desktop app. It’s got more features and is intuitive to use.
People’s faces vary significantly from person to person and change over time as they age. Some men lose hair, some women pluck and lose their eyebrows or change their eyebrow shape, some men change their beard or mustache style, and last but not least, some people have or develop a natural or accidental issue with their nose, whether it’s crooked, asymmetric, bumpy, droopy, too large, or so shallow that they can’t comfortably wear glasses.
For men, the most common operation is probably a hair transplant. For women, the most common operation is probably rhinoplasty (nose job) although many women should probably just get an eyebrow transplant instead of drawing their eyebrows on their skin which looks obviously fake.
Interesting fact: Iran has the highest rate of nose surgery in the world, and according to a report in the conservative Etemad newspaper, as many as 200,000 Iranians, mostly women, go to cosmetic surgeons each year for a nose job. Source
This article explains one way to edit a 3D version of your face. It can be helpful if you are just curious about what a change may look like or if you are trying to explain your desired outcome to someone.
The following image shows the photos I started with (left column), the 3D faces generated from the photos (middle column), and the 3D faces after editing (right column).
1. Take a photo of someone’s face
For demonstration purposes, I took a screenshot of a 3D image of a random person on Sketchfab. You can take just a front photo but it’s better to take pictures of both sides as well.
When taking photos, you should look straight and not tilt your head. You should have neutral gestures (no smiling, etc), and you shouldn’t wear glasses.
2. Load the photos into FaceGen 3D Print
Download FaceGen 3D Print. You can download the demo version. You won’t get all of the features but you may not need all the features. The cheapest paid version costs $69. Install the program, click Create > Photo > and upload the photos from step 1.
3. Mark specific points
FaceGen will then instruct you to mark specific points on your photos so that it can better generate a 3D image.
4. Generate 3D image
After you click the “Create from photo(s)” button, FaceGen will take about 30 seconds to analyze the photos and then generate a 3D image. This technique of generating a 3D image from photos is called photogrammetry. If you have a 3D scanner, you can also load a scanned 3D image.
You can drag the 3D image around in any direction. Since I’m using the demo version, there is a blue FG (FaceGen) watermark on the image.
5. Overlay original photo to check accuracy
The generated 3D image may not be perfect. To fix that, we can overlay our still photos on the FaceGen window and tweak the 3D image to match the photos. One overlay utility that works is called Overlay. After installing it, load your still photo, drag the Overlay window over the FaceGen window, and scale the overlaid photo so that the facial elements of the overlaid photo and the underlying 3D image line up. You can then see whether the generated 3D image is sufficiently accurate or needs tweaking.
6. Edit the 3D image to match the photos
Click the “Float” button in the Overlay controls. Then, in FaceGen, click Modify > Interactive and edit the 3D image as follows:
Hold down the ‘Ctrl’ key then left-click and drag any point on the face (symmetric).
Hold down the ‘Ctrl’ key then right-click and drag any point on the face (asymmetric).
Symmetric will make changes symmetrically, e.g. if you edit the left eyebrow, then the right eyebrow will get the exact same edits. If you only want to edit one side / location, then use the asymmetric option.
In this step, your goal is to just tweak the 3D image to more closely match the photos.
7. Edit the 3D image to your desired result
After tweaking the 3D image to match the still photos, you can start editing the 3D image to your desired transformation using the same technique as in the previous step. Following are some extreme examples for demonstration purposes.
If you click on Modify > Shape, you can modify preset facial elements, e.g. nose nostril size, etc.
8. Further editing
Though FaceGen has many features, it seems to lack the ability to modify 3D images in certain ways. For example, one complaint many people seem to have is of a hump on their nose.
FaceGen doesn’t seem to have a way to reshape a hump like that. To resolve this, export the 3D image out of FaceGen as an OBJ file.
Then, download AutoDesk MeshMixer. It’s free. Install MeshMixer and import the 3D image you exported from FaceGen. With MeshMixer, you can sculpt your 3D image, e.g. click Sculpt > Brushes > Drag, adjust the strength, size, depth, etc of the brush, and then drag on the 3D image. Since my demo model didn’t have a hump on the nose, I created (an exaggerated) one. Note that all of this editing is in 3D so you can rotate the image around.
Another tool you can use is FaceTouchUp. But, it only works with 2D flat images, which, depending on your needs / goals, may be sufficient.
9. Upload 3D image for sharing
After you export your 3D image as an OBJ file, you can upload it to Sketchfab where you can share it with others. For example, below is an embed of the 3D image I took for this demo.
If you took a screenshot of the 3D image before and after you made edits, you can use a morphing program to show the transformation from the before state to the after one.
3D Scanner
Generating a 3D image from still photos works pretty well. But, you can also create a 3D image of your face (or any object) using a 3D scanner. Revopoint POP 3D Scanner ($500) is one such scanner. It’s supposed to generate a more accurate 3D model by using infrared light to calculate depth. However, it doesn’t capture anything in black so if you have black hair or a black beard, it will not pick those up.
Ellipsoidal Reflector Spotlight (ERS)
This type of light is used to highlight certain subjects or stage pieces with a relatively narrow beam angle.
Parabolic Aluminized Reflector (PAR)
This type of light is used to light up large areas. They come in a variety of lens types to get different beam angles. This light doesn’t have zoom or focus options. This is the most common fixture because it’s the cheapest.
Fresnel
This type of light is a happy medium between a PAR and an ERS. It has a zoom function but no focus function, and usually casts a much “softer” light than ERS fixtures.
Moving Head
This type of light can move. It offers different beam angles for spot (narrow), wash (wide), beam (laser) and hybrid light effects. It is the most versatile stage lighting option.
Above are only some of the more common types of lights.
To hang your lights, you can get a lighting stand with T-Bar.
DMX Interface
DMX (Digital Multiplexing) or, officially, USITT DMX512, is a unidirectional serial data protocol, meaning the signal leaves the controller (computer or lighting board) and travels through all lighting fixtures in a daisy-chain. It was standardized in 1986. DMX networks typically only have one master device on the network, usually the DAW software / controller, and many slave devices — the lights.
DMX Cable
The 5-pin XLR is the standard connector.
The reason for five pins is that pin 1 is the ground, pins 2 and 3 are data link 1, and pins 4 and 5 were reserved for data link 2 and/or proprietary data. Over the years, the second pair of pins (4 and 5) stopped being used, since 3-pin DMX proved to be very reliable. This is why you may see fixtures with a 3-pin connector, a 5-pin connector, or both.
DMX vs Audio/Mic XLR Cables
Some DMX cables are 3-pin cables. Don’t confuse them with 3-pin audio or mic cables. DMX cables have a characteristic impedance of roughly 110 ohms, whereas microphone cables are typically around 45 ohms. This impedance difference matters in lighting networks and can cause your lights to either not respond or respond sporadically.
Number of light fixtures per DMX cable
You cannot have more than 32 devices connected on a single chain. If you have more than 32 light fixtures, you would need to use an Opto-Splitter. A splitter like the Chauvet DJ Data Stream 4 will allow you to have 32 devices connected to each DMX output connector. You cannot use Y-cables, as this approach does not electrically isolate the DMX lines and would cause data reflections.
DMX Channels / Universe
A DMX line is limited to a total of 512 channels, which is also called a universe. Each lighting fixture you have uses a number of DMX channels depending on how many parameters the fixture has. Lights can also have multiple personalities, or profiles, depending on how much or how little control you want. Note that the 512-channel limit is independent of the 32-light fixture limit.
Let’s say you have 40 lighting fixtures that use three channels each: you are only using 120 channels total, so they all fit in the same universe of control. However, because you have more than 32 devices, you would use an Opto-Splitter and split your devices among its outputs in whatever configuration you like, as long as each DMX leg has no more than 32 devices on it.
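The arithmetic above can be sketched as a quick check against the two limits (512 channels per universe, 32 devices per daisy-chained line). The fixture list here is hypothetical, matching the 40-fixture example:

```python
import math

# Hypothetical rig: 40 PAR fixtures, 3 channels each (as in the example above).
FIXTURES = [("PAR", 3)] * 40

MAX_CHANNELS_PER_UNIVERSE = 512
MAX_DEVICES_PER_LINE = 32

total_channels = sum(channels for _, channels in FIXTURES)
num_devices = len(FIXTURES)

print(f"Channels used: {total_channels} / {MAX_CHANNELS_PER_UNIVERSE}")  # 120 / 512
print(f"Devices: {num_devices}")  # 40

if total_channels <= MAX_CHANNELS_PER_UNIVERSE and num_devices > MAX_DEVICES_PER_LINE:
    # One universe is enough, but the chain must be split with an opto-splitter.
    legs = math.ceil(num_devices / MAX_DEVICES_PER_LINE)
    print(f"Fits in one universe, but needs an opto-splitter with >= {legs} legs")  # 2
```

For the example rig, this reports 120 of 512 channels used and that the 40 devices need at least two opto-splitter legs.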
Example
Let’s look at the Chauvet DJ SlimPAR Pro H.
This light has three different personalities, or profiles. It can be used in 6-, 7-, or 10-channel mode, and again, the more channels a fixture uses, the more control you have. Let’s look at 7-channel mode:
Channel 1: Dimmer
Channel 2: Red
Channel 3: Green
Channel 4: Blue
Channel 5: Amber
Channel 6: White
Channel 7: UV
Each DMX parameter on a fixture operates independently. Say I wanted to make this fixture magenta. I would turn up channel 2 (Red) and channel 4 (Blue) until I got my desired shade of magenta. However, turning up just those channels (2 and 4) would not put out any light; I would also need to turn up channel 1, the dimmer, which controls the overall intensity. On moving fixtures, this control becomes even more complex because there are other parameters available, such as Pan, Tilt, and gobos, all of them likewise independent.
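Conceptually, a DMX universe is just 512 channel values, each 0–255. A minimal sketch of the magenta example, using the 7-channel layout listed above (1=Dimmer, 2=Red, 3=Green, 4=Blue, 5=Amber, 6=White, 7=UV) for a fixture addressed at channel 1:

```python
# A DMX universe: 512 channels, each holding a value from 0 to 255.
universe = [0] * 512  # all channels off

def set_channel(universe, channel, value):
    """Set a 1-based DMX channel to a value (0-255); list indexing is 0-based."""
    universe[channel - 1] = value

# Red + Blue mix to magenta, but nothing lights up until the dimmer comes up.
set_channel(universe, 2, 255)  # Channel 2: Red
set_channel(universe, 4, 255)  # Channel 4: Blue
set_channel(universe, 1, 255)  # Channel 1: Dimmer (overall intensity)

print(universe[:7])  # [255, 255, 0, 255, 0, 0, 0]
```

Actually transmitting the universe to fixtures requires a DMX interface; this sketch only models the channel values themselves.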
The best way to understand a light’s capabilities is by checking its DMX assignments. For example, the ADJ Starburst’s manual shows this.
Addresses
When setting up a lighting rig, each light fixture needs to be assigned a starting address. If I have four of the same fixture mentioned above in the same personality (7-channel mode), their addresses would be 1, 8, 15, and 22. All 512 channels of data flow through every fixture in a DMX lighting chain so each fixture needs to know which channels control it based on channel addressing.
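The addressing pattern above is simple arithmetic: each fixture’s starting address is the previous one plus its channel count. A small sketch that reproduces the 1, 8, 15, 22 example:

```python
def starting_addresses(num_fixtures, channels_per_fixture, first_address=1):
    """Starting DMX addresses for identical fixtures patched back to back."""
    addresses = []
    address = first_address
    for _ in range(num_fixtures):
        addresses.append(address)
        address += channels_per_fixture
    return addresses

# Four fixtures in 7-channel mode, patched from address 1:
print(starting_addresses(4, 7))  # [1, 8, 15, 22]

# The same four fixtures in 3-channel mode would land at:
print(starting_addresses(4, 3))  # [1, 4, 7, 10]
```

The same calculation applies when patching fixtures in DMXIS later on; only the channel count per fixture changes.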
There are many different DAW programs, for example Ableton Live and Pro Tools. They can be used to create music and control lights. This article isn’t about creating music, though, just about controlling lights in sync with existing music. Therefore, the software we’ll use is Show Buddy.
Setup
To sync light effects with music (e.g. an existing mp3 file), we’ll use this setup.
USB to DMX Controller Interface to connect laptop to lights
DMXIS software to control lights / create light effects
Unlike cheaper USB to DMX interfaces, the DMXIS interface has an on-board controller that generates the DMX data stream itself, which is much more reliable than having the computer generate it.
Show Buddy software to sync light effects with music – $119
DMXIS light controller software is required to run Show Buddy
Terminate the DMX cable chain by inserting a DMX terminator into the DMX Out port of the last light fixture.
DMXIS Software
Important terminology
Show (a group of songs): You can create multiple shows. A show is typically the name of a list of songs, e.g. Yanni Concert
Bank (a song): You can create multiple banks per show. A bank can be the name of a song, e.g. Santorini
Preset (a light effect): You can create multiple presets per bank. A preset can be the name of a light effect. You create the presets (light effects) for a bank (song) in the order you want them to occur during song playback. You can drag presets up and down to reorder them.
Workflow
Add a light fixture: If the light fixture you want to add isn’t in the DMXIS library, you can search for it at http://fixtures.dmxis.com/, where you can download user-uploaded fixture definitions for importing into DMXIS. If you don’t find your fixture there, you can create and upload a definition for your particular fixture; it is just a text file defining the channels.
Assign each light fixture to a starting address, e.g.
PAR light 1 (3 channels) starts at DMX address 1 (channels 1-3)
PAR light 2 (3 channels) starts at DMX address 4 (channels 4-6)
PAR light 3 (3 channels) starts at DMX address 7 (channels 7-9)
PAR light 4 (3 channels) starts at DMX address 10 (channels 10-12)
Create a “show”, e.g. “Instrumental Songs”
Create a “bank”, e.g. “Santorini”
Create a preset, e.g. “Red”
Adjust the sliders for one or more light fixtures, e.g. by making them show red light
Show Buddy Software
This software allows you to load audio files (e.g. mp3s) and, for each song, place a preset (light effect) created in DMXIS at certain points in the song. Light effects can fade out over a specified amount of time.
Workflow
Add audio files (songs / mp3s) to the Track Library
Choose the DMXIS show to use in the DMXIS show dropdown
Choose a DMXIS bank to use in the list of banks
Choose a DMXIS preset to use in the list of presets
Drag the preset to the wave form at the point you want the preset (light effect) to run
Repeat steps 2-5 as needed
Resources
Capture Software
This software allows you to preview light effects.
Workflow
Create a 3D stage
Add one or more light fixtures to the 3D stage
Patch light fixtures (assign them to DMX addresses)
If the light fixture address assignment in Capture matches that in DMXIS, then DMXIS can control the light visualizations in Capture