In this post, I’ll show you how to make an animated travel map like the one below using Apple Keynote.
1. Get an image of a map
I usually just go to Google Maps, zoom in/out to the area I want to show, then take a screenshot. In this example, I took a screenshot of the USA because I want to show an animated flight path from San Francisco to Miami.
2. Crop map and optionally add labels
Open the screenshot in an image editor (I use Photoshop) and crop to your target video resolution. My target resolution is 1920 x 1080 (standard HD). I also added some red dots where the start and end points will be as well as some city labels.
3. Get a transparent image of a plane, car, train, boat, etc
Since I want to show an airplane animate along a path, I looked for an image of one in Google Images. The background should be transparent. In Google Images, you can choose Tools > Color > Transparent to find images on a transparent background.
I chose this image.
4. Create a blank Keynote presentation
Open Apple Keynote and choose the basic white theme.
You will get a single slide. Select and delete everything in the slide.
5. Insert background map
Go to Media > Choose and select the background map.
6. Draw a path
Go to Insert > Line > Draw With Pen and draw your travel path.
Click on the start point then click on the end point. You will get a straight line.
In the middle of the line, there will be a point. Click and drag it up if you want to create a curve. Repeat with other midpoints as necessary.
When you’re done, hit the ESC key. We now have our travel path. Let’s change its style: I’m going to make it red and thick. In the right pane, under Format > Style, you can edit the style of the selected element (the curve). I chose a red line that is 7 pt thick.
7. Animate the path
In the top right corner, choose the Animate tab and then “Add an Effect” > “Line Draw”.
You can then change the default animation duration of 2 seconds. I changed it to 10 seconds so that, in my video editor, I can slow it down without it appearing jumpy. I also changed the acceleration to “None”.
Click the “Preview” button to preview the path animation.
8. Add the airplane image
As in step 5, go to Media > Choose and select the airplane image.
Scale the airplane by dragging one of the corners. Drag the airplane to position it at the start point.
Rotate the airplane. In the top right choose Format > Arrange and adjust the rotation value such that the nose of the plane is aligned with the flight path.
9. Animate the airplane
In the top right, click Animate > Action > Add an Effect > Move.
Drag the airplane to the end point. Set the duration and acceleration to match that of the flight path (10 sec, None).
Click Preview to preview the animation. The airplane doesn’t yet follow the flight path. Check the “Align to path” checkbox. A point will appear along the line between the airplane’s start and end points. Drag that middle point to where the flight path is.
Click Preview again. You will see the airplane animate along the flight path.
10. Animate the flight path and airplane at the same time
In the top right, click Animate > Build Out > Build Order.
You will see a list of all animation effects. The first animation is the line (flight path). The second is the plane. Choose the second animation and then, under “Start”, select “With Build 1”.
11. Export the animation
Choose File > Export To > Movie.
Since there’s only 1 slide, you can leave “Slides” set to “All”. The resolution should match that of the background image (1080p).
Log in to AWS and go to EC2 > Instances > Launch an Instance
Enter a name. I’m calling mine “My Web Server”.
For Application and OS Images (Amazon Machine Image), I’ll just choose the defaults: “Amazon Linux 2023” and the “Amazon Linux 2023 AMI”.
Under Key Pair, click “Create new key pair”.
Amazon EC2 can easily create a key pair for you. Just enter a key pair name; I chose “aws-ec2”. Keep the default key format of .pem, since OpenSSH is available on Linux, Mac, and Windows 10+ (on Windows 10+, OpenSSH is an optional feature you must install). Click the “Create Key Pair” button. The private key will be downloaded to your computer. Keep it in a safe place; you will need it to SSH into your EC2 instance.
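If you’d rather SSH in from your own terminal later instead of using the browser-based console, a typical session looks like this (a sketch; substitute your instance’s public IP):

chmod 400 aws-ec2.pem
ssh -i aws-ec2.pem ec2-user@your-instance-public-ip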
In the Network Settings section, since we want to SSH into the EC2 instance and we want to be able to browse our website over HTTP and HTTPS, check those checkboxes.
Leave everything else at their defaults. Review the summary and click the “Launch Instance” button.
You will then see your EC2 instance listed. Wait for the “Status check” to change to “2/2 checks passed”.
Once your instance has been set up, click the button to connect to the instance.
SSH into EC2 Instance
You have a few options to connect to the EC2 instance. For simplicity, choose EC2 Instance Connect. This will open a new browser tab with shell access. Leave the default username as “ec2-user”.
You’ll notice the command prompt changes to ec2-user@ip-172-31-47-114 which is my default username (ec2-user) followed by my EC2 instance’s private IP (ip-172-31-47-114).
Install Apache
Since we installed Amazon Linux 2023, follow these instructions to install Apache. Since we don’t need MySQL and PHP, ignore the commands and instructions for those and install only the httpd package, as sketched below.
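A minimal sketch of the Apache-only install on Amazon Linux 2023 (the AWS tutorial’s install command also lists the MariaDB and PHP packages, which we skip):

sudo dnf install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd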
Make sure to follow the instructions to set the file permissions so that Apache can serve the website.
The Amazon Linux Apache default document root is /var/www/html
The Apache config is at /etc/httpd/conf/httpd.conf
The Apache logs are in /var/log/httpd/
To view Apache errors, run the following command
sudo tail -100 /var/log/httpd/error_log
Test that Apache works by going to the public IP address WITHOUT “https”, e.g. http://34.229.240.7/.
Set Up SSL/TLS
These instructions show how to create a self-signed certificate and a CA-signed certificate. For a self-signed cert, you don’t need a domain name. You can access your website over https by IP address, e.g. https://34.229.240.7/
For a CA-signed cert, you can follow these instructions to automate certificate renewals using Let’s Encrypt with Certbot. You can also use AWS Certificate Manager to manage and automatically renew certs.
Get a Fixed IP Address
The default IP address that AWS gives you is dynamic (will change whenever the server restarts). To get a static (fixed) IP address, get an Elastic IP Address. Once you get one, try to access your website over https, e.g. https://35.173.7.249/
Put Your Website in a GitHub Repo
Create your website locally in a folder. If you have an existing website under Git version control with a lot of history and you want to remove that history, git clone the repo into a new folder, delete the hidden “.git” folder, and then run git init.
Clone your GitHub repo to your EC2 instance. I’m going to clone it to my home folder.
In GitHub, get the SSH URL of your repo.
Then, in your home folder, clone it
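For example (the repo URL here is a placeholder; use the SSH URL you copied from your own repo):

cd ~
git clone git@github.com:your-username/my-website.git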
You will then see a new folder containing your website files from the GitHub repo.
Since my website document root is at /home/ec2-user/my-website/www, we need to update the Apache default document root (/var/www/html) to reference that path by editing the Apache config.
sudo nano /etc/httpd/conf/httpd.conf
Change all references of /var/www/html to /home/ec2-user/my-website/www
Change all references of /var/www to /home/ec2-user/my-website
Restart Apache (sudo systemctl restart httpd)
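For reference, after those edits the relevant lines in httpd.conf should look roughly like this (a sketch, assuming the same paths as mine):

DocumentRoot "/home/ec2-user/my-website/www"
<Directory "/home/ec2-user/my-website/www">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>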
Update Folder Permissions
If you try to view your website, e.g. by going to https://35.173.7.249/, you will probably get a “Forbidden” error. To better understand this error, view the Apache error log.
sudo tail -100 /var/log/httpd/error_log
You will probably see an error like this
[Sat Dec 23 01:36:55.545345 2023] [core:error] [pid 89394:tid 89446] Permission denied: [client 135.125.246.189:49368] AH00035: access to / denied (filesystem path '/home/ec2-user/my-website') because search permissions are missing on a component of the path
To fix this, follow these instructions on how to update file permissions
But replace /var/www with your website document root. In my case, I changed it to /home/ec2-user/my-website
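Here’s a sketch of those commands adapted to my paths (the chmod 711 on the home directory is the fix for the “search permissions” part of the error above):

chmod 711 /home/ec2-user
sudo usermod -a -G apache ec2-user
sudo chown -R ec2-user:apache /home/ec2-user/my-website
sudo chmod 2775 /home/ec2-user/my-website
find /home/ec2-user/my-website -type d -exec sudo chmod 2775 {} \;
find /home/ec2-user/my-website -type f -exec sudo chmod 0664 {} \;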
If you’re a non-technical person who is part of a marketing team working for a company that depends a lot on a website, chances are you will often need to ask a team of web developers to make website updates for you. Your particular website may not be easily updated using a content management system (CMS), and even if it could, many non-technical people would rather just send an email to request their website changes. Asking developers to update a website is fine, but only if the update requests are clear. Otherwise, the requestors risk wasting their time and other people’s as well. Unfortunately, the reality is most people don’t know how to clearly communicate their change requests. There are many website annotation tools that claim to be able to simplify the communication process, but in real-world situations, I haven’t found any that were good enough. Plus, adding a new tool requires learning something new, which many people are unwilling to do or don’t have time for.
In this post, I’ll share one approach that non-technical people can use to easily and clearly communicate website change requests to minimize misunderstandings, delays, and lots of back-and-forth messages. And since most people already know and are comfortable using MS Word or Google Docs, this approach only requires a word processor.
Since a picture is worth a thousand words, it’ll be a lot easier to show a screenshot of a section of a web page rather than try to explain the section using words. And since you may want to move some sections around, it’s helpful to number each section. And since you may collaborate with other people in requesting website changes, we’ll use MS Word or Google Docs for our change requests. I’m going to use Google Docs because I find it easier to use.
Create a new Google doc
Give it a name like “Adobe Premier Product Page Changes”.
Change the page margins to 0.25″ on all sides
Under View, uncheck “Show print layout” if it is checked.
At the top of the doc, put the URL to the page, e.g. https://www.adobe.com/products/premiere.html
Insert a table containing 3 columns and 20 rows.
In row 1, cell 1, enter “#”
In row 1, cell 2, enter “SECTION”
In row 1, cell 3, enter “CHANGES”
In the first column, enter consecutive numbers starting from 1, one per cell, and make the column just wide enough for the numbers.
Take a screenshot of each section of the page and paste each one into the middle column.
In the right column, describe your change request.
Many websites include PDF files. These PDFs are usually much larger than other file types and can take up a lot of space. You may want to keep binary files like images and PDFs together with your HTML, CSS, and JS text files and put them all in version control, like GitHub, but there are downsides to this:
Git version control is designed for text files, not binary files. Even though you can use Git LFS to version your binary files, there are simpler, better alternatives.
Website images are better served from an image CDN like Cloudinary or ImageKit. These services will automatically and quickly optimize images on the fly.
PDF files are better served from a CDN. Amazon AWS S3 can be used to store your PDFs with versioning and AWS CloudFront can serve those PDFs from a CDN. With CloudFront, you can also write a function to redirect one PDF file to another in case you need to delete a file.
The steps below describe how to set up AWS S3 and CloudFront to host PDFs and to set up redirects.
Note: you can also create redirects using AWS Lambda@Edge functions (launched in 2017), but they are more complicated and cost six times as much as CloudFront Functions (launched in 2021).
1. Create an S3 bucket
Log in to the AWS console, go to S3, and click “Create bucket”. Choose a bucket name like “pdfs” (bucket names must be globally unique, so yours will need to be more distinctive).
Since you want people to be able to access the PDFs, uncheck “Block all public access” and check “I acknowledge that the current settings might result in this bucket and the objects within becoming public.”
If you want, click the radio button that enables versioning
Leave the other options at their defaults, if you want, and then click the “Create bucket” button.
2. Upload PDFs
You can drag and drop your PDFs to upload them. If you have many PDFs, like thousands, then it’s better to use the AWS CLI S3 Sync command.
As a test, I just uploaded 2 PDF files.
3. Create a CloudFront Distribution
In the AWS console, go to CloudFront and click “Create Distribution”. For “Origin domain”, choose the Amazon S3 bucket you created in step 1.
For the viewer protocol policy, choose “Redirect HTTP to HTTPS” since that’s a good policy IMO.
Leave all other options at their defaults, if you want, and click the “Create Distribution” button.
Now, the PDF files in your S3 bucket will be available in a CDN at the CloudFront domain provided, e.g. d2a5k3j4u1zr32.cloudfront.net/test-pdf-1.pdf
4. Create a CloudFront Function to Redirect Requests
Click on the distribution and then click on “Functions” in the left sidebar.
Click the “Create Function” button and enter a name for the function, e.g. “Redirects”.
You will see 3 tabs: Build, Test, and Publish.
In the “Build” tab, enter the code below and customize as needed.
Note that there is a 10 KB limit on the size of your CloudFront function.
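Here’s a minimal sketch of such a function (note that CloudFront Functions run a restricted JavaScript runtime, not Node.js; the one redirect rule shown matches the test in the next step, so replace the map with your own rules):

var redirects = {
  '/test-pdf-2.pdf': 'https://www.google.com'
};

function handler(event) {
  var request = event.request;
  var target = redirects[request.uri];
  if (target) {
    // Return a redirect response instead of forwarding the request to S3
    return {
      statusCode: 301,
      statusDescription: 'Moved Permanently',
      headers: { location: { value: target } }
    };
  }
  // No rule matched: let the request through unchanged
  return request;
}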
Click the “Save Changes” button and then click the “Test” tab. You will see a field labeled “URL Path” with a default value of “/index.html”.
Since we don’t have a redirect rule for that URL path, we don’t expect any redirection to happen. Click the “Test Function” button. You will see output indicating that the response URI is “/index.html”, as expected.
Now, change the URL path to one you have a redirect for. In my example code, I am redirecting “/test-pdf-2.pdf” to “https://www.google.com”. Click the “Test Function” button. The output shows “https://www.google.com”.
Now, publish the CloudFront function. Click the “Publish” tab, then click the “Publish Function” button.
Click “Add Association” to associate the function to your distribution. Choose your distribution in the Distribution field. Leave Event Type as “Viewer Request” and ignore Cache behavior. Click the “Add association” button.
Note that you can only have one CloudFront function for a given cache behavior and event type.
Wait for the function to be deployed. Go back to the function list page and check the status column. It will say “Updating” for a few minutes.
Wait a few minutes. Reload the page. The status should change to “Deployed”.
Now, test out the redirect in production by going to the CloudFront URL of a path you have a redirect for. You should see the redirect work.
Using Lambda Functions
Make sure the region is set to us-east-1 (Lambda@Edge functions must be created in us-east-1).
Go to the Lambda page and click “Create function”.
Enter a name for your function.
Under “Execution Role”, choose “Create a new role from AWS policy templates”
Enter a role name
Under “Policy Templates”, choose “Basic Lambda@Edge permissions (for CloudFront trigger)”. This is IMPORTANT. Do NOT choose “Create a new role with basic Lambda permissions”.
In the “Code” tab, enter the redirect code below and then click File > Save.
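Here’s a minimal sketch of redirect code for Lambda@Edge (it assumes the “Origin Response” trigger we’ll choose below, so it receives both the request and the response; the redirect map is an example to customize):

'use strict';

// Example redirect map: customize to your own paths
const redirects = {
  '/test-pdf-2.pdf': 'https://www.google.com'
};

exports.handler = (event, context, callback) => {
  const { request, response } = event.Records[0].cf;
  const target = redirects[request.uri];
  if (target) {
    // Replace the origin's response with a 301 redirect
    callback(null, {
      status: '301',
      statusDescription: 'Moved Permanently',
      headers: {
        location: [{ key: 'Location', value: target }]
      }
    });
    return;
  }
  // No rule matched: return the origin's response unchanged
  callback(null, response);
};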
In order to test your code, you must deploy it first. Click the Deploy button.
Test your code by clicking the “Test” tab
Choose “Create new event”
Enter a name for the test
Replace the event JSON with relevant test data, e.g.
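a stripped-down origin-response event like this sketch (only the URI matters to the code above):

{
  "Records": [
    {
      "cf": {
        "config": { "eventType": "origin-response" },
        "request": {
          "uri": "/test-pdf-2.pdf",
          "method": "GET",
          "querystring": "",
          "headers": {}
        },
        "response": {
          "status": "200",
          "statusDescription": "OK",
          "headers": {}
        }
      }
    }
  ]
}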
Click the “Save” button and then the “Test” button.
You will either see an error or a success response similar to what’s shown below.
Under “Actions”, click “Deploy to Lambda@Edge”. This will deploy the Lambda function to the CloudFront edge network.
Choose your CloudFront distribution from the dropdown list.
For CloudFront event, choose “Origin Response”.
A green banner will state that the function is being replicated and that replication will take a few minutes to complete.
Go to the CloudFront distribution. You’ll see the status “Deploying”. Wait till it changes to a date/time indicating the deployment has completed.
Invalidate the CloudFront cache for all objects using /*
When the trigger is created, it will create a new Lambda function version. Click on the “Versions” tab and then click the version number to see that the trigger is saved in the version.
You will then see the CloudFront trigger in the diagram and other saved details.
Test the redirect using the cURL command.
If you need to remove a Lambda function from a CloudFront distribution:
go to the distribution
click “Behaviors”
choose a behavior and click “Edit”
Scroll down to “Function Association” and select “No association” for the function type
Click “Save changes”
Invalidate the CloudFront cache using /*
Put Redirect Data in an External JSON File
The instructions above work, but whenever you want to update the redirects, you have to edit the Lambda JavaScript function and redeploy it to the CloudFront edge. The deployment process takes about 5 minutes. To improve this process, we can move the redirect data to a JSON file in an S3 bucket. Then, you can just upload an updated JSON file, overwriting the existing file, and the updated redirects will work immediately. Here’s how to do that.
Create a JSON file containing all redirects like the following and upload it to S3.
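For example (the file name and paths are hypothetical; I’ll refer to it as redirects.json below):

{
  "/test-pdf-2.pdf": "https://www.google.com",
  "/old-brochure.pdf": "https://example.com/new-brochure.pdf"
}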
Add permissions to the lambda function to have read access to S3. Go to Lambda > Functions > and click on the function name. Then, go to Configuration > Permissions > Execution Role > and click on the role name.
A new tab containing the role’s permission will open. Under “Permissions policies”, click on the policy name.
That will open a new tab showing the permissions defined in the policy. Click the Edit button.
A new table will open showing the existing permission. Add the following S3 permissions. Replace “mybucket” with the name of the S3 bucket where you put the JSON redirect file.
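A sketch of the statement to add:

{
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::mybucket/*"
}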
Click “Deploy” so you can test the lambda function. Once you verify it is working, go to Actions > Deploy to Lambda Edge. Follow the remaining steps as shown above.
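For completeness, here’s a sketch of the updated handler that reads redirects.json from S3 instead of using a hardcoded map (the bucket and key names are placeholders; it uses the aws-sdk v2 client bundled with older Node.js Lambda runtimes, and it fetches on every invocation so JSON updates apply immediately at the cost of a little latency):

'use strict';

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const { request, response } = event.Records[0].cf;

  // Fetch the redirect map from S3 on every invocation
  const obj = await s3
    .getObject({ Bucket: 'mybucket', Key: 'redirects.json' })
    .promise();
  const redirects = JSON.parse(obj.Body.toString('utf-8'));

  const target = redirects[request.uri];
  if (target) {
    return {
      status: '301',
      statusDescription: 'Moved Permanently',
      headers: { location: [{ key: 'Location', value: target }] }
    };
  }
  return response;
};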
Adding UTM parameters to links is useful for tracking marketing efforts, e.g. if you have a banner or an email with links to a landing page, you’ll want to know which method (banner or email) generated the most page visits and form fills. Google has a campaign URL builder that will generate URLs with UTMs for you. In Google Analytics, you can find pageviews to the landing page by UTM parameter. However, if you want to track any subsequent pages after the landing page, then you’ll need a way to pass the UTMs along to the subsequent pages. In my particular situation, I needed to pass UTMs to a 3rd-party site. The visitor flow would be like this
Click a banner on the home page of example.com. The banner has UTMs in the query string, e.g. example.com/landing-page?utm_source=home-page-banner
Land on an overview page on example.com, e.g. example.com/landing-page
Maybe visit other pages on example.com
Return to example.com/landing-page
Click a link to register for something on a 3rd-party site, e.g. foo.com/register
By default, only the first pageview of example.com/landing-page would include UTMs in the URL. To pass the UTMs to the link to the 3rd-party site, something extra needed to be done. I chose the following approach, which works well.
Write JavaScript code that runs on all pages.
If a URL contains UTM params, save the UTM name/value pairs as session cookies, overwriting any existing UTM cookies.
If a page has any <a> tags with the class “appendUTM”, then rewrite the href value by appending the UTM params.
I then added the class “appendUTM” to any links where I wanted to append the UTMs. In my case, it was the links to the 3rd-party registration site.
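A minimal sketch of such a script (assuming it runs after the DOM is ready):

(function () {
  var UTM_KEYS = ['utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content'];
  var params = new URLSearchParams(window.location.search);

  // 1. Save any UTM params as session cookies, overwriting existing ones
  UTM_KEYS.forEach(function (key) {
    if (params.has(key)) {
      document.cookie = key + '=' + encodeURIComponent(params.get(key)) + '; path=/';
    }
  });

  // 2. Read the UTM cookies back into query-string pairs
  var pairs = [];
  UTM_KEYS.forEach(function (key) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + key + '=([^;]*)'));
    if (match) pairs.push(key + '=' + match[1]);
  });
  if (!pairs.length) return;

  // 3. Append the UTMs to any link tagged with the appendUTM class
  document.querySelectorAll('a.appendUTM').forEach(function (a) {
    var sep = a.href.indexOf('?') === -1 ? '?' : '&';
    a.href += sep + pairs.join('&');
  });
})();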
I recently had to move 35,000+ website images from Git to AWS S3. The images were in many subfolders. First, I had to separate the images from all other files. Then, when I tried dragging and dropping the parent folder containing all the images into the AWS S3 web interface, the upload was going to take 9 to 17 hours.
When I woke up in the morning, I found that the upload had completed with errors.
Here’s how I easily separated the images from all other files and successfully uploaded all 35,000+ images.
Separate images from other files
First, I wanted to see a list of all unique file extensions so I could know what image file extensions were being used.
find . -type f | sed 's|.*\.||' | sort -u
This returned a list like the one below.
JPG PNG ali bmp brs cnd CSS ...
Then, I copied the website root folder to a new sibling folder called “website-images”, where I’d keep just the images.
Then, I deleted all images from the “website” folder using the following command.
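The command was along these lines (using the image extensions from the list above; extend the list to cover all of yours):

find . -type f \( -iname '*.jpg' -o -iname '*.png' -o -iname '*.bmp' \) -delete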
As mentioned earlier, uploading 35,000 images to S3 using the web interface took a long time and kept completing with errors. What ended up working was uploading the images using the AWS CLI. Here’s how I did it.
I had to create an access key to authenticate. I created a new Identity and Access Management (IAM) user and then clicked the “Create access key” button to generate a new key.
I then saved those key values as environment variables. Here are the instructions. I basically ran the following commands in the terminal, replacing the values with my actual values.
For the default region, I chose the region for my S3 bucket.
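The commands look like this (all values shown are placeholders):

export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-access-key
export AWS_DEFAULT_REGION=us-east-1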
Upload (sync) files
I then uploaded (synced) files from my local to my remote S3 bucket. Here’s the documentation for the S3 sync command. Since I had already uploaded some files, I was hoping to find a flag to skip uploading files that exist at the destination. It turns out that the “sync” command does this by default. I ran the following command in dry-run mode to verify the output was correct.
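In dry-run mode, the command just prints what it would upload without transferring anything:

aws s3 sync . s3://q-website-images/docs/ --dryrun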
Then, I reran the command without the dry-run flag.
aws s3 sync . s3://q-website-images/docs/
The command output a list of the files it uploaded.
When it was done, I tried rerunning the command only to find that it completed with no output, indicating that all source files already existed in the destination. That was a sign that the sync was complete. Looking at the number of files in the S3 web console, I could see the correct number of files listed there.
Now that the images are in S3, I’ll use S3 as the origin for an image CDN (ImageKit). ImageKit will auto-optimize the images.
Google Analytics version 4 (GA4) is quite different from the previous version, called Universal Analytics (UA). GA4 is event-based, and the UI is quite different. Suppose you’ve got a link with UTM parameters like the hypothetical one below (its values line up with the filters used later in this post).
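https://www.example.com/free-trial/?utm_source=newsletter&utm_medium=marketing-email&utm_campaign=Feb+2023+Campaign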
In GA4, if you go to Reports > Engagement > Pages and screens, you will see stats like pageviews for many pages. You can filter to just one page, like a free trial page, by entering the page’s path in the search field, e.g. “/free-trial/”. You can then add a secondary dimension for source and medium. What you’ll end up with is something like this.
This may not include the source and medium from your UTM parameters. A better way to get the traffic report based on a specific source, medium, or campaign name is by going to Explorations.
Here, you can create a new exploration. In the left “Variables” column
give the exploration a name like “Feb 2023 Campaign”
add some dimensions like
Page path and screen class
Session campaign
Session source / medium
add some metrics like “Views” and “Sessions”
In the middle “Settings” column,
drag some or all dimensions from the left column to the “Rows” field
drag some or all metrics from the left column to the “Values” field
add some filters like
Session source / medium contains “market”
Session campaign contains “Feb 2023 Campaign”
You will then see the report on the right.
Here’s the mapping between UTM query parameter and UTM dimension in GA4.
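To the best of my knowledge, the session-scoped mapping is:
utm_source → Session source
utm_medium → Session medium
utm_campaign → Session campaign
utm_term → Session manual term
utm_content → Session manual ad content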
To find the number of clicks on a link with a UTM, go to
Reports > Acquisition > Traffic acquisition
In the primary dimension, choose session source or session medium or session campaign
In the Search field, enter a value for the session source or session medium or session campaign
Choose a date range
Scroll to the right and under “Event count”, choose “click”.
I’m currently migrating a large website from Handlebars to Nunjucks. Since the website is being updated daily and there are too many pages, I can’t convert the Handlebars syntax to Nunjucks syntax manually. To solve this, I started writing a script to convert the syntax programmatically using JavaScript (Node.js). So far, it’s working very well. Here’s how I’m doing it, and how you can do something similar when confronted with a migration project.
Basically, the way it works is
it recursively finds all files in a folder called “temp”
if the file path ends with “hbs” (indicating it is a Handlebars file), then for each such file it executes a series of regex search-and-replace commands (see the sketch after this list), e.g.
replace {{#if class}} with {% if class %}
replace {{/if}} with {% endif %}
and so on.
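Here’s a condensed sketch of the script (the folder name and the two rules match the examples above; a real migration will accumulate many more rules):

const fs = require('fs');
const path = require('path');

// Ordered list of [pattern, replacement] regex rules
const rules = [
  [/\{\{#if (\w+)\}\}/g, '{% if $1 %}'],
  [/\{\{\/if\}\}/g, '{% endif %}']
];

function walk(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      walk(fullPath); // recurse into subfolders
    } else if (fullPath.endsWith('hbs')) {
      let src = fs.readFileSync(fullPath, 'utf-8');
      for (const [pattern, replacement] of rules) {
        src = src.replace(pattern, replacement);
      }
      fs.writeFileSync(fullPath, src);
    }
  }
}

walk('temp');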
Those are simple search-and-replace situations. There may be a situation where you’ll need an advanced search and replace, e.g. when replacing
{{> social-list
dark="true"
centered="true"}}
with
{% set dark="true" %}
{% set centered="true" %}
{% include "social-list.njk" %}
In this case, you can use a “replacer” function, which allows you to do much more to manipulate the output.
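For example, here’s a sketch of a replacer that turns the partial call above into set statements plus an include (it assumes the key="value" parameter style shown; you’d call it alongside the simple rules):

function convertPartials(src) {
  return src.replace(/\{\{>\s*([\w-]+)([\s\S]*?)\}\}/g, (match, name, params) => {
    // Turn each key="value" pair into a Nunjucks set statement
    const sets = [...params.matchAll(/([\w-]+)="([^"]*)"/g)]
      .map(([, key, value]) => `{% set ${key}="${value}" %}`)
      .join('\n');
    return `${sets}\n{% include "${name}.njk" %}`;
  });
}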
When you’re all done and you’ve built the HTML files from both the Handlebars templates and the Nunjucks templates, you can write a script that recursively reads all the HTML files in each build output folder and lists each file path along with its file size. The sizes from the two builds should be the same or almost the same. If some are not, then the migration script didn’t convert those templates correctly. Maybe something like:
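Here’s a sketch (it assumes the two builds land in sibling folders with identical relative paths; the folder names are hypothetical):

const fs = require('fs');
const path = require('path');

const HBS_DIR = 'build-hbs'; // hypothetical Handlebars build output
const NJK_DIR = 'build-njk'; // hypothetical Nunjucks build output

// Recursively collect the relative paths of all HTML files under a folder
function htmlFiles(dir, base = dir) {
  let results = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      results = results.concat(htmlFiles(fullPath, base));
    } else if (entry.name.endsWith('.html')) {
      results.push(path.relative(base, fullPath));
    }
  }
  return results;
}

// Report any page whose two builds differ in size
for (const relPath of htmlFiles(HBS_DIR)) {
  const hbsSize = fs.statSync(path.join(HBS_DIR, relPath)).size;
  const njkPath = path.join(NJK_DIR, relPath);
  const njkSize = fs.existsSync(njkPath) ? fs.statSync(njkPath).size : -1;
  if (njkSize !== hbsSize) {
    console.log(`${relPath}: hbs=${hbsSize} njk=${njkSize}`);
  }
}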
With so many people working both from home and at the office, it can be annoying to have to rearrange your application windows whenever you move between the two locations. This is especially true for people like me who need multiple monitors; two of mine are 32″ 4K displays, and I spread multiple windows across each screen.
Though I have a similar setup at home, my application windows always get jumbled up when I move between locations, possibly because the standalone monitors are not all the same brand with the same exact resolution.
Most window management apps allow you to move and resize windows in a grid, e.g.
left 50% of screen,
bottom 50% of screen,
right 33% of screen,
top 50%, left 50% of screen,
etc
These are fine if you aren’t going to move locations often and don’t have too many windows. If you want the same layout spanning multiple monitors and the ability to instantly move and resize all windows to that layout, then I recommend Moom. Here’s how to use Moom to save layouts for multiple monitor configurations.
At location 1, e.g. work, open your applications and arrange them how you like
Open Moom and create a custom preset with the following settings
Type: Arrange Windows
Name: I put “3 Monitors – Work”
Uncheck all checkboxes
Click “Update Snapshot”
This saves the layout as a preset. To test it, resize and move all your windows around. Then, hover over the green dot in any one window and click on the preset. All windows will instantly move to how you had them.
When you’re at home, you can create another preset and call it something like “3 Monitors – Home”. Now, you no longer have to mess around with moving windows around. Just click on a preset from any open window and get back to business.
Moom has a one-time cost of $10, and it’s well worth it.