I recently had to move 35,000+ website images from Git to AWS S3. The images were spread across many subfolders. First, I had to separate the images from all other files. Then, when I tried dragging and dropping the parent folder containing all the images into the AWS S3 web interface, the estimated upload time was 9 to 17 hours.
When I woke up in the morning, I found the upload had completed, but with errors:
Here’s how I easily separated the images from all other files and successfully uploaded all 35,000+ images.
Separate images from other files
First, I wanted to see a list of all unique file extensions so I could know what image file extensions were being used.
find . -type f | sed 's|.*\.||' | sort -u
This returned a list like the one below.
JPG PNG ali bmp brs cnd CSS ...
Then, I copied the website root folder and made a new sibling folder called website-images where I’d just have the images.
Then, I deleted all images from the “website” folder using the following command.
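The exact command isn't preserved here, but it was something along these lines, run from inside the folder, with the extension list based on the unique extensions found earlier:
find . -type f \( -iname "*.jpg" -o -iname "*.png" -o -iname "*.gif" -o -iname "*.bmp" \) -delete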
Upload images using the AWS CLI
As mentioned earlier, uploading 35,000+ images to S3 using the web interface took a long time and kept completing with errors. What ended up working was uploading the images using the AWS CLI. Here's how I did it.
I had to create an access key to authenticate. I created a new Identity and Access Management (IAM) user and then clicked the “Create access key” button to generate a new key.
I then saved those key values as environment variables, following the AWS CLI documentation. I basically ran the following commands in the terminal, replacing the placeholder values with my actual values.
For the default region, I chose the region for my S3 bucket.
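The commands were along these lines (with placeholders instead of my real key values; the region below is just an example):
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=us-west-1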
Upload (sync) files
I then uploaded (synced) files from my local to my remote S3 bucket. Here’s the documentation for the S3 sync command. Since I had already uploaded some files, I was hoping to find a flag to skip uploading files that exist at the destination. It turns out that the “sync” command does this by default. I ran the following command in dry-run mode to verify the output was correct.
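The dry-run version was simply the sync command shown below with the --dryrun flag added:
aws s3 sync . s3://q-website-images/docs/ --dryrun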
Then, I reran the command without the dry-run flag.
aws s3 sync . s3://q-website-images/docs/
The command output a list of the files it uploaded.
When it was done, I tried rerunning the command only to find that it completed with no output, indicating that all source files already existed in the destination. That was a sign that the sync was complete. Looking at the number of files in the S3 web console, I could see the correct number of files listed there.
Now that the images are in S3, I’ll use S3 as the origin for an image CDN (ImageKit). ImageKit will auto-optimize the images.
Google Analytics version 4 (GA4) is quite different from the previous version, called Universal Analytics (UA). GA4 is event-based, and the UI is quite different. If you've got a link with UTM parameters (utm_source, utm_medium, utm_campaign, etc.), here's how to find stats for it in GA4.
In GA4, if you go to Reports > Engagement > Pages and screens, you will see stats like pageviews for many pages. You can then filter to just one page, like a free trial page, by entering the page's path in the search field, e.g. "/free-trial/". You can then add a secondary dimension for source and medium. What you'll end up with will be something like this
This may not include the source and medium values from your UTM parameters. A better way to get a traffic report filtered by a specific source/medium or campaign name is to go to Explorations.
Here, you can create a new exploration. In the left “Variables” column
give the exploration a name like “Feb 2023 Campaign”
add some dimensions like
Page path and screen class
Session campaign
Session source / medium
add some metrics like “Views” and “Sessions”
In the middle “Settings” column,
drag some or all dimensions from the left column to the “Rows” field
drag some or all metrics from the left column to the “Values” field
add some filters like
Session source / medium contains market
Session campaign contains “Feb 2023 Campaign”
You will then see the report on the right.
Here’s the mapping between UTM query parameter and UTM dimension in GA4.
To find the number of clicks on a link with a UTM, go to
Reports > Acquisition > Traffic acquisition
In the primary dimension, choose Session source, Session medium, or Session campaign
In the Search field, enter a value for the session source, medium, or campaign
Choose a date range
Scroll to the right and under “Event count”, choose “click”.
I'm currently migrating a large website from Handlebars to Nunjucks. Since the website is updated daily and there are far too many pages, I can't convert the Handlebars syntax to Nunjucks syntax manually. To solve this, I started writing a script to convert the syntax programmatically using JavaScript (Node.js). So far, it's working very well. Here's how I'm doing it, and how you can do something similar when confronted with a migration project like this.
Basically, the way it works is
it recursively finds all files in a folder called “temp”
if the file path ends with “hbs” – indicating it is a Handlebars file – then for each file, it executes a series of regex search and replace commands, e.g.
replace {{#if class}} with {% if class %}
replace {{/if}} with {% endif %}
and so on.
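Here's a simplified sketch of that script. The "temp" folder, the "hbs" check, and the two example replacements come from the steps above; everything else is illustrative and would need to be adapted to your own templates.
import fs from "fs";
import path from "path";

// recursively collect all file paths under a folder
function getFiles(dir) {
  return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const fullPath = path.join(dir, entry.name);
    return entry.isDirectory() ? getFiles(fullPath) : [fullPath];
  });
}

for (const file of getFiles("temp")) {
  // only process Handlebars templates
  if (!file.endsWith("hbs")) continue;

  let content = fs.readFileSync(file, "utf8");

  // simple regex search-and-replace rules
  content = content.replace(/{{#if (.*?)}}/g, "{% if $1 %}"); // {{#if class}} -> {% if class %}
  content = content.replace(/{{\/if}}/g, "{% endif %}");      // {{/if}} -> {% endif %}
  // ... more rules ...

  fs.writeFileSync(file, content);
}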
Those are simple search-and-replace situations. There may be a situation where you’ll need an advanced search and replace, e.g. when replacing
{{> social-list
dark="true"
centered="true"}}
with
{% set dark="true" %}
{% set centered="true" %}
{% include social-list.njk %}
In this case, you can use a “replacer” function, which allows you to do much more to manipulate the output.
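Continuing with the sketch above, where content holds a template's text, a replacer function for the social-list example could look something like this (the regex only handles this particular partial-with-parameters shape):
// convert {{> social-list dark="true" centered="true"}} into
// {% set %} statements followed by an include
content = content.replace(/{{>\s*social-list\s*([\s\S]*?)}}/g, function (match, params) {
  const sets = params
    .trim()
    .split(/\s+(?=\w+=)/) // split the parameter list into individual name="value" pairs
    .filter(Boolean)
    .map((param) => `{% set ${param} %}`)
    .join("\n");
  return `${sets}\n{% include "social-list.njk" %}`;
});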
When you’re all done and you’ve built the HTML files from both the handlebars templates and the nunjucks templates, you can write a script that recursively reads all HTML files in the build output folder and lists each HTML file path generated from each handlebars and nunjucks template along with their respective file size. The file sizes should be the same or almost the same. If some are not, then the migration script didn’t convert those templates correctly. Maybe something like:
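Here's a rough sketch of such a script. It assumes the Handlebars build output is in "build-hbs" and the Nunjucks build output is in "build-njk" with the same folder structure; both folder names are just placeholders.
import fs from "fs";
import path from "path";

// recursively collect all HTML file paths under a folder
function getHtmlFiles(dir) {
  return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) return getHtmlFiles(fullPath);
    return fullPath.endsWith(".html") ? [fullPath] : [];
  });
}

// compare the size of each built page across the two build folders
for (const hbsFile of getHtmlFiles("build-hbs")) {
  const njkFile = hbsFile.replace("build-hbs", "build-njk");
  const hbsSize = fs.statSync(hbsFile).size;
  const njkSize = fs.existsSync(njkFile) ? fs.statSync(njkFile).size : 0;
  console.log(`${hbsFile}\t${hbsSize}\t${njkSize}\t${hbsSize === njkSize ? "OK" : "CHECK"}`);
}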
With so many people working both from home and at the office, it can become annoying to have to rearrange your application windows when you move between the two locations. This is especially true for people like me who use multiple monitors, two of which are 32″ 4K ones as shown below, each displaying multiple windows.
Though I have a similar setup at home, my application windows always get jumbled up when I move between locations, possibly because the standalone monitors are not all the same brand with the same exact resolution.
Most window management apps allow you to move and resize windows in a grid, e.g.
left 50% of screen,
bottom 50% of screen,
right 33% of screen,
top 50%, left 50% of screen,
etc
These are fine if you aren’t going to move locations often and don’t have too many windows. If you want the same layout spanning multiple monitors and the ability to instantly move and resize all windows to that layout, then I recommend Moom. Here’s how to use Moom to save layouts for multiple monitor configurations.
At location 1, e.g. work, open your applications and arrange them how you like
Open Moom and create a custom preset with the following settings
Type: Arrange Windows
Name: I put “3 Monitors – Work”
Uncheck all checkboxes
Click “Update Snapshot”
This saves the layout as a preset. To test it, resize and move all your windows around. Then, hover over the green dot in any one window and click on the preset. All windows will instantly move to how you had them.
When you’re at home, you can create another preset and call it something like “3 Monitors – Home”. Now, you no longer have to mess around with moving windows around. Just click on a preset from any open window and get back to business.
Moom has a one-time cost of $10, but it’s obviously worth it.
In this tutorial, I will explain how we can fetch remote paginated JSON data synchronously (in serial) and asynchronously (in parallel).
Data
You can get test data to fetch from RapidAPI, but I’m going to fetch video data from Vimeo using the Vimeo API.
Fetch Method
There are many ways you can fetch remote data. The RapidAPI website provides code snippets for various languages and fetch methods. For example, for Node.js, there’s HTTP, Request, Unirest, Axios, and Fetch.
Some services like Vimeo provide libraries and SDKs in a few languages like PHP, Python and Node.js. You can use those as well if you’d like.
I'm actually going to use the Got library (available on GitHub), which is a very popular HTTP request library for Node.js.
CommonJS vs ESM
Many of the latest Node packages are now native ESM instead of CommonJS. Therefore, you can’t require modules like this
const got = require('got');
Instead, you must import modules like this
import got from 'got';
According to the Got documentation, you can either convert your project to ESM or use an older version of the got package that still uses CommonJS.
If using ESM, you need to put "type": "module" in your package.json.
Authentication
Many services like Vimeo require authentication in order to use their API. This often involves creating an access token and passing it in the header of the API call like this
GET /tutorial HTTP/1.1
Host: api.vimeo.com
Authorization: bearer {access_token}
Setup
Let’s set up our project. Do the following:
Create a new folder, e.g. test
Open the folder in a code editor (I'm using Visual Studio Code)
Open a terminal (I’m doing it in VS Code)
Initialize a Node project by running npm init -y
This will generate a package.json file in the folder.
Since we’re using ESM and will import modules rather than require them, add the following to the package.json file.
"type": "module"
Call the Vimeo API
Let’s start by calling the Vimeo API just once. Create a new file called get-data-one.js and copy the following contents into it. Replace {user_id} with your Vimeo user ID and {access_token} with your Vimeo access token.
import got from 'got';
let page = 1;
let per_page = 3;
let fields = "privacy,link,release_time,tags,name,description,download";
const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=${per_page}&fields=${fields}`;
const options = {
method: 'GET',
headers: {
'Authorization': 'bearer {access_token}'
}
};
let data = await got(url, options).json();
console.log(data);
We’re importing the got library. For this to work, we need to install the got package. Run the following command.
npm install got
This will download the got package and its dependencies into the node_modules folder.
In the code, the Vimeo endpoint we’re calling is /users/{user_id}/videos, which returns all videos that a user has uploaded. According to the API docs, we can
Specify the page number of the results to show using page
Specify the number of items to show on each page of results, up to a maximum of 100, using per_page
Specify which fields to return using fields
These parameters can be added to the endpoint URL in the query string, which is what we’ve done. However, for this test, we’ll just call one page and return the records (videos). We then call the API using the got library and then dump the results to the console. Let’s run the script and check the output. Run the following command.
node get-data-one.js
As expected, here’s the output.
The output starts with pagination info and the total number of available records (videos) followed by the actual data in the form of an array of video objects. In this case, we see 3 objects because we set per_page to 3.
Let’s update our code to write the output to a file. That will make it easier to read when there’s a lot of data. Add the following code snippets
import fs from "fs";
var stream = fs.createWriteStream("video-data.json",{flags:'w'});
stream.once('open', function(fd) {
stream.write(JSON.stringify(data)+"\n");
stream.end();
});
so the code looks like this:
import fs from "fs";
import got from 'got';
let page = 1;
let per_page = 2;
let fields = "privacy,link,release_time,tags,name,description,download";
const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=${per_page}&fields=${fields}`;
const options = {
method: 'GET',
headers: {
'Authorization': 'bearer {access_token}'
}
};
let data = await got(url, options).json();
console.log(data);
var stream = fs.createWriteStream("video-data.json",{flags:'w'});
stream.once('open', function(fd) {
stream.write(JSON.stringify(data)+"\n");
stream.end();
});
We don’t need to install the fs package because that’s included in Node by default. The stream will write data to a file we’ll call video-data.json and we pass it the “w” flag to overwrite any existing contents of the file.
When we rerun the script, we see the file is created. We can format (prettify) it so it’s easy to read.
Call the Vimeo API Multiple Times in Serial with Pagination
Now, let’s say we want to fetch more data, but the API limits how many records are returned in a single call. In this case, we need to call the API in a loop passing a different page number. Let’s create a new file called get-data-serial.js with the following code.
import fs from "fs";
import got from 'got';
let data = [];
let per_page = 2;
let fields = "privacy,link,release_time,tags,name,description,download";
const options = {
method: 'GET',
headers: {
'Authorization': 'bearer {access_token}'
}
}
for(let page = 1; page <= 3; page++) {
const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=${per_page}&fields=${fields}`;
let somedata = await got(url, options).json();
data.push(somedata);
console.log(page);
};
console.log(data);
var stream = fs.createWriteStream("video-data.json",{flags:'w'});
stream.once('open', function(fd) {
stream.write(JSON.stringify(data)+"\n");
stream.end();
});
Here, I’m using a simple for loop to loop through 3 pages. I also created a data variable as an empty array. With each loop iteration, I push the page’s returned data to the data array. When all is done, I write the data array to a file, which looks like this.
I collapsed the “data” array so we can see that 3 pages of data were returned. We ran this in serial so the order of the output is page 1, page 2, and page 3.
Call the Vimeo API Multiple Times in Parallel with Pagination
Now, let’s do the same thing, but asynchronously (in parallel). Create a new file called get-data-parallel.js with the following code.
import fs from "fs";
import got from 'got';
const options = {
method: 'GET',
headers: {
'Authorization': 'bearer {access_token}'
}
};
let data = [];
let per_page = 2;
let fields = "privacy,link,release_time,tags,name,description,download";
let pages = [1,2,3];
await Promise.all(pages.map(async (page) => {
const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=2&fields=privacy,link,release_time,tags,name,description,download`;
let somedata = await got(url, options).json();
data.push(somedata);
console.log(page);
}));
console.log(data);
var stream = fs.createWriteStream("video-data-parallel.json",{flags:'w'});
stream.once('open', function(fd) {
stream.write(JSON.stringify(data)+"\n");
stream.end();
});
In this case, instead of a for loop, we’re using Promise.all and passing to it an array of page numbers that we loop over using the map function. When we run the script, we get output like the following:
You’ll notice 2 things:
the script runs faster because the API calls are done simultaneously in parallel (asynchronously) rather than one after the other in serial (synchronously).
the order of the output is no longer consecutive by page number. In this example, it was page 1, page 3, page 2.
Modifying the JSON Output Structure
As shown in the previous screenshot, the API call returns an object containing pagination info followed by a data array – an array of objects containing video info.
What if we just want the data objects and not the pagination info? We can do that by modifying the structure of the JSON output. We can replace
data.push(somedata);
with
data.push(somedata.data);
but then the output becomes an array of arrays.
To fix this, let’s flatten the array by adding the following code:
data = data.flat(1);
right before we console it out and write to file.
Now, the output file looks like this (each record is collapsed for visibility).
Filtering Out Certain Records
What if we want to filter out certain records, e.g. we want to filter out all videos that are not public, i.e. we only want videos where privacy.view = "anybody"? We can use the filter function to do that, like this:
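// only include videos that are public
somedata = somedata.filter(video => video.privacy.view === "anybody");
(This is the same filter used in the complete script at the end of this post.)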
Each video record can contain a lot of information, including information we don’t need. For example, the privacy object contains 5 keys.
If we want to return just one privacy key, say “view”, then we can do so using the map function as follows:
// simplify privacy object to just privacy.view
somedata = somedata.map(function (video) {
video.privacy = video.privacy.view;
return video;
});
For each video record, the “download” field is an array of objects, one for each available rendition (resolution), e.g.
If we only want to, say, return “hd” videos and only the download links, we can use two map functions like this:
// only include videos that are HD and only return HD video download links
somedata = somedata.map(function (video) {
let download = [];
video.download.map(function (size) {
if (size.quality === "hd") {
download.push({
rendition: size.rendition,
link: size.link
})
}
});
if (download.length !== 0) {
video.download = download;
return video;
}
});
// the map above leaves undefined entries for videos with no HD rendition, so remove them
somedata = somedata.filter(function (video) {
return video !== undefined;
});
Now, the downloads array is simplified, like this:
The “categories” field is an array of objects with a lot of data, including objects and arrays of objects.
What if we want to simplify that to just an array of category names? We can do that like this:
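// simplify categories array of objects to just an array of category names
somedata = somedata.map(function (video) {
  let categories = [];
  video.categories.map(function (category) {
    categories.push(category.name);
  });
  video.categories = categories;
  return video;
});
(The same logic appears in the complete script below, with an extra undefined check because of the HD filtering step that precedes it.)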
For reference, here’s the complete code for get-data-serial.js. The page limit and per_page values can be updated depending on how many results you want.
import fs from "fs";
import got from 'got';
let data = [];
let per_page = 2;
let fields = "privacy,link,release_time,tags,name,description,download,categories";
const options = {
method: 'GET',
headers: {
'Authorization': 'bearer {access_token}'
}
}
for(let page = 1; page <= 3; page++) {
const url = `https://api.vimeo.com/users/{user_id}/videos?page=${page}&per_page=${per_page}&fields=${fields}`;
let somedata = await got(url, options).json();
somedata = somedata.data;
// only include videos that are public
somedata = somedata.filter(video => video.privacy.view === "anybody" );
// only include videos that aren't in the "Educational" category
somedata = somedata.filter(function (video, index, arr) {
let isEducational = false;
video.categories.filter(function (category, index, arr) {
if (category.name === "Educational") {
isEducational = true;
}
});
if (isEducational === false) {
return video;
}
});
// simplify privacy object to just privacy.view
somedata = somedata.map(function (video) {
video.privacy = video.privacy.view;
return video;
});
// only include videos that are HD and only return HD video download links
somedata = somedata.map(function (video) {
let download = [];
video.download.map(function (size) {
if (size.quality === "hd") {
download.push({
rendition: size.rendition,
link: size.link
})
}
});
if (download.length !== 0) {
video.download = download;
return video;
}
});
// remove the undefined entries left for videos with no HD rendition
somedata = somedata.filter(function (video) {
return video !== undefined;
});
// simplify categories array of objects to just an array of category names
somedata = somedata.map(function (video) {
let categories = [];
if (video !== undefined) {
video.categories.map(function (category) {
categories.push(category.name);
});
video.categories = categories;
return video;
}
});
data.push(somedata);
console.log(page);
};
data = data.flat(1);
console.log(data);
var stream = fs.createWriteStream("video-data.json",{flags:'w'});
stream.once('open', function(fd) {
stream.write(JSON.stringify(data)+"\n");
stream.end();
});
Most websites contain the same or very similar layouts on multiple pages, e.g. header and footer. There also might be a few different hero section designs and a few different CTA section designs. Imagine having 10 product pages, each containing three 2-column sections with a text description in the left column and a screenshot in the right column. Each of these product pages may also have a CTA section with the same design but slightly different text and links.
It's common to put shared CSS in a shared CSS file, e.g. shared.css, common.css, or global.css. This especially makes sense for the header and footer, which are usually shown on all pages. But over time, that shared CSS file can become very long because you may have a lot of CSS for many different common sections. This can make it difficult and dangerous to edit code for just one particular section. It can also make it very difficult to copy a section from one page to another. If the HTML, CSS, and JS for the section aren't isolated, you may not copy all the necessary code, not to mention you could end up with inconsistencies between two or more sections that should have the same design.
Consolidating all CSS into the fewest files possible is good for website performance (the fewer files, the fewer network requests), but nowadays, it’s common for websites, including simple static websites, to go through an automated build process to optimize the files before publishing them. The build process can do many things like minify and combine multiple CSS and JS files into single CSS and JS files, add prefixes to CSS using tools like PostCSS auto-prefixer, etc.
Following is one simple approach to grouping HTML, CSS, and JS by website section. This approach can also be used for any part of a website, like blocks within a section, but to keep things simple, we'll just look at sections, which I define as horizontal rows of related content, e.g. a hero section, a row of product features, or a CTA section.
In the src (source) folder, I’m using Nunjucks (njk) files instead of HTML files so they can include logic and pull in the components (partials). When the source files are processed, the built files show up in the “build” folder. For the home page source file (index.njk), the structure of the code could be like this
<html>
<head>
{% include "/src/components/header/header.css" %}
{% include "/src/components/footer/footer.css" %}
{% include "/src/index.css" %}
</head>
<body>
{% include "/src/components/header/header.njk" %}
... some HTML ...
{% include "/src/components/footer/footer.njk" %}
{% include "/src/components/header/header.js" %}
{% include "/src/components/footer/footer.js" %}
{% include "/src/index.js" %}
</body>
</html>
Note that the home page has its own CSS and JS files for elements that are not part of a component. When this file is built, the CSS and JS files will be combined (Netlify can do this automatically) and the included header and footer njk references will be replaced with their contents, e.g.
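For example, the built home page would end up looking something like this (the header and footer markup here is just a placeholder for whatever those components actually contain):
<html>
<head>
<style>/* contents of header.css, footer.css, and index.css */</style>
</head>
<body>
<header>... header markup from header.njk ...</header>
... some HTML ...
<footer>... footer markup from footer.njk ...</footer>
<script>/* contents of header.js, footer.js, and index.js */</script>
</body>
</html>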
Here’s another example. For product page 1 (product1/index.njk), the file contents may look like this
<html>
<head>
{% include "/src/components/header/header.css" %}
{% include "/src/components/section1/section1.css" %}
{% include "/src/components/section4/section4.css" %}
{% include "/src/components/footer/footer.css" %}
{% include "/src/product1/index.css" %}
</head>
<body>
{% include "/src/components/header/header.njk" %}
{% set title = "Product 1" %}
{% set heroImage = "product1.jpg" %}
{% include "/src/components/section1/section1.njk" %}
... some HTML ...
{% set text = "Try Product 1 Now" %}
{% set link = "/product1/free-trial/" %}
{% include "/src/components/section4/section4.njk" %}
{% include "/src/components/footer/footer.njk" %}
{% include "/src/components/header/header.js" %}
{% include "/src/components/section1/section1.js" %}
{% include "/src/components/section4/section4.js" %}
{% include "/src/components/footer/footer.js" %}
{% include "/src/product1/index.js" %}
</body>
</html>
In the code example above, we're passing some variables into the section1 and section4 components. That allows us to reuse a component's layout and design while changing its content. Since product pages usually look very similar, the code for product2/index.njk might look like this
<html>
<head>
{% include "/src/components/header/header.css" %}
{% include "/src/components/section1/section1.css" %}
{% include "/src/components/section4/section4.css" %}
{% include "/src/components/footer/footer.css" %}
{% include "/src/product2/index.css" %}
</head>
<body>
{% include "/src/components/header/header.njk" %}
{% set title = "Product 2" %}
{% set heroImage = "product2.jpg" %}
{% include "/src/components/section1/section1.njk" %}
... some HTML ...
{% set text = "Try Product 2 Now" %}
{% set link = "/product2/free-trial/" %}
{% include "/src/components/section4/section4.njk" %}
{% include "/src/components/footer/footer.njk" %}
{% include "/src/components/header/header.js" %}
{% include "/src/components/section1/section1.js" %}
{% include "/src/components/section4/section4.js" %}
{% include "/src/components/footer/footer.js" %}
{% include "/src/product2/index.js" %}
</body>
</html>
I reused the components but changed the value of the variables that are referenced in the components.
To prevent code conflicts between components, you can give the root element of each component an ID. For example,
section1.njk
<div id="section1">
... some HTML ...
</div>
section2.njk
<div id="section2">
... some HTML ...
</div>
Then, in the component’s CSS, to prevent CSS conflicts, you can prefix all rules like this
section1.css
#section1 .intro {
... some CSS ...
}
#section1 .features {
... some CSS ...
}
section2.css
#section2 .intro {
... some CSS ...
}
#section2 .features {
... some CSS ...
}
Similarly, in each component's JavaScript file, you can scope selectors to the component's ID, e.g.
section1.js
$("#section1 .intro")...
section2.js
$("#section2 .intro")...
Another benefit of this approach is you can create a page showing a preview of all components you have. When you want to create a new page, you can browse the list of component previews to see if you can reuse an existing component or decide if you need to create a new component.
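For example, a components preview page could be as simple as a Nunjucks file that includes each component with sample values (the file path and values below are just an example):
<!-- src/components-preview/index.njk -->
{% include "/src/components/header/header.njk" %}

{% set title = "Sample title" %}
{% set heroImage = "sample.jpg" %}
{% include "/src/components/section1/section1.njk" %}

{% set text = "Sample CTA text" %}
{% set link = "/sample/" %}
{% include "/src/components/section4/section4.njk" %}

{% include "/src/components/footer/footer.njk" %}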
Cherry-picking individual commits to merge them from one git branch to another is a very common practice. For me, I usually cherry-pick one or more commits from a develop branch to a master / main branch. The problem, however, is when I need to cherry-pick multiple commits made over a long period of time, like one month. Most git clients don’t make it easy to filter commits by keyword, and if they do, then they don’t let you easily select all filtered commits. So, back to the command line it is.
This post will describe one way to easily find all commits matching one or more keywords and extract their commit IDs on one line so you can quickly run the git cherry-pick command.
Step 1: Find and review filtered commits
We usually prefix our commit messages with an identifier so we can filter them amongst many other commits. For example, all commits related to one project may have a commit message prefix of "LP:8929" where, in this case, "LP" stands for Landing Page. Sometimes, we might accidentally use an equals sign (=) instead of a colon (:). So, we'll have one or more keywords to search commit messages with when we filter all commits.
Furthermore, we usually cherry-pick commits from a develop branch to the master branch while we're on master. Here's what each part of the following command does:
git log: get the git log
master..develop: as we're on the master branch, get the log from the develop branch
--oneline: format the output on one line per commit (short format)
--grep='LP:8929' --grep='LP=8929': filter results to only those containing either 'LP:8929' or 'LP=8929'
-i: make the search case-insensitive
git log master..develop --oneline --grep='LP:8929' --grep='LP=8929' -i
48b1cbcef51 LP:8929 - Fixed images path by using imagekit variable
8efa19678b3 LP:8929 - Fixed text alignment
3ab0954c725 LP:8929 - Updated banner and VS page on the landing page
bba58a1c5eb LP:8929 - Updated main heading of features section
fb56da3b134 LP:8929 - Updated content and created carousel as per the requirement
c7c6d50fbdb LP:8929 - Updated styling of the page
bc8a7fc38a1 LP:8929 - Updated icons of the comparison section
9a90ccb91ad LP:8929 - Updated text styling
a3bc0baf65f LP:8929 - Updated headshots and styling of the page
13977d2eb12 LP:8929 - Updated content as per the doc
7e589debba8 LP:8929 - First Pass: Created Landing page of Tenable v Qualys competitive page
Now that we have our list of commits, we need to review them to make sure they are correct. In our case, we primarily want to make sure the keywords are at the beginning of the commit message since that’s our commit message format.
Step 2: Extract commit IDs
If our list of commits is correct, then we need to extract the commit IDs. However, the git cherry-pick command requires the list of commit IDs to be in chronological order. By default, the git log output is in reverse chronological order. Here’s the updated command with some modifications.
--reverse: reverse the git log output to chronological order
awk '{print $1}': extract only the first column of the output (the commit IDs)
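Putting it all together, the updated command looks like this:
git log master..develop --oneline --reverse --grep='LP:8929' --grep='LP=8929' -i | awk '{print $1}'
If you'd rather have the commit IDs on a single line, you can additionally pipe the output through tr '\n' ' '.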
Now that we have our list of commit IDs in the correct order, we can copy it, type “git cherry-pick” and paste the list to create a command like the following.
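Using the commit IDs from the example above (oldest first), that command would look like this:
git cherry-pick 7e589debba8 13977d2eb12 a3bc0baf65f 9a90ccb91ad bc8a7fc38a1 c7c6d50fbdb fb56da3b134 bba58a1c5eb 3ab0954c725 8efa19678b3 48b1cbcef51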
Svelte is a JavaScript framework similar to React, Vue, etc., but with some fundamental differences that I think make it better. It's also much more intuitive to understand, as the web component structure of the code is dead simple. When building an app using Svelte, you're encouraged to use SvelteKit, the official Svelte application framework. But if you just want a simple frontend-only site/app, you can use Svelte without Kit. In doing so, you can easily take the built CSS and JS files and include them in any existing web page. Then, you just add a simple div
<div id="app"></div>
in your existing HTML file where you want the app to go.
Here’s an example.
Create a Vite Project
Vite is a front-end build tool and dev server. It includes Rollup to bundle and optimize assets. Run npm init vite and follow the prompts. As you can see in the screenshot below, I chose the default project name of “vite-project”.
This will create a folder named after your project and create some files and folders within it.
Then, as the instructions say, run the commands
cd vite-project
npm install
npm run dev
npm install will download the node packages specified in the package.json file into the node_modules folder.
Start Dev Server
npm run dev will launch a dev server. Running the dev script starts a program called Vite. Vite’s job is to take your application’s source files, pass them to other programs (including Svelte, in our case) and convert them into the code that will actually run when you open the application in a browser.
If you go to the local URL http://127.0.0.1:5173/, you will see the starter Svelte app that was created automatically.
The app includes a simple interactive counter. When you click the button, the counter increments.
src Folder
If you look at the src folder, you’ll see a few files:
The assets folder contains static assets, like images
The lib folder contains Svelte components. In this case, there’s the counter component.
app.css is a global CSS file for the entire app
App.svelte is your main app code. In this case, it imports the svelte.svg logo and the Counter component.
<script>
import svelteLogo from './assets/svelte.svg'
import Counter from './lib/Counter.svelte'
</script>
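The Counter component itself (src/lib/Counter.svelte) is roughly like this (quoted from memory of the Vite Svelte starter, so the exact code may differ slightly):
<script>
  let count = 0

  const increment = () => {
    count += 1
  }
</script>

<button on:click={increment}>
  count is {count}
</button>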
main.js is the main JavaScript file. It loads your app’s CSS (app.css) and your app’s main code (App.svelte). It will also load the app in the HTML element with ID app.
import './app.css'
import App from './App.svelte'
const app = new App({
target: document.getElementById('app'),
})
export default app
Build the App for Production
When you’re done developing your app (creating components, etc), you can run a build. This will bundle all CSS and JS files into a single CSS and JS file, optimize the bundle (e.g. minify it), and output the production files in the dist (distribution) folder. Run a build with the following command.
npm run build
You’ll see a dist folder created. In this case, with the following files:
If you open index.html, you’ll see it references the two bundled CSS and JS files along with a div element with ID app where the app will be injected.
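It will look something like this (the hashed asset file names will differ from build to build):
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Vite + Svelte</title>
<script type="module" crossorigin src="/assets/index-a1b2c3d4.js"></script>
<link rel="stylesheet" href="/assets/index-e5f6a7b8.css" />
</head>
<body>
<div id="app"></div>
</body>
</html>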
There are many different types of website content management systems. This post will explain the different types and include some examples.
No-Code CMS
A no-code CMS is one where the entire website is managed by the CMS. For example, with Webflow, all changes to your website must be done in the CMS using a WYSIWYG editor. You can only edit the HTML of the code widget; all other changes must be done using the UI. If you want to edit the source code of the entire site, you must export the entire site, but you can't import back any changes you make to the exported code.
Database-Driven CMS
A database-driven CMS is one that stores some or all data in a database. For example, WordPress is one such CMS. Web page and blog post content is stored in a database (usually MySQL). Unlike Webflow, however, you can edit the source code of the web page templates. WordPress has a built-in versioning system, so you can see a history of changes made to a page.
Headless CMS
A headless CMS is one that stores data remotely. Data is integrated with your website using an API. For example, Contentful is one such CMS. You can create custom content models in Contentful. Your website will need to fetch the data from a URL that returns JSON and then process that JSON to show it on a web page, e.g. by using a template engine. Data in Contentful is likely stored in a database behind the scenes.
Git-Based CMS
A git-based CMS stores data in git version control. Unlike many of the other CMSs, this type of CMS connects to your git repository just as a web developer does. It then shows an interface that allows content editors to make edits to files in the repo. CloudCannon is one such CMS. With CloudCannon, web developers can add the class "editable" to HTML tags within a page's source code. Then, the content of those tags will be the only elements that content editors can edit.
Frontmatter (YAML) CMS
Websites that are built using a static site generator (SSG) often include front matter. Front matter can come in different formats. One popular format is YAML. You can have YAML at the beginning of a page's source code, or you can have a separate, standalone .yaml file. YAML contains name-value pairs, so it basically represents data. Netlify CMS is one type of CMS that supports editing YAML. It also supports editing pages like blog posts. However, unlike WordPress, where editors can add blocks of content to a page's body, Netlify CMS requires the blocks of content to be predetermined upfront. So, for a blog post using Netlify CMS, editors can only use one widget to enter the entire body of the page. That widget is a rich text or markdown editor.
The CMSs above can also be categorized by whether they support structured or unstructured data. CloudCannon supports unstructured data because you can let editors edit specific elements of a complex page by marking those elements as "editable". YAML-based and database-backed CMSs support structured data because there is a clear name/value mapping.
My Preference
Since WordPress is the most popular CMS and it allows editors to add blocks of different types of content to the body of a page, I think it’s worth using. However, I don’t like that WordPress keeps data in a database and relies on PHP to build pages before serving them. Also, not all pages need to be edited by non-technical people. In fact, most pages will still be edited by developers who need full access to the source code. To accommodate the needs of both developers and non-developers, I would use GitHub to version all code changes and I would use a static site generator like Eleventy that is integrated with WordPress using the WordPress API. When a WordPress editor saves or publishes a page, WordPress can trigger a save_post() action with a callback function that calls a URL, e.g. a PHP script on Heroku at myapi.com/?postid=123.
function call_commit_script( $post_id ) {
$remote_url = 'https://www.myapi.com/?postid='.$post_id;
$access_token = '3&^&2lhl3@#lsjk756'; //some secret password
$args = array(
'headers' => array(
'Authorization' => 'Bearer ' . $access_token,
),
);
$response = wp_remote_get( $remote_url, $args );
if ( is_array( $response ) && ! is_wp_error( $response ) ) {
$headers = $response['headers']; // array of http header lines
$body = $response['body']; // use the content
}
$post_title = get_the_title( $post_id );
$post_url = get_permalink( $post_id );
$subject = 'A post has been updated';
$message = "A post/page has been updated:\n\n";
$message .= $post_title . ": " . $post_url . "\n\n" . $body;
// Send email to admin.
wp_mail( 'webmaster@qualys.com', $subject, $message );
}
add_action( 'save_post', 'call_commit_script' );
The PHP script on Heroku could then
verify the Authorization Bearer header contains the predetermined access token value, e.g. 3&^&2lhl3@#lsjk756. If it doesn’t, then the request didn’t come from an authorized origin and should be rejected.
pull any latest changes from GitHub
call the WordPress API to get the page content in JSON format, e.g. https://www.mywordpresssite.com/wp-json/wp/v2/pages/123
commit the JSON data to GitHub
use the static site generator to build the page using the JSON data as the data source
Note: the PHP script on Heroku, e.g. https://www.myapi.com/?postid=123, must be secured by SSL/TLS to encrypt the URL and headers so that attackers can't see the authorization header value.
For allowing non-technical people to
edit simple YAML files or frontmatter, I’d use Netlify CMS.
edit complex, structured data, I’d use Contentful.
edit specific, unstructured content, I’d use CloudCannon.