The goal of this article is to show a real application I made recently using Go, Vue.js and Docker, which is in production today. Tutorials are sometimes disappointing because they do not talk about real-life situations, so I tried to do things differently here. I won't comment on the whole code, as that would take ages, but I will explain the overall project structure, the important choices I made, and why I made them. I'll also try to highlight the parts of the code worth commenting on.
The code of the whole app is here on my GitHub; you may want to open it alongside this article.
Purpose of the Application
This application is dedicated to presenting data from various databases in a user-friendly way. The main features are the following:
- the user has to enter credentials in order to use the Single Page Application (SPA) frontend
- the user can select various interfaces in a left panel in order to retrieve data from various database tables
- the user can decide whether to only count the results returned by the database or to retrieve the full results
- if the results returned by the database are lightweight enough, they are returned by the API and displayed within the SPA inside a nice data table. The user can also decide to export them as CSV.
- if the results are too heavy, they are sent asynchronously to the user by email within a .zip archive
- as input criteria, the user can enter text or upload CSV files listing a large number of criteria
- some user inputs are select lists whose values are loaded dynamically from the database
This project is made up of two Docker containers:
- a container for a backend API written in Go. There is no need for an extra HTTP server here since Go already has a very efficient built-in one (net/http). This application exposes a RESTful API in order to receive requests from the frontend and return results retrieved from several databases.
- a container for a frontend interface using a Vue.js SPA. Here an Nginx server is needed in order to serve the static files.
Here is the Dockerfile of my Go application:
FROM golang
VOLUME /var/log/backend
COPY src /go/src
RUN go install go_project
EXPOSE 8000
CMD /go/bin/go_project
Dead simple as you can see. I’m using a pre-built Docker Golang image based on Debian.
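For reference, building and tagging this image is a one-liner (the tag here matches the one used in the docker run command later in this article):
docker build -t myaccount/myrepo:backend_v1 .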
My frontend Dockerfile is slightly bigger because I need to install Nginx, but still very simple:
FROM ubuntu:xenial
RUN apt-get update && apt-get install -y \
    nginx \
    && rm -rf /var/lib/apt/lists/*
COPY site.conf /etc/nginx/sites-available
RUN ln -s /etc/nginx/sites-available/site.conf /etc/nginx/sites-enabled
COPY .htpasswd /etc/nginx
COPY startup.sh /home/
RUN chmod 777 /home/startup.sh
COPY vue_project/dist /home/html/
EXPOSE 9000
CMD ["bash","/home/startup.sh"]
The startup.sh script simply starts the Nginx server. Here is my Nginx configuration (site.conf):
server {
    listen 9000;
    server_name api.example.com;

    # In order to avoid favicon errors on some browsers like IE
    # which would pollute the Nginx logs (do use the "=")
    location = /favicon.ico { access_log off; log_not_found off; }

    # Static folder that Nginx must serve
    location / {
        root /home/html;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    # robots.txt file generated on the fly
    location /robots.txt {
        return 200 "User-agent: *\nDisallow: /";
    }
}
As you can see, authentication is needed in order to use the frontend app. I implemented this with a .htpasswd file.
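In case you're not familiar with it, such a file can be generated with the htpasswd utility (shipped with apache2-utils on Debian/Ubuntu; the username below is just an example):
htpasswd -c .htpasswd my_user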
Actually, using Docker for the Go application is not a big advantage, since a compiled Go binary needs no external dependency, which already makes deployment very easy. Shipping a Go app inside Docker can still be useful if you need external files in addition to your binary (HTML templates or config files, for example). This is not the case here, but I still used Docker for consistency reasons: all my services are deployed through Docker, so I do not want special cases to deal with.
The Go application is made up of multiple files. This is just for readability reasons; everything could have been put into a single file. Keep in mind that when splitting the application like this, you need to export the things (variables, structs, functions, …) you want to use across multiple files (by capitalizing their first letter). During development you also need to run go run with a wildcard, like this:
go run src/go_project/*.go
I’m using a couple of Go external libraries (so few thanks to the already very comprehensive Go’s standard library!):
- gorilla/mux for the routing of REST API requests, especially for endpoints expecting positional arguments
- rs/cors for easier handling of CORS (which can be a nightmare)
- gopkg.in/gomail.v2 for email handling, especially for easy addition of attachments
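Since the email part is not detailed further in this article, here is a minimal sketch of what sending the zipped results with gomail could look like. The function name, addresses, and SMTP settings are hypothetical, not taken from the actual app:

import gomail "gopkg.in/gomail.v2"

// sendResultsByEmail sends the zipped results to the user (hypothetical helper)
func sendResultsByEmail(userEmail, zipPath string) error {
    m := gomail.NewMessage()
    m.SetHeader("From", "noreply@example.com")
    m.SetHeader("To", userEmail)
    m.SetHeader("Subject", "Your results are ready")
    m.SetBody("text/plain", "Please find your results attached.")
    // Attaching the .zip archive really is a one-liner:
    m.Attach(zipPath)
    d := gomail.NewDialer("smtp.example.com", 587, "smtp_user", "smtp_pass")
    return d.DialAndSend(m)
}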
Structure and tooling are much more complex on the frontend side. Here is an article dedicated to this. Actually, this complexity only affects development because in the end, once everything is compiled, you only get regular HTML/CSS/JS files that you simply copy into your Docker container.
Dev vs Prod
Configuration differs between development and production. During development I'm working on a locally replicated database, I'm logging errors to the console instead of a file, I'm using local servers, and so on. How can this be managed seamlessly?
In the Vue.js app I need to connect either to a local development API (127.0.0.1) or to the production API (api.example.com). So I created a dedicated http-constants.js file which returns either the local or the production address, depending on whether the app was launched with the npm run dev command or the npm run build command. See this article for more details.
In the Go app, multiple parameters change depending on whether I'm in development or production mode. In order to manage this, I'm using environment variables passed to the Go app by Docker. Setting configuration through environment variables is considered a best practice according to the twelve-factor app methodology. First we need to set the environment variables during container creation thanks to the -e option:
docker run --net my_network \
--ip 172.50.0.10 \
-p 8000:8000 \
-e "CORS_ALLOWED_ORIGIN=http://api.example.com:9000" \
-e "REMOTE_DB_HOST=10.10.10.10" \
-e "LOCAL_DB_HOST=172.50.0.1" \
-e "LOG_FILE_PATH=/var/log/backend/errors.log" \
-e "USER_EMAIL=me@example.com" \
-v /var/log/backend:/var/log/backend \
-d --name backend_v1_container myaccount/myrepo:backend_v1
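As a side note, the custom network referenced above (--net my_network with a fixed IP) would have been created beforehand with something like the following; the subnet is an assumption deduced from the IPs used in this article:
docker network create --subnet=172.50.0.0/16 my_network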
Then those variables are retrieved within the Go program thanks to the os.Getenv() function. Here is how I managed it in main.go:
// Initialize db parameters
var localHost string = getLocalHost()
var remoteHost string = getRemoteHost()
const (
// Local DB:
localPort = 5432
localUser = "my_local_user"
localPassword = "my_local_pass"
localDbname = "my_local_db"
// Remote DB:
remotePort = 5432
remoteUser = "my_remote_user"
remotePassword = "my_remote_pass"
remoteDbname = "my_remote_db"
)
// getLogFilePath gets log file path from env var set by Docker run
func getLogFilePath() string {
envContent := os.Getenv("LOG_FILE_PATH")
return envContent
}
// getLocalHost gets local db host from env var set by Docker run.
// If no env var set, set it to localhost.
func getLocalHost() string {
envContent := os.Getenv("LOCAL_DB_HOST")
if envContent == "" {
envContent = "127.0.0.1"
}
return envContent
}
// getRemoteHost gets remote db host from env var set by Docker run.
// If no env var set, set it to localhost.
func getRemoteHost() string {
envContent := os.Getenv("REMOTE_DB_HOST")
if envContent == "" {
envContent = "127.0.0.1"
}
return envContent
}
// getCorsAllowedOrigin gets the CORS allowed origin from env var set by Docker run.
// If no env var set, default to the local dev frontend.
func getCorsAllowedOrigin() string {
envContent := os.Getenv("CORS_ALLOWED_ORIGIN")
if envContent == "" {
envContent = "http://localhost:8080"
}
return envContent
}
// getUserEmail gets user email of the person who will receive the results
// from env var set by Docker run.
// If no env var set, set it to admin.
func getUserEmail() string {
envContent := os.Getenv("USER_EMAIL")
if envContent == "" {
envContent = "admin@example.com"
}
return envContent
}
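All these getters share the same pattern, so they could be factored into a small helper; this is just a sketch, not the app's actual code:

// getEnvWithDefault returns the value of the env var named key,
// or fallback if the env var is not set (hypothetical helper)
func getEnvWithDefault(key, fallback string) string {
    if v := os.Getenv(key); v != "" {
        return v
    }
    return fallback
}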
As you can see, if the production environment variable is not set, we fall back to a default value suited to local development. Those dedicated functions can then be used anywhere in the program. For example, here is how I'm handling logging (log to the console in development mode, and to a file in production):
log.SetFlags(log.LstdFlags | log.Lshortfile) // add line number to logger
if logFilePath := getLogFilePath(); logFilePath != "" { // write to log file only if logFilePath is set
f, err := os.OpenFile(logFilePath, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
log.Fatal(err)
}
defer f.Close()
log.SetOutput(f)
}
Note that logging also involves a shared volume: I want my log files to be easily accessible from the Docker host. That's why I added -v /var/log/backend:/var/log/backend to the docker run command above and put a dedicated VOLUME directive in the Dockerfile.
Design of the Frontend App with Vuetify.js
I have never been fond of spending days working on design, especially for small apps like this one. That's why I'm using Vuetify.js, a great framework on top of Vue.js that provides ready-to-use, beautiful components. Vuetify implements Google's Material Design, which looks very good to me.
Memory Usage
I’ve faced quite a lot of memory issues while building this app due to the fact that some SQL queries can possibly return a huge amount of data.
In the Go Backend
Rows returned from the database are put into an array of structs. When millions of rows are returned, manipulating this array becomes very costly in terms of memory. The solution is to put as much logic as you can into your SQL query instead of your Go program. PostgreSQL is excellent at optimizing performance, and in my case the databases run on PostgreSQL 10, which improved performance considerably thanks to parallel execution of some operations. Plus, my databases have dedicated resources, so I should use them as much as possible.
Regarding the CSV generation, you also need to consider whether you should build the CSV in memory or write it to disk. Personally, I'm writing it to disk in order to reduce memory usage (see the sketch below).
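Here is a minimal sketch of what streaming rows straight from the database to a CSV file on disk could look like; the function, the column layout, and the path handling are illustrative, not the app's actual code:

import (
    "database/sql"
    "encoding/csv"
    "os"
)

// writeResultsToCSV streams SQL rows to a CSV file one row at a time,
// so memory usage stays flat no matter how many rows are returned
func writeResultsToCSV(rows *sql.Rows, path string) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()
    w := csv.NewWriter(f)
    w.Comma = ';' // same separator as the frontend CSV export
    defer w.Flush()
    // Header row
    if err := w.Write([]string{"Company Id", "Company Name"}); err != nil {
        return err
    }
    // Each row is written to disk immediately instead of being
    // accumulated in an in-memory array
    for rows.Next() {
        var id, name string
        if err := rows.Scan(&id, &name); err != nil {
            return err
        }
        if err := w.Write([]string{id, name}); err != nil {
            return err
        }
    }
    return rows.Err()
}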
Still, I also had to increase the RAM of my server.
In the Vue.js Frontend
Clearly, a browser cannot handle too much content: if too many rows are to be displayed, rendering will fail. The first solution is what I did: above a certain number of rows returned by the database, send the results by email in a .zip archive. Another solution would be to paginate the results in the browser, with each new page actually triggering a new request to the server (under the hood, you would use LIMIT and OFFSET in your SQL query, as sketched below).
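Here is a minimal sketch of what such server-side pagination could look like on the Go side; fetchPage and the query are hypothetical, since the real app sends heavy results by email instead:

import "database/sql"

// fetchPage returns one page of results; LIMIT/OFFSET keeps each
// response small enough for the browser to render (hypothetical helper)
func fetchPage(db *sql.DB, page, pageSize int) (*sql.Rows, error) {
    offset := (page - 1) * pageSize
    return db.Query(
        `SELECT comp.id, comp.name
         FROM company AS comp
         ORDER BY comp.id
         LIMIT $1 OFFSET $2`,
        pageSize, offset,
    )
}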
Touchy Parts of Code
Here are some parts of the code that are worth commenting on, in my opinion, because they are pretty original or tricky.
Multiple Asynchronous Calls with Axios
My frontend contains multiple HTML selects, and I want the values of these lists to be loaded dynamically from the API. For this I need axios.all() and axios.spread() in order to make multiple parallel API calls with Axios. Axios' documentation is not that good, in my opinion. It is important to understand that you have two choices here:
- catching errors for each request inside axios.all: HTTP.get('/get-countries-list').catch(...)
- catching errors globally after axios.spread: .then(axios.spread(...)).catch(...)
The first option allows you to display precise error messages depending on which request raised an error, but it is non-blocking, so we still enter axios.spread() despite the error, and some of the parameters will be undefined inside axios.spread(), so you need to handle that. With the second option, a global error is raised as soon as at least one of the requests fails, and we never enter axios.spread().
I chose the second option: if at least one of the API calls fails, then all the calls fail:
created () {
axios.all([
HTTP.get('/get-countries-list'),
HTTP.get('/get-companies-industries-list'),
HTTP.get('/get-companies-sizes-list'),
HTTP.get('/get-companies-types-list'),
HTTP.get('/get-contacts-industries-list'),
HTTP.get('/get-contacts-functions-list'),
HTTP.get('/get-contacts-levels-list')
])
// If all requests succeed
.then(axios.spread(function (
// Each response comes from the get query above
countriesResp,
companyIndustriesResp,
companySizesResp,
companyTypesResp,
contactIndustriesResp,
contactFunctionsResp,
contactLevelsResp
) {
// Put countries retrieved from API into an array available to Vue.js
this.countriesAreLoading = false
this.countries = []
for (let i = countriesResp.data.length - 1; i >= 0; i--) {
this.countries.push(countriesResp.data[i].countryName)
}
// Remove France and put it at the top for convenience
let indexOfFrance = this.countries.indexOf('France')
this.countries.splice(indexOfFrance, 1)
// Sort the data alphabetically for convenience
this.countries.sort()
this.countries.unshift('France')
// Put company industries retrieved from API into an array available to Vue.js
this.companyIndustriesAreLoading = false
this.companyIndustries = []
for (let i = companyIndustriesResp.data.length - 1; i >= 0; i--) {
this.companyIndustries.push(companyIndustriesResp.data[i].industryName)
}
this.companyIndustries.sort()
[...]
}
// bind(this) is needed in order to inject this of Vue.js (otherwise
// this would be the axios instance)
.bind(this)))
// In case one of the get request failed, stop everything and tell the user
.catch(e => {
alert('Could not load the full input lists in form.')
this.countriesAreLoading = false
this.companyIndustriesAreLoading = false
this.companySizesAreLoading = false
this.companyTypesAreLoading = false
this.contactIndustriesAreLoading = false
this.contactFunctionsAreLoading = false
this.contactLevelsAreLoading = false
})
},
Generate CSV in Javascript
I wish there were a straightforward way to create a CSV in JavaScript and serve it to the user as a download, but it seems there isn't, so here is my solution:
generateCSV: function () {
let csvArray = [
'data:text/csv;charset=utf-8,' +
'Company Id;' +
'Company Name;' +
'Company Domain;' +
'Company Website;' +
[...]
'Contact Update Date'
]
this.resultsRows.forEach(function (row) {
let csvRow = row['compId'] + ';' +
row['compName'] + ';' +
row['compDomain'] + ';' +
row['compWebsite'] + ';' +
[...]
row['contUpdatedOn']
csvArray.push(csvRow)
})
let csvContent = csvArray.join('\r\n')
let encodedUri = encodeURI(csvContent)
let link = document.createElement('a')
link.setAttribute('href', encodedUri)
link.setAttribute('download', 'companies_and_contacts_extracted.csv')
document.body.appendChild(link)
link.click()
}
}
Get Data Sent by Axios in Go
Axios’ POST data are necessarily sent as JSON. Unfortunately currently there is no way to change this. Go has a useful PostFormValue function that easily retrieves POST data encoded as form data but unfortunately it does not handle JSON encoded data, so I had to unmarshal JSON to a struct in order to retrieve POST data:
body, err := ioutil.ReadAll(r.Body)
if err != nil {
err = CustErr(err, "Cannot read request body.\nStopping here.")
log.Println(err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Store JSON data in a userInput struct
var userInput UserInput
err = json.Unmarshal(body, &userInput)
if err != nil {
err = CustErr(err, "Cannot unmarshal JSON.\nStopping here.")
log.Println(err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
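For the record, the UserInput struct itself is not shown in this article; a hypothetical version could look like this, with field names that are purely illustrative:

// UserInput mirrors the JSON payload sent by the frontend form
// (hypothetical fields, not the app's actual ones)
type UserInput struct {
    Countries    []string `json:"countries"`
    CompanySizes []string `json:"companySizes"`
    CountOnly    bool     `json:"countOnly"`
}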
Variadic Functions in Go
The user can enter a variable number of criteria that will be used within a single SQL query; basically, each new criterion adds a new SQL WHERE clause. As we do not know in advance how many parameters will be passed to the database/sql Query() function, we need to rely on Query() being variadic. A variadic function is a function that accepts a variable number of parameters; in Python you would use *args or **kwargs, while in Go we use the ... notation. The first argument of Query() is the SQL query string, and the second is a slice of empty interfaces containing all the parameters:
rows, err := db.Query(sqlStmtStr, sqlArgs...)
if err != nil {
err = CustErr(err, "SQL query failed.\nStopping here.")
log.Println(err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return compAndContRows, err
}
defer rows.Close()
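The construction of sqlStmtStr and sqlArgs is not shown in this article, so here is a minimal sketch of how such a dynamic query could be assembled; buildQuery and the criteria are hypothetical:

import (
    "fmt"
    "strings"
)

// buildQuery turns a variable number of criteria into an SQL statement
// with numbered placeholders and a matching slice of arguments
func buildQuery(countries []string) (string, []interface{}) {
    var where []string
    var args []interface{}
    for _, c := range countries {
        // $1, $2, ... placeholders are numbered as arguments are appended
        args = append(args, c)
        where = append(where, fmt.Sprintf("country.name = $%d", len(args)))
    }
    stmt := "SELECT comp.id FROM company AS comp"
    if len(where) > 0 {
        stmt += " WHERE " + strings.Join(where, " OR ")
    }
    return stmt, args
}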
Managing CORS
Basically, CORS is a security measure that prevents a frontend from retrieving data from a backend that is not located at the same origin. Here is a nice explanation of why CORS is important. In order to comply with this behaviour, you should handle CORS properly on the API server side. The most important CORS property to set is the allowed origins. It's not that easy to handle in Go, since it implies first answering a "preflight" request (using the HTTP OPTIONS method) and then setting the proper HTTP headers.
The best solution in Go, in my opinion, is the rs/cors library, which lets us handle CORS like this:
router := mux.NewRouter()
c := cors.New(cors.Options{
    // The allowed origin comes from the getCorsAllowedOrigin()
    // function defined earlier (local dev frontend by default)
    AllowedOrigins: []string{getCorsAllowedOrigin()},
})
// Wrap the router so every route gets the proper CORS headers
handler := c.Handler(router)
NULL Values in Go
When making SQL requests to the database, you'll probably get some NULL values back. Those NULL values must be handled explicitly in Go, especially if you want to marshal the results to JSON. You have two solutions:
- use pointers for the nullable values in the struct that will receive the results. It works, but it forces you to deal with possible nil pointers everywhere the struct is used.
- use the database/sql nullable types: replace string with sql.NullString, int with sql.NullInt64, bool with sql.NullBool, and time.Time with sql.NullTime. But then you obtain something like {"String":"Smith","Valid":true} when marshaling, which is not directly usable as JSON, so it requires extra steps before marshaling.
I implemented the second option and created a custom type plus a method implementing the json.Marshaler interface. Note that, using this method, I could have turned NULL into an empty string, but here I wanted the NULL values to be kept and sent to the frontend as JSON null:
type JsonNullString struct {
sql.NullString
}
func (v JsonNullString) MarshalJSON() ([]byte, error) {
if v.Valid {
return json.Marshal(v.String)
} else {
return json.Marshal(nil)
}
}
type CompAndContRow struct {
CompId string `json:"compId"`
CompName JsonNullString `json:"compName"`
CompDomain JsonNullString `json:"compDomain"`
CompWebsite JsonNullString `json:"compWebsite"`
[...]
}
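As a quick illustration of how this plays out when scanning rows (the query, columns, and variable names are illustrative, not the app's actual ones):

// Assuming db is an open *sql.DB and compId comes from user input
var row CompAndContRow
err := db.QueryRow(
    "SELECT comp.id, comp.name FROM company AS comp WHERE comp.id = $1",
    compId,
).Scan(&row.CompId, &row.CompName)
if err != nil {
    log.Println(err)
}
// json.Marshal(row) now renders a NULL name as JSON null
// instead of {"String":"","Valid":false}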
Concatenation of Multiple Rows in SQL
SQL is a very old but still very powerful language. On top of that, PostgreSQL provides very useful functions that allow a lot of work to be done in SQL itself rather than by post-processing the results in code (which is not memory/CPU efficient). Here I have quite a lot of SQL LEFT JOINs that return many very similar rows, and the problem is that I want some of these rows to be concatenated into a single row. For example, a company can have multiple emails, and I want all of these emails to appear in the same row, separated by this symbol: ¤. Doing this in Go would mean parsing the array of SQL results a huge number of times; with millions of rows it would be very slow, and it could even crash if the server does not have enough memory. Fortunately, doing it in PostgreSQL is very easy using the string_agg() function combined with GROUP BY and DISTINCT:
SELECT comp.id, string_agg(DISTINCT companyemail.email,'¤')
FROM company AS comp
LEFT JOIN companyemail ON companyemail.company_id = comp.id
WHERE comp.id = $1
GROUP BY comp.id
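On the Go side, the aggregated string can then be split back into a slice when needed; this is just a sketch with illustrative variable names:

var compID string
var emails sql.NullString
if err := row.Scan(&compID, &emails); err != nil {
    log.Println(err)
    return
}
var emailList []string
if emails.Valid {
    // Recover the individual emails aggregated by string_agg()
    emailList = strings.Split(emails.String, "¤")
}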
Conclusion
I’m covering a wide range of topics inside a single article here: Go, Vue.js, Javascript, SQL, Docker, Nginx… I hope you found useful tips that you’ll be able to reuse in you own application.
If you have questions about the app feel free to ask. If you think I could have optimized better some parts of this code, I would love to hear it. This article is also for me a way to get critical feedbacks and question my own work!