Development

Install SSL/TLS on your development machine

This one took me a lot of time to get my head around!

Introduction

OpenSSL is a TLS/SSL and crypto library (https://www.openssl.org): it has many commands and a lot of configuration options.
openssl is a command line tool that can be used for:

  • Creation of key parameters
  • Creation of X.509 certificates, CSRs and CRLs
  • Calculation of message digests
  • Encryption and decryption
  • SSL/TLS client and server tests
  • Handling of S/MIME signed or encrypted mail
  • And more…
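
For example, here are a few quick illustrations of those uses (file and host names are just examples):

$ openssl dgst -sha256 myfile.txt                          # message digest
$ openssl enc -aes-256-cbc -in myfile.txt -out myfile.enc  # symmetric encryption
$ openssl s_client -connect www.openssl.org:443            # SSL/TLS client test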

Terminology

CA = Certificate Authority: entity that issues digital certificates

CSR = Certificate Signing Request

PKI = Public Key Infrastructure

PEM = Privacy Enhanced Mail

X509 File Extensions

The first thing we have to understand is what each type of file extension is. There is a lot of confusion about what DER, PEM, CRT, and CER are, and many have incorrectly said that they are all interchangeable. While in certain cases some can be interchanged, the best practice is to identify how your certificate is encoded and then label it correctly. Correctly labeled certificates will be much easier to manipulate.

Encodings (also used as extensions)

  • .DER = The DER extension is used for binary DER encoded certificates. These files may also bear the CER or the CRT extension.   Proper English usage would be “I have a DER encoded certificate” not “I have a DER certificate”.
  • .PEM = The PEM extension is used for different types of X.509v3 files which contain ASCII (Base64) armored data prefixed with a "-----BEGIN ..." line.

Common Extensions

  • .CRT = The CRT extension is used for certificates. The certificates may be encoded as binary DER or as ASCII PEM. The CER and CRT extensions are nearly synonymous. Most common among *nix systems.
  • .CER = alternate form of .CRT (Microsoft convention). You can use Windows to convert .crt to .cer (either DER encoded .cer or base64 [PEM] encoded .cer). The .cer file extension is also recognized by IE as a command to run an MS cryptoAPI command (specifically rundll32.exe cryptext.dll,CryptExtOpenCER), which displays a dialogue for importing and/or viewing certificate contents.
  • .KEY = The KEY extension is used both for public and private PKCS#8 keys. The keys may be encoded as binary DER or as ASCII PEM.

The only time CRT and CER can safely be interchanged is when the encoding type is identical (i.e. PEM encoded CRT = PEM encoded CER).
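
If you are unsure how a certificate is encoded, or you need to convert between the two encodings, openssl can do it (file names below are just examples):

$ openssl x509 -inform DER -in cert.der -outform PEM -out cert.pem   # DER to PEM
$ openssl x509 -inform PEM -in cert.pem -outform DER -out cert.der   # PEM to DER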

First, let's set up your CA

Create a folder to hold our config and keys: sudo mkdir /etc/apache2/ssl && cd /etc/apache2/ssl

Create a basic configuration file: touch openssl-ca.cnf

Open it with sudo nano openssl-ca.cnf and put the following inside:

# openssl-ca.cnf

HOME = .
RANDFILE = $ENV::HOME/.rnd

####################################################################
[ ca ]
default_ca = CA_default # The default ca section

[ CA_default ]

default_days = 1000 # how long to certify for
default_crl_days = 30 # how long before next CRL
default_md = sha256 # use public key default MD
preserve = no # keep passed DN ordering

x509_extensions = ca_extensions # The extensions to add to the cert

email_in_dn = no # Don't concat the email in the DN
copy_extensions = copy # Required to copy SANs from CSR to cert

####################################################################
[ req ]
default_bits = 4096
default_keyfile = cakey.pem
distinguished_name = ca_distinguished_name
x509_extensions = ca_extensions
string_mask = utf8only

####################################################################
[ ca_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = MA

stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Drâa-Tafilalet

localityName = Locality Name (eg, city)
localityName_default = Ouarzazate

organizationName = Organization Name (eg, company)
organizationName_default = Test CA, Limited

organizationalUnitName = Organizational Unit (eg, division)
organizationalUnitName_default = Server Research Department

commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = Test CA

emailAddress = Email Address
emailAddress_default = test@mydomain.dev

####################################################################
[ ca_extensions ]

subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always, issuer
basicConstraints = critical, CA:true
keyUsage = keyCertSign, cRLSign

Then, execute the following. The -nodes omits the password or passphrase so you can examine the certificate.

$ openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -sha256 -nodes -out cacert.pem -outform PEM

After the command executes, cacert.pem will be your certificate for CA operations, and cakey.pem will be the private key. Recall the private key does not have a password or passphrase.

You can dump the certificate with the following:

$ openssl x509 -in cacert.pem -text -noout

And test its purpose with the following:

$ openssl x509 -purpose -in cacert.pem -inform PEM

Second: let's sign an end-entity certificate (a.k.a. a server or user certificate)

For part two, I’m going to create another conf file that’s easily digestible.

First, touch the openssl-server.cnf (you can make one of these for user certificates also).

$ touch openssl-server.cnf

Then open it and add the following.

#openssl-server.cnf
HOME = .
RANDFILE = $ENV::HOME/.rnd

####################################################################
[ req ]
default_bits = 2048
default_keyfile = serverkey.pem
distinguished_name = server_distinguished_name
req_extensions = server_req_extensions
string_mask = utf8only

####################################################################
[ server_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = US

stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = MD

localityName = Locality Name (eg, city)
localityName_default = Baltimore

organizationName = Organization Name (eg, company)
organizationName_default = Test CA, Limited

commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = Test CA

emailAddress = Email Address
emailAddress_default = test@mydomain.dev

####################################################################
[ server_req_extensions ]

subjectKeyIdentifier = hash
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
nsComment = "OpenSSL Generated Certificate"

####################################################################
[ alternate_names ]

DNS.1 = mydomain.dev
DNS.2 = www.mydomain.dev
DNS.3 = mail.mydomain.dev
DNS.4 = ftp.mydomain.dev

# IPv4 localhost
IP.1 = 127.0.0.1

# IPv6 localhost
IP.2 = ::1

Then, create the server certificate request.

Be sure to omit -x509. Adding -x509 will create a certificate, and not a request.
$ openssl req -config openssl-server.cnf -newkey rsa:2048 -sha256 -nodes -out servercert.csr -outform PEM

After this command executes, you will have a request in servercert.csr and a private key in serverkey.pem.

And you can inspect it again.

$ openssl req -text -noout -verify -in servercert.csr

Next, you have to sign it with your CA.

You are almost ready to sign the server’s certificate by your CA. The CA’s openssl-ca.cnf  needs two more sections before issuing the command.

First, open openssl-ca.cnf and add the following two sections:

####################################################################
[ signing_policy ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional

####################################################################
[ signing_req ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment

Second, add the following to the [ CA_default ] section of openssl-ca.cnf. I left them out earlier because they can complicate things (they were unused at the time). Now you will see how they are used, so hopefully they will make sense.

base_dir = .
certificate = $base_dir/cacert.pem # The CA certificate
private_key = $base_dir/cakey.pem # The CA private key
new_certs_dir = $base_dir # Location for new certs after signing
database = $base_dir/index.txt # Database index file
serial = $base_dir/serial.txt # The current serial number

unique_subject = no # Set to 'no' to allow creation of
# several certificates with same subject.

Third, touch index.txt and serial.txt:

$ touch index.txt
$ echo '01' > serial.txt

Then, perform the following:

$ openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out servercert.pem -infiles servercert.csr

You will be asked to confirm signing the certificate.

After the command executes, you will have a freshly minted server certificate in servercert.pem. The private key was created earlier and is available in serverkey.pem .

Finally, you can inspect your freshly minted certificate with the following:
$ openssl x509 -in servercert.pem -text -noout
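
As an extra check (not strictly required), you can also verify that it chains back to your CA:

$ openssl verify -CAfile cacert.pem servercert.pem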

Add the certification to your virtual host

Edit your virtual host:

sudo nano /etc/apache2/sites-available/mydomain.dev.conf

<VirtualHost *:443>
    ServerName mydomain.dev
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/servercert.pem
    SSLCertificateKeyFile /etc/apache2/ssl/serverkey.pem
    SSLCertificateChainFile /etc/apache2/ssl/cacert.pem
    DocumentRoot /var/www/mydomain/web
    <Directory /var/www/mydomain/web>
        AllowOverride None
        Order Allow,Deny
        Allow from All
        <IfModule mod_rewrite.c>
            Options -MultiViews
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ app_dev.php [QSA,L]
            RewriteEngine On
            RewriteCond %{HTTP:Authorization} ^(.*)
            RewriteRule .* - [e=HTTP_AUTHORIZATION:%1]
        </IfModule>
    </Directory>
    <Directory /var/www/mydomain/>
        Options FollowSymlinks
    </Directory>
    <Directory /var/www/mydomain/web/bundles>
        <IfModule mod_rewrite.c>
            RewriteEngine Off
        </IfModule>
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/mydomain.dev_error.log
    CustomLog ${APACHE_LOG_DIR}/mydomain.dev_access.log combined
</VirtualHost>
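
For this virtual host to actually serve HTTPS, mod_ssl and the site need to be enabled. On a Debian/Ubuntu-style Apache setup that would look roughly like this (the site file name is assumed from the example above):

$ sudo a2enmod ssl
$ sudo a2ensite mydomain.dev.conf
$ sudo apache2ctl configtest
$ sudo service apache2 restart

If mydomain.dev only exists locally, also point it to your machine in /etc/hosts:

$ echo '127.0.0.1 mydomain.dev www.mydomain.dev' | sudo tee -a /etc/hosts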

Add the certificate to Google Chrome

See these steps: https://css-tricks.com/trusting-ssl-locally-mac/

  1. Open chrome://settings-frame/
  2. Scroll down the page and click on: Show advanced settings…
  3. Scroll down and open: Manage certificates on the HTTPS/SSL section
  4. The Keychain Access screen will be displayed. Chrome uses the Keychain Access utility built into macOS to manage digital certificates
  5. Under ‘Keychains’ on the left, select ‘Login’ and click ‘My Certificates’ in the ‘Category’ column.
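
Alternatively (my own shortcut, not part of the original steps), you can trust the CA certificate from the macOS command line instead of clicking through Keychain Access:

$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain cacert.pem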

Final notes

Earlier, you added the following to CA_default: copy_extensions = copy. This copies the extensions provided by the person making the request.

If you omit copy_extensions = copy, then your server certificate will lack the Subject Alternative Names (SANs) like www.mydomain.dev and mail.mydomain.dev.

If you use copy_extensions = copy but don't look over the request, then the requester might be able to trick you into signing something like a subordinate root (rather than a server or user certificate), which means they would be able to mint certificates that chain back to your trusted root. Be sure to verify the request with openssl req -verify before signing.
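
A quick way to review a request before signing it (a simple sketch; the grep patterns assume the usual openssl text output) is to dump it and check that Basic Constraints says CA:FALSE and that the SANs are the ones you expect:

$ openssl req -in servercert.csr -noout -text | grep -A 1 'Basic Constraints'
$ openssl req -in servercert.csr -noout -text | grep -A 1 'Subject Alternative Name'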

If you omit unique_subject or set it to yes, then you will only be allowed to create one certificate under the subject’s distinguished name.
unique_subject = yes # Set to 'no' to allow creation of
# several certificates with same subject.

Trying to create a second certificate while experimenting will result in the following when signing your server’s certificate with the CA’s private key:
Sign the certificate? [y/n]:Y
failed to update database
TXT_DB error number 2

So unique_subject = no is perfect for testing.


If you want to ensure the Organizational Name is consistent between self-signed CAs, Subordinate CA and End-Entity certificates, then add the following to your CA configuration files:
[ policy_match ]
organizationName = match

If you want to allow the Organizational Name to change, then use:
[ policy_match ]
organizationName = supplied


There are other rules concerning the handling of DNS names in X.509/PKIX certificates; refer to the relevant RFCs and the CA/Browser Forum documents for the details.

RFC 6797 and RFC 7469 are worth singling out because they are more restrictive than the other RFCs and CA/B documents. RFC 6797 and RFC 7469 do not allow an IP address, either.

Thanks to Jeff at Stack Overflow for his inspiring post.

Linux: awesome helper commands

Search all log files for the word "debug", case insensitive:

$ grep -i debug /var/www/html/register_by_email/var/logs/*.log

Delete all files except one:

$ sudo ls | grep -v 000-default.conf | sudo xargs rm

The same but ignore errors:

$ sudo rm /etc/apache2/ssl/*.* &>/dev/null && cd /etc/apache2/sites-available/ && sudo ls | grep -v 000-default.conf | sudo xargs rm &>/dev/null && cd /etc/apache2/sites-enabled/ && sudo ls | grep -v 000-default.conf | sudo xargs rm &>/dev/null

Advanced copy of files using regular expressions: from a folder with CSV files named using a date format like DK20170514.csv, this command will copy only the files with dates from 14 May to 12 June.

$ sudo find -regextype posix-extended -regex '.*/??20170(5(1[4-9]|2[0-9])|6(0[0-9]|1[0-2]))\.csv' | xargs cp -t ../../../special/41393/ &>/dev/null

Note the .*/ at the beginning of the regex: find needs it to work!

Watch how cron jobs work

watch -n 0.1 'ls -1 | wc -l'

Copy a log file from remote to local using ssh

scp -i ~/.ssh/abdel some-server:/var/www/html/myapp/var/logs/dev.log local_dev.log

Display the end of a file: useful for live debugging of single or multiple log files

tail -n 4 -f var/logs/*.*

Debugging GNUPG

I had this error on my server:
PHP Startup: Unable to load dynamic library '/usr/lib/php/20160303/gnupg.so'

And this is how I fixed it:

Short answer

Remove the old php-dev package and install one that fits your PHP version, then rebuild the extension:

sudo pecl uninstall gnupg
sudo apt-get remove php5.6-dev
sudo apt-get install php7.1-dev
sudo pecl install gnupg
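
Depending on your setup you may also need to enable the extension for PHP 7.1 after the pecl install; on Debian/Ubuntu that would be roughly (paths assumed):

echo "extension=gnupg.so" | sudo tee /etc/php/7.1/mods-available/gnupg.ini
sudo phpenmod gnupg
php -m | grep gnupg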

Long answer

Extension dir for php 5.6 is `/usr/lib/php/20131226` and extension dir for php 7.0 is `/usr/lib/php/20151012` as this command shows:

php -r "print phpinfo();" | grep "extension_dir"

Pecl installs gnupg in `/usr/lib/php/20131226/gnupg.so` because pecl was installed while PHP 5.6 was enabled:

pecl list-files gnupg

**Conclusion:**

PHP 7.0 uses a different extension directory than the one where gnupg.so is installed.

**First try which didn’t work**:

Create a symlink for gnupg.so inside the PHP 7.0 extension directory that points to gnupg.so inside the PHP 5.6 one:

sudo ln -s /usr/lib/php/20131226/gnupg.so /usr/lib/php/20151012/gnupg.so

**Results in**:

Warning: PHP Startup: gnupg: Unable to initialize module
Module compiled with module API=20131226
PHP compiled with module API=20151012

**Second try which also didn’t work**:

1. Uninstall pecl extension: `sudo pecl uninstall gnupg`
2. Activate php v 7.0
3. Install gnupg again: `sudo pecl install gnupg`

Gives the same compile error.

**Another trial and error**:

Install a compiled version of gnupg that works with PHP 7.0: see the PHP [docs here][1]

Download latest pecl extension source from https://github.com/php-gnupg/php-gnupg

Find which configuration file is used by your PHP 7.0 installation:

php --ini
Configuration File (php.ini) Path: /etc/php/7.0/cli

Compile the pecl extension:
cd {downloaded extension folder}
phpize
./configure --with-php-config=/home/vagrant/Downloads/php-src-PHP-7.0/scripts/php-config
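
After configure finishes, the usual next steps would be (assuming the build succeeds):

make
sudo make install

Then add extension=gnupg.so to the php.ini reported by php --ini.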

**Check if gnupg is installed**

php -r 'var_dump(function_exists("gnupg_decrypt"));'
[1]: http://php.net/manual/en/install.pecl.phpize.php

Compile php dev source
Download: https://github.com/php/php-src/tree/PHP-7.0
Create configuration build: ./buildconf
Init a config: ./configure --prefix=/usr/local/php7/7.0.0 --localstatedir=/usr/local/var --sysconfdir=/usr/local/etc/php/7 --with-config-file-path=/usr/local/etc/php/7 --with-config-file-scan-dir=/usr/local/etc/php/7/conf.d --mandir=/usr/local/php7/7.0.0/share/man

You can then check out the branch you want to build, for example:

PHP 5.4: git checkout PHP-5.4
PHP 5.5: git checkout PHP-5.5
PHP 5.6: git checkout PHP-5.6
PHP 7.0: git checkout PHP-7.0
PHP HEAD: git checkout master
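
From there, a typical build would be (just a sketch of the usual flow, using the configure options shown above):

./buildconf
./configure   # with the options from "Init a config" above
make -j4
sudo make install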

Using PuTTY to connect to an SSH tunnel

If in Linux you use this command to make a tunnel:

ssh tunnel@my.example.de -p 32642 -L 3308:something:3306 -N -i ~/.ssh/id_rsa

Then this is how you translate the above command into a PuTTY configuration session.

Download PuTTY and install it:

https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html

Configure PuTTY:
  1. In the “Session” category:
    1. Create a new session by typing its name in “Saved Sessions”.
    2. Fill “Host name or IP address” (my.example.de) and “Port” (32642) and make “connection type” SSH
  2. Go to “Connection” => “Data”:
    1. Set “Auto-login username” (tunnel)
  3. Go to “Connection” => “SSH”:
    1. Check “Don’t start a shell or command at all”
  4. Go to “Connection” => “SSH” => “Auth”:
    1. Click on “Browse..” and load a ppk key (or convert other private keys using the included Puttygen software)
  5. Go to “Connection” => “SSH” => “Tunnels”:
    1. Check “Local ports accept connections from other hosts”
    2. Fill “Source port” (3308) & “Destination” (something:3306) and click on “Add”
  6. Go to “Session” and save this config and click on “Open” to start the session

Test if the port is connected by running CMD as admin and typing netstat -a -b; you should see that port 3308 is in use.

Next time you want to open a tunnel, open PuTTY and double-click on the name of your saved session and voila!

Connecting to MySQL through SSH tunnel

In this post we see how to connect to a MySQL server using an SSH tunnel and local port forwarding.

This command will create a tunnel in the background, then connect to MySQL through it:

ssh tunnel@example.example.com -p 32642 -L 3308:example:3306 -N -i ~/.ssh/abdel -f
mysql -h 127.0.0.1 -P 3308 -u user -p db

This is what each parameter means:

-p port
Port to connect to on the remote host. This can be specified on a per-host basis in the configuration file.

-L [bind_address:]port:host:hostport
Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and a connection is made to host port hostport from the remote machine. Port forwardings can also be specified in the configuration file. IPv6 addresses can be specified with an alternative syntax: [bind_address/]port/host/hostport or by enclosing the address in square brackets. Only the superuser can forward privileged ports. By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of "localhost" indicates that the listening port be bound for local use only, while an empty address or '*' indicates that the port should be available from all interfaces.

-N
Do not execute a remote command. This is useful for just forwarding ports (protocol version 2 only).

-f
Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm.

-i identity_file
Selects a file from which the identity (private key) for public key authentication is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ed25519 and ~/.ssh/id_rsa for protocol version 2. Identity files may also be specified on a per-host basis in the configuration file. It is possible to have multiple -i options (and multiple identities specified in configuration files). ssh will also try to load certificate information from the filename obtained by appending -cert.pub to identity filenames.


To check if the command was successful run: sudo netstat -tulpn | grep "3308"

You should see something like:

$ sudo netstat -tulpn | grep "3308"
tcp 0 0 127.0.0.1:3308 0.0.0.0:* LISTEN 14634/ssh
tcp6 0 0 ::1:3308 :::* LISTEN 14634/ssh

Using an Ansible role

To list jobs interacting with ssh: initctl list | grep ssh

To stop the service:
stop autossh-tunnel-client
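
Those commands assume Upstart; on a systemd-based system the rough equivalent (assuming a unit with that name exists) would be:

systemctl list-units | grep ssh
sudo systemctl stop autossh-tunnel-client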


My own Ansible role:

---
- name: Install package
  apt:
    name: ssh
    state: present
  become: yes

- name: Copy key file(s)
  copy:
    src: "{{ item.src }}"
    dest: "{{ item.dest | default(item.src | basename) }}"
    owner: "{{ item.owner | default('root') }}"
    group: "{{ item.group | default(item.owner) | default('root') }}"
    mode: "{{ item.mode | default('0600') }}"
    validate: 'echo %s'
  with_items: "{{ params['ssh_tunnel'].keys_map }}"
  become: yes

- name: Run SSH tunnel in the background
  command: ssh -f "{{ params['ssh_tunnel'].user }}"@"{{ params['ssh_tunnel'].host }}" -p "{{ params['ssh_tunnel'].port }}" -L "{{ params['ssh_tunnel'].forward }}" -N -i "{{ item.dest | default(item.src | basename) }}"
  with_items: "{{ params['ssh_tunnel'].keys_map }}"
  become: yes

Parameters:

ssh_tunnel:
  keys_map:
    - src: '../../../private/id_rsa'
      dest: ~/.ssh/id_rsa
  host: 'some.domain.com'
  forward: '3308:some:3306'
  port: 32642
  user: tunnel


Check as well: Using PuTTY to connect to an SSH tunnel

Vagrant: Relative paths in Ansible roles

The default path to the current working directory for any role is the role directory itself!

So if you reference a file inside a role task (let's say with the copy module, for example) and the file is not in that role directory, then you have to use parent paths (../).

Directory structure:

D:/
└── devenv/
    ├── ansible/
    │   └── roles/
    │       └── autossh/
    │           └── tasks/
    │               └── main.yml
    ├── private/
    │   └── myfile.txt
    └── Vagrantfile

In this case the cwd is autossh and the relative path to the file is: ../../../private/myfile.txt

You cannot use relative paths that point outside the Vagrantfile folder: when you do vagrant up, that folder is mapped to a directory in the VM and the files are referenced there, so nothing outside that directory will work unless it already exists in the VM.

Plentymarkets: the worst shopping system ever!

At my work we often deal with shopping systems to get orders or customer data and send it to our API. Most of the time, using the different shopping systems' APIs is easy and quick... except for Plentymarkets: it is a nightmare!!!


While I don't want to spend more time writing about my experience with this shopping system (because I already spent too much time trying to fix small problems with it), I will just make a quick bullet list of bugs/issues I had with them:

  1. They keep changing the API so often, and the changes are so big that they break your code!
  2. Their documentation is shit and most of it is auto-generated.
  3. They send you emails with non working links!
  4. In order to change the shop admin password, you need to call them!! They don’t have a reset password link!
  5. In other shopping systems you have access to a demo account where you can test your API calls, with Plentymarkets they only give you a limited 1 month account that you have to pay for afterwards!
  6. They are advertising the use of REST API instead of SOAP but it never happened! The docs are confusing :/
  7. The admin interface UI and design are awful and not user-friendly. Serious shop systems spend some time and money to get a better UX; unfortunately, that doesn't seem to be the case with Plentymarkets.
  8. While using a PHP framework is a good idea, Plentymarkets is using Laravel to power (at least) their REST API. Laravel is built on top of Symfony, which is a much greater framework.
    However, when I was playing with the rest/orders endpoint I got this uncaught exception:

    {
    "message": "Type error: Argument 1 passed to Illuminate\\Validation\\Factory::make() must be of the type array, integer given, called in /var/www3/plenty/stable7_a/pl/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php on line 219",
    "status_code": 500
    }
  9. TBC..

A little history about Plentymarkets:
It started out as a contract job for a few eBay PowerSellers. The development began in 2001 and initially focused on providing an interface to eBay, the ability to process orders and an online store. Back then, the software was still called plentyShop.
It developed into a shop system very quickly and then it became an all-in-one solution for managing important e-commerce processes such as B2B and B2C, checkout, content management, invoicing, stock management, after sales management, fulfillment and returns.

Hide password on MinTTY and Cygwin

If you want to use SMB folder sharing for Vagrant on a Windows machine, you will need to run the CLI as administrator and then provide the login and password. The problem is that MinTTY and Cygwin do not support the no-echo TTY mode.

To fix this I use this code snippet:

# Capture username and password for smb synced folders (Windows)
# Note: assumes an OS helper module and a `parameters` hash defined earlier in the Vagrantfile
if OS.windows?
   puts "\e[36mDue to MinTTY and Cygwin not supporting the no-echo TTY mode"
   puts "it is necessary to request your account username and password"
   puts "at this stage in order to correctly setup SMB shared folders.\e[0m"
   puts

   if !parameters['smb_username']
      print "\e[35mEnter username:\e[0m "
      STDOUT.flush
      smb_username = STDIN.gets.chomp
   else
      puts "\e[32mUser name is already supplied in parameters.\e[0m"
      puts
      smb_username = parameters['smb_username']
   end

   if !parameters['smb_password']
      # 8m is the control code to hide characters
      print "\e[35mEnter password:\e[0m \e[0;8m"
      STDOUT.flush
      smb_password = STDIN.gets.chomp
      # 0m is the control code to reset formatting attributes
      puts "\e[0m"
      STDOUT.flush
   else
      puts "\e[32mUser password is already supplied in parameters.\e[0m"
      puts
      smb_password = parameters['smb_password']
   end
end

Fix ERR_ICANN_NAME_COLLISION

I have this Vagrant setup where I map my shared folder to a folder where I have all my projects. In Windows 10 I got this strange problem from Chrome showing:

ERR_ICANN_NAME_COLLISION

After a lot of research I found out the problem was in the hosts file.

I just had to write each entry on a new line and save; that stupidly fixed it!
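
In other words, the hosts file should end up looking something like this (the IP and domain names are just examples):

192.168.33.10 mydomain.dev
192.168.33.10 www.mydomain.dev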

Add subdomain to a Let’s Encrypt SSL certificate

First of all, add an entry to the DNS zone; you can do this in your hosting panel.

Record type = A

Target = IP of the server


Remove the existing certificate with these commands:

sudo rm -rf /etc/letsencrypt/live/example.com

sudo rm /etc/letsencrypt/renewal/example.com.conf

sudo rm -rf /etc/letsencrypt/archive/example.com

Stop server:

sudo service nginx stop

Create a new certificate:

cd /opt/letsencrypt
./letsencrypt-auto certonly --standalone

Then add all domains there, separated by spaces:

example.com www.example.com subdomain1.example.com sub2.example.com

Restart server:

sudo service nginx restart

Update nginx sites

sudo rm /etc/nginx/sites-enabled/example.com

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/