Friday, April 2, 2021

Recent Questions - Server Fault

Windows 10 unique signature

Posted: 02 Apr 2021 09:23 PM PDT

I intend to use a combination of a unique Windows 10 signature and an account to identify my app's users, so that they cannot use multiple accounts on the same Windows installation, even if they remove everything related to my app and reinstall it.

Is there a Windows 10 signature that changes only when Windows is reinstalled?

I have read many threads that suggest using the MachineGUID registry value, but this value can be changed by the user. Is there anything else that the user cannot change?

MySQL server won't accept remote connections

Posted: 02 Apr 2021 09:20 PM PDT

I have a server (CentOS 7) running a fresh install of MySQL server (14.14 Distrib 5.6.51) with a dummy database. I can connect from within the server and from remote computers in the network using an ssh tunnel. Sadly, I can't use ssh tunnels for prod and I really need this server to allow remote connections.

So far, I have:

I'm completely out of ideas. What am I missing?
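For anyone comparing notes, a minimal checklist sketch of the usual suspects on a stock MySQL 5.6 / CentOS 7 box (the app user and database names below are illustrative, not from the question):

# 1) make mysqld listen beyond localhost: in /etc/my.cnf, [mysqld] section
#    bind-address = 0.0.0.0   # or comment the directive out entirely

# 2) create a user that is allowed to connect from remote hosts
#    mysql> CREATE USER 'app'@'%' IDENTIFIED BY 'secret';
#    mysql> GRANT ALL PRIVILEGES ON dummydb.* TO 'app'@'%';

# 3) open the port in firewalld (the CentOS 7 default firewall)
firewall-cmd --permanent --add-service=mysql && firewall-cmd --reload

# 4) confirm what mysqld is actually bound to
ss -tlnp | grep 3306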

Restricting OpenVPN connection to only one program

Posted: 02 Apr 2021 09:04 PM PDT

My Linux server works with YouTube videos a lot, downloading and collecting data about them with the help of youtube-dl.

I want to route this program (youtube-dl) through an OpenVPN connection. Seems simple enough, except I need only youtube-dl to use the VPN connection, as I also run a web server among other things which cannot use the VPN connection.

How can I configure my server to only use the OpenVPN connection for youtube-dl?

Ideally without a separate VM.
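One commonly suggested approach, sketched here with illustrative names, is policy routing keyed on the process owner: run youtube-dl as a dedicated user, mark that user's packets, and route marked traffic through the tun device. The OpenVPN client config would also need route-nopull so it doesn't replace the default route, and rp_filter may need loosening depending on the distro.

# dedicated user whose traffic goes through the VPN
useradd -r ytdl

# mark packets created by that user
iptables -t mangle -A OUTPUT -m owner --uid-owner ytdl -j MARK --set-mark 0x1

# marked packets consult table 100, which defaults to the tunnel
ip rule add fwmark 0x1 table 100
ip route add default dev tun0 table 100

# source-NAT the marked traffic out of the tunnel address
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

# everything run as ytdl now uses the VPN; everything else does not
sudo -u ytdl youtube-dl '<video-url>'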

VLAN trunking on Juniper EX distribution / access switches

Posted: 02 Apr 2021 04:55 PM PDT

Consider three switches: Faculty, Physics, and Chemistry. Port 0 of Faculty connects to port 0 of Physics, and port 1 of Faculty connects to port 0 of Chemistry. In this example, Faculty is the distribution switch and the other two are access switches. Assume the following configuration:

Faculty:

set vlan SWITCHES vlan-id 100 l3-interface vlan.100
set vlan SWITCHES interface ge-0/0/0
set vlan SWITCHES interface ge-0/0/1
set interface vlan unit 100 family inet address 5.5.0.1/24
set routing-options static route 5.5.1.0/24 next-hop 5.5.0.2
set routing-options static route 5.5.2.0/24 next-hop 5.5.0.3

Physics:

set interface ge-0/0/0 unit 0 family inet address 5.5.0.2/24
set routing-options static route 0.0.0.0/0 next-hop 5.5.0.1
set vlans COMPUTERS vlan-id 100 l3-interface vlan.100
set interface vlan unit 100 family inet address 5.5.1.1/24

Chemistry:

set interface ge-0/0/0 unit 0 family inet address 5.5.0.3/24
set routing-options static route 0.0.0.0/0 next-hop 5.5.0.1
set vlans COMPUTERS vlan-id 100 l3-interface vlan.100
set interface vlan unit 100 family inet address 5.5.2.1/24

How do we configure VLAN trunking so that Physics and Chemistry can both use IP addresses from subnets 5.5.1.0/24 and 5.5.2.0/24?

I analyzed the Juniper documentation:

https://www.juniper.net/documentation/en_US/junos/topics/example/bridging-ex-series-connecting-access-to-distribution-switch.html

From their example, what I'm struggling with is:

set interfaces vlan unit 0 family inet address 192.0.2.2/25
set interfaces vlan unit 1 family inet address 192.0.2.130/25

This will result in IP addresses 192.0.2.2 and 192.0.2.130 being "reserved" for the switch. In my example, IP addresses 5.5.1.2 and 5.5.2.2 should remain usable for members of the COMPUTERS VLAN. Prior to the VLAN trunking, IP addresses .0, .1 and .255 of subnets 5.5.1.0/24 and 5.5.2.0/24 would be reserved for the network, gateway and broadcast addresses, respectively. Does this have to change after establishing the trunk?
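For comparison, the pattern in the linked Juniper example is to carry both user VLANs across the uplinks as 802.1Q trunks and route between them on Faculty. A rough sketch under those assumptions (VLAN names and IDs are illustrative, using the older EX port-mode syntax):

# on Faculty: define one VLAN per subnet and trunk both over each downlink
set vlans PHYSICS vlan-id 101
set vlans CHEMISTRY vlan-id 102
set interfaces ge-0/0/0 unit 0 family ethernet-switching port-mode trunk
set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members [ PHYSICS CHEMISTRY ]
set interfaces ge-0/0/1 unit 0 family ethernet-switching port-mode trunk
set interfaces ge-0/0/1 unit 0 family ethernet-switching vlan members [ PHYSICS CHEMISTRY ]

# routed VLAN interfaces keep .1 of each subnet as the gateway, as before
set interfaces vlan unit 101 family inet address 5.5.1.1/24
set interfaces vlan unit 102 family inet address 5.5.2.1/24
set vlans PHYSICS l3-interface vlan.101
set vlans CHEMISTRY l3-interface vlan.102

Under that scheme only the RVI addresses are reserved; hosts on Physics and Chemistry take addresses from 5.5.1.0/24 and 5.5.2.0/24 directly, so 5.5.1.2 and 5.5.2.2 would stay usable.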

Using 'netsh interface ip set address' to override another adapter's address?

Posted: 02 Apr 2021 04:39 PM PDT

I'm one of the developers of the OpenConnect VPN client.

We're trying to modernize and improve our support for Windows. We use a user-customizable JScript called vpnc-script-win.js to configure IPv4/6 addresses, routing, and DNS based on the configuration sent by the VPN server.

One of the things this script needs to do is to set specific IP addresses for the VPN network interface. We use the command netsh interface ip set address to do this for IPv4, like so:

run("netsh interface ip set address \"" + env("TUNIDX") + "\" static " +
    env("INTERNAL_IP4_ADDRESS") + " " + env("INTERNAL_IP4_NETMASK") + " store=active");

One of the limitations of the command is that it fails if the IP address in question is "in use" by another adapter (even if that adapter is not presently up-and-running):

[screenshot of the netsh error message]

What I'd like to know is if there is any way to override this behavior, and force the setting of an IP address that conflicts with the address set on another adapter. I haven't been able to find anything in Microsoft documentation that explains a way to do this programmatically.

However, from configuring network interfaces on Windows using the Control Panel GUI, I know that it is in fact possible to configure conflicting IP addresses on Windows, after clicking through a brief warning.

So: is there a way to set an IP address for an adapter on Windows even if another adapter is (allegedly) using that address? Ideally, this will be done using command-line tools, since we prefer to keep all of the IP/routing/DNS configuration in this user-customizable script rather than in our binary code.

Confusion about nginx cache status hit after stale when upstream is down

Posted: 02 Apr 2021 04:24 PM PDT

My upstream server is down and nginx is configured to serve stale content after the cache expires. On the first request we see STALE in the $upstream_cache_status response header, but on the next request we see HIT.

I thought all requests would be STALE since the upstream server is down. Am I correct, or am I missing something?

Here is my config:

add_header 'Cache-Control' "no-cache";
proxy_ignore_headers Cache-Control;
proxy_hide_header Cache-Control;
proxy_cache aaa_cache;

proxy_cache_lock on;
proxy_cache_lock_timeout 60s;
proxy_cache_lock_age 15s;
proxy_cache_revalidate on;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache_valid 200 3m;
proxy_cache_valid any 30s;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;

Thanks,

Apache Site Goes To Wrong Page If I Manually Enter HTTPS:// in the URL

Posted: 02 Apr 2021 09:27 PM PDT

If I type example.com into the URL bar it redirects to https://www.example.com, which is great; the same with http://example.com, which correctly redirects to https://www.example.com.

However, if I type https://example.com it takes me to the invalid-SSL-certificate warning page. After checking which domain the certificate was issued for, I realised it's taking me to a completely different website I have on the server (the same site that comes up if I type my VPS IP address directly into the URL bar).

Here's my config file:

<VirtualHost *:80>
    ServerName example.com
    Redirect permanent / https://www.example.com/
</VirtualHost>

<IfModule mod_ssl.c>
<VirtualHost *:443>
        ServerName www.example.com
        ServerAlias *.example.com
        ServerAdmin noreply@example.com
        DocumentRoot /var/www/html

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        Alias /static /home/user/example/static
        <Directory /home/user/example/static>
                Require all granted
        </Directory>

        <Directory /home/user/example/example>
                <Files wsgi.py>
                        Require all granted
                </Files>
        </Directory>

        WSGIScriptAlias / /home/user/example/example/wsgi.py
        WSGIDaemonProcess exsite python-path=/home/user/example python-home=/home/user/example/venv
        WSGIProcessGroup exsite

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^ https://www.example.com%{REQUEST_URI} [R=301,L]

        SSLCertificateFile /etc/letsencrypt/live/www.example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/www.example.com/privkey.pem
        Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>
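One hedged reading of this: https://example.com matches no :443 vhost's ServerName (ServerAlias *.example.com covers subdomains but not the bare apex), so Apache falls back to the first :443 vhost defined on the server. A sketch of a dedicated apex vhost that just redirects, assuming the existing certificate also lists example.com as a SAN:

<VirtualHost *:443>
    ServerName example.com
    # the cert must include example.com as a SAN, or the warning remains
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/www.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/www.example.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
    Redirect permanent / https://www.example.com/
</VirtualHost>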

Drive Mapping through GPO on a Few Servers

Posted: 02 Apr 2021 06:16 PM PDT

I have a requirement to fulfill related to Drive mapping through GPO.

We have a domain (let's call it contoso.com). Under contoso, we have an OU "InfraComputers" containing thousands of server computer objects. We also have multiple OUs named NorthAmerica, Europe, and Asia, each with multiple sub-OUs per site; these sub-OUs contain the user objects.

The requirement is to map drives on only two servers (let's call them ServerA and ServerB), so that no matter which user logs in to these two servers, they see the network drives mapped via GPO.

Note: The Drive Maps should not happen on any other servers.

My understanding is the following:

  1. Create a GPO and link it to InfraComputers
  2. Remove Authenticated Users from Security Filtering and add these 2 Computer Objects
  3. Create the Drive Mapping with appropriate Shares Paths
  4. Since drive mapping is a User Configuration setting and won't apply to computers unless loopback processing is enabled, I plan to enable Loopback in Merge mode

I am trying to understand whether this is a good way to implement drive mapping through GPO for a few servers, or whether there is a flaw in this approach. Also, is there a better way to map the drives?
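That matches the usual recipe. For steps 1 and 2, a hedged PowerShell sketch using the GroupPolicy RSAT module (the GPO name and DN are illustrative):

# create and link the GPO to the OU that holds the two servers
New-GPO -Name "ServerDriveMaps" |
    New-GPLink -Target "OU=InfraComputers,DC=contoso,DC=com"

# security filtering: keep Read for Authenticated Users (required since MS16-072),
# but grant Apply only to the two computer accounts
Set-GPPermission -Name "ServerDriveMaps" -TargetName "Authenticated Users" `
    -TargetType Group -PermissionLevel GpoRead -Replace
Set-GPPermission -Name "ServerDriveMaps" -TargetName "ServerA" `
    -TargetType Computer -PermissionLevel GpoApply
Set-GPPermission -Name "ServerDriveMaps" -TargetName "ServerB" `
    -TargetType Computer -PermissionLevel GpoApply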

Testing SSH rules on a locally hosted VM

Posted: 02 Apr 2021 04:34 PM PDT

I am practicing writing iptables rules and was looking to test the ones I wrote from an external network. I am currently running on an Ubuntu VM and have the following rules written:

sudo iptables -A INPUT -p tcp --dport 22 -s 10.10.1.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j DROP

The rules are meant to allow all SSH connections from devices on my local network and block external SSH connections. How would I go about testing that the 2nd rule works if I always access my VM from my local network?
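A couple of hedged options for testing without leaving the LAN (interface names and addresses below are illustrative):

# 1) watch the packet counters: the DROP rule's counters should increase
#    when a source outside 10.10.1.0/24 attempts a connection
sudo iptables -L INPUT -v -n --line-numbers

# 2) give the VM a second host-only adapter on a different subnet
#    (e.g. 192.168.56.0/24) and ssh to that address from the host;
#    the source is then outside 10.10.1.0/24 and should hit the DROP rule
ssh user@192.168.56.101   # illustrative VM address; expect a timeout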

grub-install fails to identify FS properly

Posted: 02 Apr 2021 03:30 PM PDT

I am installing Arch Linux in a VirtualBox VM and encountered the following error:

[screenshot of the grub-install error output]

I formatted /dev/sda as ext4, so I am not quite sure what's happening here, with grub-install (v2.04) complaining about the partition being ext2. Could it be a side effect of /dev/sda being emulated by VirtualBox (v6.1.16)? (The tutorial I followed mentions nothing specific about this.)

At one point, I thought the problem might be related to the --target architecture:

grub-install --target=<arch> /dev/sda  

Since I could find no x86_64-pc target, I am not quite sure what to check next. Any ideas?
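One detail worth knowing here (hedged, since the screenshot isn't visible): GRUB's filesystem driver is simply named "ext2" but reads ext3/ext4 as well, so that message alone is usually harmless. And for BIOS/MBR boot the platform target is i386-pc even on a 64-bit install; x86_64-* targets exist only for EFI. A sketch:

# BIOS boot on a 64-bit VirtualBox guest still uses the i386-pc target
grub-install --target=i386-pc /dev/sda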

Where to set up SSL key files for use in Docker services

Posted: 02 Apr 2021 05:06 PM PDT

On Debian servers we're supposed to store certificates in the /etc/ssl/certs dir, and key files in the /etc/ssl/private dir.

The problem is that SSL private key files are usually readable only by their owner. So I'm wondering: what is the best practice for making such a key readable by Docker containers?

I mean, I have a service running in a Docker container which needs to read the SSL certificate and key files in order to expose itself via HTTPS. With the default setup, I get "permission denied" when accessing the /etc/ssl/private/server.key file.

To sort this out I moved the file to another directory and set its permissions to 644. But is that right?
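One hedged alternative to loosening the key to 644: keep it at 640, give a dedicated group read access, and run the container process with that group. The UID and image name below are illustrative:

# Debian commonly ships an ssl-cert group for exactly this purpose
sudo chgrp ssl-cert /etc/ssl/private/server.key
sudo chmod 640 /etc/ssl/private/server.key

# mount read-only and add the group to the container process
docker run -d \
  --user "1000:$(getent group ssl-cert | cut -d: -f3)" \
  -v /etc/ssl/private/server.key:/run/secrets/server.key:ro \
  -v /etc/ssl/certs/server.crt:/run/secrets/server.crt:ro \
  my-https-service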

Any help would be appreciated

NGINX Reverse Proxy: What happens if someone browses to an IP instead of a URL

Posted: 02 Apr 2021 03:59 PM PDT

I have an nginx reverse proxy that acts as a one-to-many (single public IP) proxy for three other web servers.

I have all the server blocks set up to send requests to each server depending on which URL the client provides.

What happens if the client simply puts the reverse proxy's IP address in their browser instead of a URL? How does nginx determine where to send the traffic?

I just tried it, and it seems to send the traffic to the last server it forwarded traffic to.

How do I drop/deny traffic that does not match one of the three server blocks in my configuration (i.e. traffic that uses an IP instead of a URL)?

Update: For my configuration, here is the only conf file in sites-enabled:

######## Server1 ########
server {
        if ($host = server1.domain.com) {
                return 301 https://$host$request_uri;
        }

        listen 80;
        server_name server1.domain.com;
        return 404;
}

server {
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/server1.domain.com/fullchain.pem;   # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/server1.domain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf;                          # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;                            # managed by Certbot

        server_name server1.domain.com;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
                proxy_pass_request_headers on;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass https://192.168.20.2:443;
        }

        location ^~ /wp-login.php {
                satisfy any;
                allow 172.20.5.2;
                deny all;

                proxy_pass https://192.168.20.2:443;
        }
}

######## Server2 ########
server {
        if ($host = server2.domain.com) {
                return 301 https://$host$request_uri;
        }

        listen 80;
        server_name server2.domain.com;
        return 404;
}

server {
        listen 443 ssl http2;
        ssl_certificate /etc/letsencrypt/live/server2.domain.com/fullchain.pem;   # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/server2.domain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        server_name server2.domain.com;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
                proxy_pass_request_headers on;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass https://192.168.20.3:443;
        }
}

######## Server3 ########
server {
        if ($host = server3.domain.com) {
                return 301 https://$host$request_uri;
        }

        listen 80;
        server_name server3.domain.com;
        return 404;
}

server {
        listen 443 ssl http2;
        ssl_certificate /etc/letsencrypt/live/server3.domain.com/fullchain.pem;   # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/server3.domain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        server_name server3.domain.com;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
                proxy_pass_request_headers on;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass https://192.168.20.4:443;
        }
}

Nginx reverse proxy IP is 192.168.20.6

So what I am seeing is: if I put just the IP into my browser, nginx appears to go to the first server block in my conf file, which tracks with this link: https://nginx.org/en/docs/http/request_processing.html

And it does try to load server1 in my case, but since the serving of website content is based on the URL, it sorta breaks some features of my three web servers.

Looking at the link above, I see that I can employ a block like this at the beginning to block IP-only requests:

server {
    listen      80;
    listen      443;
    server_name "";
    return      444;
}
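That block should work for port 80, with one wrinkle for 443: the TLS handshake happens before nginx can answer, so the catch-all needs ssl and some certificate; marking it default_server also removes the dependence on block order. A sketch, reusing an existing certificate purely to satisfy the handshake:

server {
    listen 80  default_server;
    listen 443 ssl default_server;
    server_name "";

    # any cert satisfies the handshake; clients typing the raw IP get a
    # name-mismatch warning either way
    ssl_certificate     /etc/letsencrypt/live/server1.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/server1.domain.com/privkey.pem;

    return 444;
}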

Permission Denied in Squid Proxy Server

Posted: 02 Apr 2021 09:05 PM PDT

I have successfully installed and configured Squid Proxy Server 3.5.26 on Slackware 14.2. I'm currently logged in as the root user, and I get an error when initializing the cache with the command sudo squid -z:

WARNING: Cannot write log file: /var/log/squid/cache.log
/var/log/squid/cache.log: Permission Denied
messages will be sent to 'stderr'

2019/04/08 16:16:20 kid1| Set Current Directory to /var/log/squid/cache/squid
2019/04/08 16:16:20 kid1| Creating missing swap directories

FATAL: Failed to make swap directory /var/cache/squid: (13) Permission Denied
Squid Cache (Version 3.5.26): Terminated abnormally.
CPU Usage: 0.008 seconds = 0.006 user + 0.002 sys
Maximum Resident Size: 45392 KB
Page faults with physical i/o: 0

Even if I set the permissions of /var/log/squid to 777, the application is still unable to write cache.log.

Running:

squid -NCd1

The command outputs:

WARNING: Cannot write log file: /var/log/squid/cache.log
/var/log/squid/cache.log: Permission denied
messages will be sent to 'stderr'.
2019/04/08 17:26:44| Set Current Directory to /var/log/squid/cache/squid
WARNING: Cannot write log file: /var/log/squid/cache.log
/var/log/squid/cache.log: Permission denied
messages will be sent to 'stderr'.
2019/04/08 17:26:44| WARNING: Closing open FD 2
2019/04/08 17:26:44| Starting Squid Cache version 3.5.26 for x86_64-slackware-linux-gnu...
2019/04/08 17:26:44| Service Name: squid
2019/04/08 17:26:44| Process ID 1669
2019/04/08 17:26:44| Process Roles: master worker
2019/04/08 17:26:44| With 1024 file descriptors available
2019/04/08 17:26:44| Initializing IP Cache...
2019/04/08 17:26:44| parseEtcHosts: /etc/hosts: (13) Permission denied
2019/04/08 17:26:44| DNS Socket created at [::], FD 8
2019/04/08 17:26:44| DNS Socket created at 0.0.0.0, FD 9
2019/04/08 17:26:44| /etc/resolv.conf: (13) Permission denied
2019/04/08 17:26:44| Warning: Could not find any nameservers. Trying to use localhost
2019/04/08 17:26:44| Please check your /etc/resolv.conf file
2019/04/08 17:26:44| or use the 'dns_nameservers' option in squid.conf.
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_ACCESS_DENIED': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_ACCESS_DENIED
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_CACHE_ACCESS_DENIED': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_CACHE_ACCESS_DENIED
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_CACHE_MGR_ACCESS_DENIED': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_CACHE_MGR_ACCESS_DENIED
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_FORWARDING_DENIED': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_FORWARDING_DENIED
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_NO_RELAY': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_NO_RELAY
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_CANNOT_FORWARD': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_CANNOT_FORWARD
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_READ_TIMEOUT': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_READ_TIMEOUT
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_LIFETIME_EXP': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_LIFETIME_EXP
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_READ_ERROR': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_READ_ERROR
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_WRITE_ERROR': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_WRITE_ERROR
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_CONNECT_FAIL': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_CONNECT_FAIL
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_SECURE_CONNECT_FAIL': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_SECURE_CONNECT_FAIL
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_SOCKET_FAILURE': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_SOCKET_FAILURE
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_DNS_FAIL': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_DNS_FAIL
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_URN_RESOLVE': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_URN_RESOLVE
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_ONLY_IF_CACHED_MISS': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_ONLY_IF_CACHED_MISS
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_TOO_BIG': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_TOO_BIG
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_INVALID_RESP': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_INVALID_RESP
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_UNSUP_HTTPVERSION': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_UNSUP_HTTPVERSION
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_INVALID_REQ': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_INVALID_REQ
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_UNSUP_REQ': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_UNSUP_REQ
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_INVALID_URL': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_INVALID_URL
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_ZERO_SIZE_OBJECT': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_ZERO_SIZE_OBJECT
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_PRECONDITION_FAILED': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_PRECONDITION_FAILED
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_CONFLICT_HOST': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_CONFLICT_HOST
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_FTP_DISABLED': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_FTP_DISABLED
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_FTP_UNAVAILABLE': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_FTP_UNAVAILABLE
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_FTP_FAILURE': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_FTP_FAILURE
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_FTP_PUT_ERROR': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_FTP_PUT_ERROR
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_FTP_NOT_FOUND': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_FTP_NOT_FOUND
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_FTP_FORBIDDEN': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_FTP_FORBIDDEN
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_FTP_PUT_CREATED': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_FTP_PUT_CREATED
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_FTP_PUT_MODIFIED': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_FTP_PUT_MODIFIED
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_ESI': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_ESI
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_ICAP_FAILURE': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_ICAP_FAILURE
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_GATEWAY_FAILURE': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_GATEWAY_FAILURE
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_DIR_LISTING': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_DIR_LISTING
2019/04/08 17:26:44| '/usr/share/squid/errors/templates/ERR_SHUTTING_DOWN': (13) Permission denied
2019/04/08 17:26:44| WARNING: failed to find or read error text file ERR_SHUTTING_DOWN
2019/04/08 17:26:44| Logfile: opening log daemon:/var/log/squid/access.log
2019/04/08 17:26:44| Logfile Daemon: opening log /var/log/squid/access.log
2019/04/08 17:26:44| ipcCreate: /usr/libexec/log_file_daemon: (22) Invalid argument
2019/04/08 17:26:44| Unlinkd pipe opened on FD 15
2019/04/08 17:26:44| ipcCreate: /usr/libexec/unlinkd: (22) Invalid argument
2019/04/08 17:26:44| Store logging disabled
2019/04/08 17:26:44| Swap maxSize 262144 + 262144 KB, estimated 40329 objects
2019/04/08 17:26:44| Target number of buckets: 2016
2019/04/08 17:26:44| Using 8192 Store buckets
2019/04/08 17:26:44| Max Mem size: 262144 KB
2019/04/08 17:26:44| Max Swap size: 262144 KB
2019/04/08 17:26:44| ERROR: /var/cache/squid: (13) Permission denied
FATAL: Failed to verify one of the swap directories, Check cache.log for details.
Run 'squid -z' to create swap directories if needed, or if running Squid for the first time.
Squid Cache (Version 3.5.26): Terminated abnormally.
CPU Usage: 0.023 seconds = 0.016 user + 0.007 sys
Maximum Resident Size: 58800 KB
Page faults with physical i/o: 0

Create Google Cloud Managed SSL Certificate for a subdomain

Posted: 02 Apr 2021 06:00 PM PDT

I have my main domain www.example.com hosted on Route 53 on AWS.

I've created the custom subdomain sub.example.com on Google Cloud and set the appropriate NS records.

What I want to do now is create a new managed SSL certificate for this subdomain:

[screenshot: Google Cloud console form for creating a managed SSL certificate]

Is this possible? And is it good practice, given that I want to keep adding more subdomains like sub1.example.com and creating a certificate for each one? Since I am keeping example.com hosted at Route 53, I don't think I can create a single managed SSL certificate covering all of the possible subdomains that I may have on Google Cloud.
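For what it's worth, a managed certificate can also be created per subdomain with the gcloud CLI; a sketch (the certificate name is illustrative):

# one managed certificate for the subdomain, attached later to a
# global load balancer's target HTTPS proxy
gcloud compute ssl-certificates create sub-example-cert \
    --domains=sub.example.com \
    --global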

How to understand dnsmasq logs?

Posted: 02 Apr 2021 08:05 PM PDT

I have dnsmasq installed on a server and it is heavily used. I get the following dnsmasq entries in /var/log/syslog when sending the process the SIGUSR1 signal:

Jul 16 13:45:50 server1 dnsmasq[427008]: time 1531748750
Jul 16 13:45:50 server1 dnsmasq[427008]: cache size 10000, 18355/597070 cache insertions re-used unexpired cache entries.
Jul 16 13:45:50 server1 dnsmasq[427008]: queries forwarded 1510313, queries answered locally 15347110
Jul 16 13:45:50 server1 dnsmasq[427008]: queries for authoritative zones 0
Jul 16 13:45:50 server1 dnsmasq[427008]: server 100.1.2.3#53: queries sent 242802, retried or failed 0
Jul 16 13:45:50 server1 dnsmasq[427008]: server 100.2.3.4#53: queries sent 1371704, retried or failed 0

I have set the cache size to 10000, which is the maximum allowed value. The application which uses dnsmasq also feels slow. How do I interpret these dnsmasq log entries, and do they indicate a performance problem caused by a cache that is too small?

Exchange allows anonymous internal relay by default, is that best practice?

Posted: 02 Apr 2021 10:01 PM PDT

It is surprising how many customers I see that make a specific receive connector for certain remote (internal network) IP addresses to allow anonymous internal relay.

It became surprising to me (and to them) after learning that Exchange allows anonymous relay internally by default, effectively making that additional receive connector totally superfluous.

This has been the default behavior since at least Exchange 2010 as far as I can see.

Now I'm wondering:

  • Is it really so fine/secure to allow anonymous relay internally by default (security is the reason why customers create a separate connector in the first place; so they can limit this to only a few internal devices/applications)?

  • What is the best practice? Should internal relay be limited in some way and if so, how? Normally it's a best practice not to modify the default connectors.
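For reference, the effective defaults can be inspected from the Exchange Management Shell before deciding to restrict anything; a hedged sketch (the connector identity "EXCH01\Default EXCH01" is illustrative):

# which permission groups each receive connector grants
Get-ReceiveConnector | Format-Table Name, PermissionGroups

# what the anonymous account can actually do on a given connector
Get-ReceiveConnector "EXCH01\Default EXCH01" |
    Get-ADPermission |
    Where-Object { $_.User -like '*ANONYMOUS*' } |
    Format-Table User, ExtendedRights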

Data Transfer Pauses on LSI 9271 RAID Controller

Posted: 02 Apr 2021 05:15 PM PDT

I have a server equipped with an LSI 9271-8i RAID controller, with 4 x 4 TB drives organized as RAID-5 and 1 x 8 TB drive as JBOD (which is called RAID-0 in the controller).

When I copy larger amounts of data (~1 TB), I observe the following: for the first few gigabytes the transfer speed is fine and limited by the disk or network speeds, usually ~100 MB/s. But after a while, the transfer pauses completely for approx. 20-30 seconds and then continues with the next approx. 1 GB. I copy a lot of files of between 10 MB and 500 MB each, and during a pause robocopy stays on a file and continues to the next one after the pause. That way the overall transfer rate drops to ~20 MB/s.

During the pause, browsing the drives' files is not possible, and in one case I received a controller reset error message ("Controller encountered a fatal error and was reset"). Accessing controller data with the CLI tool is also not possible during a pause (the result is displayed once the pause is over).

I could observe this behaviour when copying:

  • gigabit network to RAID-5 volume
  • gigabit network to JBOD volume
  • JBOD to RAID-5
  • RAID-5 to JBOD

There is nothing going on that looks suspicious to me: temperatures (disks, BBU) are within the valid range, controller temp seems a bit high, but also within specs. No checks are running on the RAID, no rebuild in progress.

Any guesses?

Before I replace the controller, I want to try optimizing the thermal situation. Does this behaviour sound like a possible thermal issue?

I find it strange that the first 20-30 GB copy fine and the pauses do not occur before that. If I leave the server alone for a while and retry, again a few GB are copied fine. The only naive explanation I have is that the controller gets too hot. But why the controller and not the disks? The RAID-5 disks are 7200 rpm and stacked very closely, while the single JBOD disk is 5400 rpm with a lot of air around it. It would be strange if both showed the same overheating symptoms.

Tomcat ajp13 connector returns 404 for all requests

Posted: 02 Apr 2021 05:02 PM PDT

I'm sure the answer is simple, but I'm tearing my hair out. I'm rebuilding an Apache httpd + Tomcat site on a new Ubuntu 16 system with Apache 2.4 and Tomcat 8.5.23. Tomcat is running and the ajp13 connector appears to be up, but requests to http://abilities.tld/ give a 404. The 404 shows up in mod_jk.log as originating from Tomcat, so it's not a simple httpd 404.

Abbreviated vhost config:

<VirtualHost 1.1.1.1:80>
    DocumentRoot "/home/user/subdir/html"
    ServerName abilities.tld
    ServerAlias www.abilities.tld
    <Directory "/home/user/subdir/html">
            allow from all
            Options ExecCGI Includes FollowSymLinks
            AllowOverride All
            Require all granted
    </Directory>

    SuexecUserGroup user user

    JkMount /* ajp13

    # Directives to enable Apache to continue serving applications dependent on it.
    SetEnvIf Request_URI "/webmail*" no-jk
    SetEnvIf Request_URI "/mail*" no-jk
    SetEnvIf Request_URI "/awstats*" no-jk
    SetEnvIf Request_URI "/myadmin*" no-jk
    SetEnvIf Request_URI "/pgadmin*" no-jk
    SetEnvIf Request_URI "/cgi-bin*" no-jk

    #DirectoryIndex /abilities


Apache vhost log:

2.2.2.2 - - [16/Nov/2017:14:08:18 +1100] "GET / HTTP/1.1" 404 164

Visiting http://1.1.1.1:8080/host-manager/html works fine and shows the virtual hosts

The catalina log shows the startup deployment of WARs as expected.

In server.xml:

<Host appBase="/home/user/subdir/html" name="abilities.tld" unpackWARs="true" autoDeploy="true">
        <Alias>www.abilities.tld</Alias>
        <Context path="" docBase="abilities" debug="0" reloadable="true"/>
        <Logger className="org.apache.catalina.logger.FileLogger" prefix="abilities.tld" suffix=".txt" timestamp="true"/>
</Host>

and in /var/log/apache2/mod_jk.log:

[Thu Nov 16 14:46:53.105 2017] [26722:140421309081472] [debug] map_uri_to_worker_ext::jk_uri_worker_map.c (1179): Attempting to map URI '/' from 1 maps
[Thu Nov 16 14:46:53.105 2017] [26722:140421309081472] [debug] find_match::jk_uri_worker_map.c (978): Attempting to map context URI '/*=ajp13' source 'JkMount'
[Thu Nov 16 14:46:53.105 2017] [26722:140421309081472] [debug] find_match::jk_uri_worker_map.c (991): Found a wildchar match '/*=ajp13'
[Thu Nov 16 14:46:53.105 2017] [26722:140421309081472] [debug] jk_handler::mod_jk.c (2823): Into handler jakarta-servlet worker=ajp13 r->proxyreq=0
[Thu Nov 16 14:46:53.105 2017] [26722:140421309081472] [debug] wc_get_worker_for_name::jk_worker.c (120): found a worker ajp13
[Thu Nov 16 14:46:53.105 2017] [26722:140421309081472] [debug] wc_get_name_for_type::jk_worker.c (304): Found worker type 'ajp13'
[Thu Nov 16 14:46:53.105 2017] [26722:140421309081472] [debug] init_ws_service::mod_jk.c (1196): Service protocol=HTTP/1.1 method=GET ssl=false host=(null) addr=1.178.147.184 name=abilities.tld port=80 auth=(null) user=(null) laddr=103.18.42.17 raddr=1.178.147.184 uaddr=1.178.147.184 uri=/
[Thu Nov 16 14:46:53.105 2017] [26722:140421309081472] [debug] ajp_get_endpoint::jk_ajp_common.c (3351): (ajp13) acquired connection pool slot=0 after 0 retries
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] ajp_marshal_into_msgb::jk_ajp_common.c (684): (ajp13) ajp marshaling done
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] ajp_service::jk_ajp_common.c (2586): processing ajp13 with 2 retries
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] jk_shutdown_socket::jk_connect.c (932): About to shutdown socket 37 [127.0.0.1:50968 -> 127.0.0.1:8009]
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] jk_is_input_event::jk_connect.c (1406): error event during poll on socket 37 [errno=107] (event=24)
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] jk_shutdown_socket::jk_connect.c (1016): Shutdown socket 37 [127.0.0.1:50968 -> 127.0.0.1:8009] and read 0 lingering bytes in 0 sec.
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] ajp_send_request::jk_ajp_common.c (1639): (ajp13) failed sending request, socket 37 is not connected any more (errno=0)
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] ajp_abort_endpoint::jk_ajp_common.c (821): (ajp13) aborting endpoint with socket 37
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] ajp_send_request::jk_ajp_common.c (1717): (ajp13) no usable connection found, will create a new one, detected by connect check (1), cping (0), send (0).
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] jk_open_socket::jk_connect.c (675): socket TCP_NODELAY set to On
[Thu Nov 16 14:46:53.106 2017] [26722:140421309081472] [debug] jk_open_socket::jk_connect.c (799): trying to connect socket 37 to 127.0.0.1:8009
[Thu Nov 16 14:46:53.107 2017] [26722:140421309081472] [debug] jk_open_socket::jk_connect.c (825): socket 37 [127.0.0.1:58717 -> 127.0.0.1:8009] connected
[Thu Nov 16 14:46:53.107 2017] [26722:140421309081472] [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1267): sending to ajp13 pos=4 len=704 max=8192
......
[Thu Nov 16 14:46:53.172 2017] [26722:140421309081472] [debug] ajp_unmarshal_response::jk_ajp_common.c (739): (ajp13) status = 404

Where else can I look to figure out why tomcat is giving 404?

Thanks

Segfault error 4 in libmysqlclient.so

Posted: 02 Apr 2021 07:01 PM PDT

For a few days now, I have been seeing this kind of message in my syslog:

Sep 23 14:28:42 server kernel: [138926.637593] php5-fpm[6455]: segfault at 7f9ade735018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:28:44 server kernel: [138928.314016] php5-fpm[22742]: segfault at 7f9ade3db018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:32:11 server kernel: [139135.318287] php5-fpm[16887]: segfault at 7f9ade4b3018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:32:49 server kernel: [139173.050377] php5-fpm[668]: segfault at 7f9ade61a018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:33:19 server kernel: [139203.396935] php5-fpm[26277]: segfault at 7f9ade6c0018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:35:06 server kernel: [139310.048740] php5-fpm[27017]: segfault at 7f9ade46c018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:35:19 server kernel: [139323.494188] php5-fpm[31263]: segfault at 7f9ade5e2018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:36:10 server kernel: [139374.904308] php5-fpm[26422]: segfault at 7f9ade6cf018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]
Sep 23 14:37:25 server kernel: [139449.360384] php5-fpm[20806]: segfault at 7f9ade644018 ip 00007f9ae4026772 sp 00007ffd69b4fad0 error 4 in libmysqlclient.so.18.0.0[7f9ae3ff9000+2f1000]

I'm using Debian 8 and MariaDB. In the beginning it happened only once every 2 or 3 hours, but now it's several times per hour. From my research I understand it could be a memory problem, but I haven't found any solution.

This is what I see in mysqltuner:

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +Aria +BLACKHOLE +CSV +FEDERATED +InnoDB +MRG_MyISAM
[--] Data in InnoDB tables: 2G (Tables: 79)
[--] Data in MyISAM tables: 96M (Tables: 146)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[!!] Total fragmented tables: 34

-------- Security Recommendations  -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 1d 16h 44m 38s (73M q [502.853 qps], 196K conn, TX: 572B, RX: 14B)
[--] Reads / Writes: 97% / 3%
[--] Total buffers: 17.3G global + 56.2M per thread (500 max threads)
[!!] Maximum possible memory usage: 44.8G (142% of installed RAM)
[OK] Slow queries: 0% (2K/73M)
[OK] Highest usage of available connections: 28% (141/500)
[OK] Key buffer size / total MyISAM indexes: 1.0G/32.6M
[OK] Key buffer hit rate: 100.0% (132M cached / 53K reads)
[OK] Query cache efficiency: 44.9% (50M cached / 113M selects)
[!!] Query cache prunes per day: 260596
[OK] Sorts requiring temporary tables: 0% (2K temp sorts / 2M sorts)
[OK] Temporary tables created on disk: 21% (6K on disk / 28K total)
[OK] Thread cache hit rate: 99% (141 created / 196K connections)
[OK] Table cache hit rate: 72% (500 open / 692 opened)
[OK] Open file limit used: 17% (429/2K)
[OK] Table locks acquired immediately: 99% (25M immediate / 25M locks)
[OK] InnoDB buffer pool / data size: 16.0G/2.4G
[!!] InnoDB log waits: 30

So the maximum possible memory usage is too high, but I adjusted my InnoDB buffer pool size to 16 GB; with 32 GB of RAM that should be OK. I don't know what to do to optimize this.

The thing is, the server's general memory usage is always under 89% (plus caching). MySQL is currently using 50.6% of RAM. I don't know if there is a link between all of this, but I prefer to mention it here. Otherwise, everything seems to be OK on the MySQL side...

Finally, these are the main variables I adjusted in my.cnf which could have an effect on this:

max_connections = 100
max_heap_table_size = 64M
read_buffer_size = 4M
read_rnd_buffer_size = 32M
sort_buffer_size = 8M
query_cache_size = 256M
query_cache_limit = 4M
query_cache_type = 1
query_cache_strip_comments = 1
thread_stack = 192K
transaction_isolation = READ-COMMITTED
tmp_table_size = 64M
nnodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 16G
thread_cache_size   = 4M
max_connections     = 500
join_buffer_size    = 12M
interactive_timeout = 30
wait_timeout        = 30
open_files_limit    = 800
innodb_file_per_table
key_buffer_size     = 1G
table_open_cache    = 500
innodb_log_file_size    = 256M

Two days ago the server crashed with no reason in syslog except the segfaults. Can a segfault crash the system? Any idea what causes the segfaults, or a few ways to track down the origin of the problem?

Can't add more servers to Galera Cluster: [ERROR] /usr/sbin/mysqld: unknown option '--.'

Posted: 02 Apr 2021 05:02 PM PDT

I've tried to set up a MariaDB cluster with Galera on my Debian 8 servers.

When I try to add servers to the one that already exists in the cluster with:

systemctl start mysql   

I get this error:

Job for mariadb.service failed. See 'systemctl status mariadb.service' and 'journalctl -xn' for details.

I found the line in the journalctl -xn output:

Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [ERROR] /usr/sbin/mysqld: unknown option '--.'    

So I opened /etc/init.d/mysql with nano, but couldn't find -- anywhere.

Does anyone know how to solve this?


Here is the result of systemctl status mariadb.service:

● mariadb.service - MariaDB database server
   Loaded: loaded (/lib/systemd/system/mariadb.service; enabled)
  Drop-In: /etc/systemd/system/mariadb.service.d
           └─migrated-from-my.cnf-settings.conf
   Active: failed (Result: exit-code) since Mi 2016-10-12 09:35:29 CEST; 4min 36s ago
  Process: 5272 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=1/FAILURE)
  Process: 5267 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
  Process: 5266 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
 Main PID: 4281 (code=exited, status=0/SUCCESS)

Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Note] InnoDB: 128 rollback segment(s) are active.
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Note] InnoDB: Waiting for purge to start
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.32-78.1 started; log sequence number 1616869
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Warning] InnoDB: Skipping buffer pool dump/restore during wsrep recovery.
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Note] Plugin 'FEEDBACK' is disabled.
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [ERROR] /usr/sbin/mysqld: unknown option '--.'
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [ERROR] Aborting'
Okt 12 09:35:29 node2 systemd[1]: mariadb.service: control process exited, code=exited status=1
Okt 12 09:35:29 node2 systemd[1]: Failed to start MariaDB database server.
Okt 12 09:35:29 node2 systemd[1]: Unit mariadb.service entered failed state.

Here is the result of journalctl -xn:

-- Logs begin at Di 2016-10-11 16:10:14 CEST, end at Mi 2016-10-12 09:35:29 CEST. --
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Note] InnoDB: 128 rollback segment(s) are active.
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Note] InnoDB: Waiting for purge to start
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.32-78.1 started
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Warning] InnoDB: Skipping buffer pool dump/restore during wsrep recovery.
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [Note] Plugin 'FEEDBACK' is disabled.
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [ERROR] /usr/sbin/mysqld: unknown option '--.'
Okt 12 09:35:29 node2 sh[5272]: 2016-10-12  9:35:27 3045435776 [ERROR] Aborting'
Okt 12 09:35:29 node2 systemd[1]: mariadb.service: control process exited, code=exited status=1
Okt 12 09:35:29 node2 systemd[1]: Failed to start MariaDB database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Okt 12 09:35:29 node2 systemd[1]: Unit mariadb.service entered failed state.

Here is /etc/init.d/mysql:

#                    and its wrapper script "mysqld_safe".
### END INIT INFO
#
set -e
set -u
${DEBIAN_SCRIPT_DEBUG:+ set -v -x}

test -x /usr/sbin/mysqld || exit 0

. /lib/lsb/init-functions

SELF=$(cd $(dirname $0); pwd -P)/$(basename $0)
CONF=/etc/mysql/my.cnf
MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"

# priority can be overriden and "-s" adds output to stderr
ERR_LOGGER="logger -p daemon.err -t /etc/init.d/mysql -i"

# Safeguard (relative paths, core dumps..)
cd /
umask 077

# mysqladmin likes to read /root/.my.cnf. This is usually not what I want
# as many admins e.g. only store a password without a username there and
# so break my scripts.
export HOME=/etc/mysql/

# Source default config file.
[ -r /etc/default/mariadb ] && . /etc/default/mariadb

## Fetch a particular option from mysql's invocation.
#
# Usage: void mysqld_get_param option
mysqld_get_param() {
        /usr/sbin/mysqld --print-defaults \
                | tr " " "\n" \
                | grep -- "--$1" \
                | tail -n 1 \
                | cut -d= -f2
}

## Do some sanity checks before even trying to start mysqld.
sanity_checks() {
  # check for config file
  if [ ! -r /etc/mysql/my.cnf ]; then
    log_warning_msg "$0: WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz"
    echo "WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz" | $ERR_LOGGER
  fi

  # check for diskspace shortage
  datadir=`mysqld_get_param datadir`
  if LC_ALL=C BLOCKSIZE= df --portability $datadir/. | tail -n 1 | awk '{ exit ($4>4096) }'; then
    log_failure_msg "$0: ERROR: The partition with $datadir is too full!"
    echo "ERROR: The partition with $datadir is too full!" | $ERR_LOGGER
    exit 1
  fi
}

## Checks if there is a server running and if so if it is accessible.
#
# check_alive insists on a pingable server
# check_dead also fails if there is a lost mysqld in the process list
#
# Usage: boolean mysqld_status [check_alive|check_dead] [warn|nowarn]
mysqld_status () {
    ping_output=`$MYADMIN ping 2>&1`; ping_alive=$(( ! $? ))

    ps_alive=0
    pidfile=`mysqld_get_param pid-file`
    if [ -f "$pidfile" ] && ps `cat $pidfile` >/dev/null 2>&1; then ps_alive=1; fi

    if [ "$1" = "check_alive"  -a  $ping_alive = 1 ] ||
       [ "$1" = "check_dead"   -a  $ping_alive = 0  -a  $ps_alive = 0 ]; then
        return 0 # EXIT_SUCCESS
    else
        if [ "$2" = "warn" ]; then
            echo -e "$ps_alive processes alive and '$MYADMIN ping' resulted in\n$ping_output\n" | $ERR_LOGGER -p daemon.debug
        fi
        return 1 # EXIT_FAILURE
    fi
}

#
# main()
#

case "${1:-''}" in
  'start')
        sanity_checks;
        # Start daemon
        log_daemon_msg "Starting MariaDB database server" "mysqld"
        if mysqld_status check_alive nowarn; then
           log_progress_msg "already running"
           log_end_msg 0
        else
            # Could be removed during boot
            test -e /var/run/mysqld || install -m 755 -o mysql -g root -d /var/run/mysqld

            # Start MariaDB!
            /usr/bin/mysqld_safe "${@:2}" > /dev/null 2>&1 &

            # 6s was reported in #352070 to be too little
            for i in $(seq 1 "${MYSQLD_STARTUP_TIMEOUT:-60}"); do
                sleep 1
                if mysqld_status check_alive nowarn ; then break; fi
                log_progress_msg "."
            done
            if mysqld_status check_alive warn; then
                log_end_msg 0
                # Now start mysqlcheck or whatever the admin wants.
                output=$(/etc/mysql/debian-start)
                [ -n "$output" ] && log_action_msg "$output"
            else
                log_end_msg 1
                log_failure_msg "Please take a look at the syslog"
            fi
        fi
        ;;

  'stop')
        # * As a passwordless mysqladmin (e.g. via ~/.my.cnf) must be possible
        # at least for cron, we can rely on it here, too. (although we have
        # to specify it explicit as e.g. sudo environments points to the normal
        # users home and not /root)
        log_daemon_msg "Stopping MariaDB database server" "mysqld"
        if ! mysqld_status check_dead nowarn; then
          set +e
          shutdown_out=`$MYADMIN shutdown 2>&1`; r=$?
          set -e
          if [ "$r" -ne 0 ]; then
            log_end_msg 1
            [ "$VERBOSE" != "no" ] && log_failure_msg "Error: $shutdown_out"
            log_daemon_msg "Killing MariaDB database server by signal" "mysqld"
            killall -15 mysqld
            server_down=
            for i in `seq 1 600`; do
              sleep 1
              if mysqld_status check_dead nowarn; then server_down=1; break; fi
            done
            if test -z "$server_down"; then killall -9 mysqld; fi
          fi
        fi

        if ! mysqld_status check_dead warn; then
          log_end_msg 1
          log_failure_msg "Please stop MariaDB manually and read /usr/share/doc/mariadb-server-10.1/README.Debian.gz!"
          exit -1
        else
          log_end_msg 0
        fi
        ;;

  'restart')
        set +e; $SELF stop; set -e
        $SELF start
        ;;

  'reload'|'force-reload')
        log_daemon_msg "Reloading MariaDB database server" "mysqld"
        $MYADMIN reload
        log_end_msg 0
        ;;

  'status')
        if mysqld_status check_alive nowarn; then
          log_action_msg "$($MYADMIN version)"
        else
          log_action_msg "MariaDB is stopped."
          exit 3
        fi
        ;;

  'bootstrap')
        # Bootstrap the cluster, start the first node
        # that initiates the cluster
        log_daemon_msg "Bootstrapping the cluster" "mysqld"
        $SELF start "${@:2}" --wsrep-new-cluster
        ;;

  *)
        echo "Usage: $SELF start|stop|restart|reload|force-reload|status|bootstrap"
        exit 1
        ;;
esac

Here is the my.cnf:

# MariaDB database server configuration file.  #  # You can copy this file to one of:  # - "/etc/mysql/my.cnf" to set global options,  # - "~/.my.cnf" to set user-specific options.  #  # One can use all long options that the program supports.  # Run program with --help to get a list of available options and with  # --print-defaults to see which it would actually understand and use.  #  # For explanations see  # http://dev.mysql.com/doc/mysql/en/server-system-variables.html    # This will be passed to all mysql clients  # It has been reported that passwords should be enclosed with ticks/quotes  # escpecially if they contain "#" chars...  # Remember to edit /etc/mysql/debian.cnf when changing the socket location.  [client]  port            = 3306  socket          = /var/run/mysqld/mysqld.sock    # Here is entries for some specific programs  # The following values assume you have at least 32M ram    # This was formally known as [safe_mysqld]. Both versions are currently parsed.  [mysqld_safe]  socket          = /var/run/mysqld/mysqld.sock  nice            = 0    [mysqld]  #  # * Basic Settings  #  user            = mysql  pid-file        = /var/run/mysqld/mysqld.pid  socket          = /var/run/mysqld/mysqld.sock  port            = 3306  basedir         = /usr  datadir         = /var/lib/mysql  tmpdir          = /tmp  lc_messages_dir = /usr/share/mysql  lc_messages     = en_US  skip-external-locking  #  # Instead of skip-networking the default is now to listen only on  # localhost which is more compatible and is not less secure.  bind-address            = 127.0.0.1  #  # * Fine Tuning  #  max_connections         = 100  connect_timeout         = 5  wait_timeout            = 600  max_allowed_packet      = 16M  thread_cache_size       = 128  sort_buffer_size        = 4M  bulk_insert_buffer_size = 16M  tmp_table_size          = 32M  max_heap_table_size     = 32M  #  # * MyISAM  #  # This replaces the startup script and checks MyISAM tables if needed  # the first time they are touched. On error, make copy and try a repair.  myisam_recover_options = BACKUP  key_buffer_size         = 128M  #open-files-limit       = 2000  table_open_cache        = 400  myisam_sort_buffer_size = 512M  concurrent_insert       = 2  read_buffer_size        = 2M  read_rnd_buffer_size    = 1M  #  # * Query Cache Configuration  #  # Cache only tiny result sets, so we can fit more in the query cache.  query_cache_limit               = 128K  query_cache_size                = 64M  # for more write intensive setups, set to DEMAND or OFF  #query_cache_type               = DEMAND  #  # * Logging and Replication  #  # Both location gets rotated by the cronjob.  # Be aware that this log type is a performance killer.  # As of 5.1 you can enable the log at runtime!  #general_log_file        = /var/log/mysql/mysql.log  #general_log             = 1  #  # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.  #  # we do want to know about network errors and such  log_warnings            = 2  #  # Enable the slow query log to see queries with especially long duration  #slow_query_log[={0|1}]  slow_query_log_file     = /var/log/mysql/mariadb-slow.log  long_query_time = 10  #log_slow_rate_limit    = 1000  log_slow_verbosity      = query_plan    #log-queries-not-using-indexes  #log_slow_admin_statements  #  # The following can be used as easy to replay backup logs or for replication.  # note: if you are setting up a replication slave, see README.Debian about  #       other settings you may need to change.  
#server-id              = 1
#report_host            = master1
#auto_increment_increment = 2
#auto_increment_offset  = 1
log_bin                 = /var/log/mysql/mariadb-bin
log_bin_index           = /var/log/mysql/mariadb-bin.index
# not fab for performance, but safer
#sync_binlog            = 1
expire_logs_days        = 10
max_binlog_size         = 100M
# slaves
#relay_log              = /var/log/mysql/relay-bin
#relay_log_index        = /var/log/mysql/relay-bin.index
#relay_log_info_file    = /var/log/mysql/relay-bin.info
#log_slave_updates
#read_only
#
# If applications support it, this stricter sql_mode prevents some
# mistakes like inserting invalid dates etc.
#sql_mode               = NO_ENGINE_SUBSTITUTION,TRADITIONAL
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
default_storage_engine  = InnoDB
# you can't just change log file size, requires special procedure
#innodb_log_file_size   = 50M
innodb_buffer_pool_size = 256M
innodb_log_buffer_size  = 8M
innodb_file_per_table   = 1
innodb_open_files       = 400
innodb_io_capacity      = 400
innodb_flush_method     = O_DIRECT
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

#
# * Galera-related settings
#
[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
#binlog_format=row
#default_storage_engine=InnoDB
#innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0

[mysqldump]
quick
quote-names
max_allowed_packet      = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer              = 16M

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

The custom config I used and named galera.cnf:

[mysqld]
# mysql settings
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0

# galera settings
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="skyfillers_cluster"
wsrep_cluster_address=gcomm://192.168.2.162,192.168.2.164,192.168.2.163
wsrep_sst_method=rsync
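Once the nodes are started with this config, a quick sanity check is to ask any node for the cluster size; a minimal sketch, assuming the mysql client can log in as root locally:

mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"

A healthy three-node cluster reports 3; a node that came up outside the cluster reports 1.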

Which OS is running in my Docker container?

Posted: 02 Apr 2021 03:26 PM PDT

Most of the time, one of these two commands tells me which OS is running in my Docker container (Alpine, CentOS, etc.).

But this time, I can't tell:

bash-4.2$ uname -a
Linux 6fe5c6d1451c 2.6.32-504.23.4.el6.x86_64 #1 SMP Tue Jun 9 20:57:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

bash-4.2$ more /etc/issue
\S
Kernel \r on an \m

Is there any way to get a plain-text description of the OS it is running?
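For reference, most images ship a standard release file; a sketch of commands that usually identify the distribution (treat the list of paths as best-effort, since not every image has all of them):

cat /etc/os-release 2>/dev/null || \
cat /etc/redhat-release 2>/dev/null || \
cat /etc/alpine-release 2>/dev/null || \
cat /etc/lsb-release 2>/dev/null

In this particular case the kernel string is unhelpful (containers share the host kernel), but bash 4.2 plus a \S-style /etc/issue is typical of a RHEL/CentOS 7 base.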

Windows Task Scheduler doesn't show execution errors

Posted: 02 Apr 2021 10:01 PM PDT

I'm trying to make a PowerShell script that will run from time to time through the Windows Task Scheduler.

My problem is that the Task Scheduler does not display the task as failed, even though my script does fail.

These are the events that I get:

Level   Date and Time   Event ID    Task Category   Operational Code    Correlation Id
Information 15.03.2016 22:53:06 102 Task completed  (2) f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler successfully finished ""{F03232D8-4196-4425-88A9-722028F9700A}"" instance of the ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" task for user ""VEGAR-M4800\vegar""."
Information 15.03.2016 22:53:06 201 Action completed    (2) f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler successfully completed task ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" , instance ""{F03232D8-4196-4425-88A9-722028F9700A}"" , action ""StartPowerShellJob"" with return code 0."
Information 15.03.2016 22:53:03 200 Action started  (1) f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler launched action ""StartPowerShellJob"" in instance ""{F03232D8-4196-4425-88A9-722028F9700A}"" of task ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task""."
Information 15.03.2016 22:53:03 100 Task Started    (1) f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler started ""{F03232D8-4196-4425-88A9-722028F9700A}"" instance of the ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" task for user ""VEGAR-M4800\vegar""."
Information 15.03.2016 22:53:03 129 Created Task Process    Info    00000000-0000-0000-0000-000000000000    "Task Scheduler launch task ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" , instance ""powershell.exe""  with process ID 12780."
Information 15.03.2016 22:53:03 107 Task triggered on scheduler Info    f03232d8-4196-4425-88a9-722028f9700a    "Task Scheduler launched ""{F03232D8-4196-4425-88A9-722028F9700A}""  instance of task ""\Microsoft\Windows\PowerShell\ScheduledJobs\My Scheduled Task"" due to a time trigger condition."

The Result.xml shows another story:

<Results_Error z:Assembly="0" z:Id="61" z:Type="System.Collections.ObjectModel.Collection`1[[System.Management.Automation.ErrorRecord, System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]">
  <items xmlns="http://schemas.datacontract.org/2004/07/System.Management.Automation" z:Assembly="0" z:Id="62" z:Type="System.Collections.Generic.List`1[[System.Management.Automation.ErrorRecord, System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]">
    <_items z:Id="63" z:Size="4">
      <ErrorRecord z:Assembly="System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" z:Id="64" z:Type="System.Management.Automation.Runspaces.RemotingErrorRecord">
        <CliXml xmlns="" z:Assembly="0" z:Id="65" z:Type="System.String">&lt;Objs Version=&quot;1.1.0.1&quot; xmlns=&quot;http://schemas.microsoft.com/powershell/2004/04&quot;&gt;
          &lt;Obj RefId=&quot;0&quot;&gt;
          &lt;ToString&gt;@{Exception=System.Management.Automation.RemoteException: Could not find a part of the path 'C:\source\flis\'.; TargetObject=; FullyQualifiedErrorId=FileOpenFailure,Microsoft.PowerShell.Commands.OutFileCommand; InvocationInfo=; ErrorCategory_Category=1; ErrorCategory_Activity=Out-File; ErrorCategory_Reason=DirectoryNotFoundException; ErrorCategory_TargetName=; ErrorCategory_TargetType=; ErrorCategory_Message=OpenError: (:) [Out-File], DirectoryNotFoundException; SerializeExtendedInfo=False; ErrorDetails_ScriptStackTrace=at &amp;lt;ScriptBlock&amp;gt;, &amp;lt;No file&amp;gt;: line 1_x000D__x000A_at Do-Backup, &amp;lt;No file&amp;gt;: line 66_x000D__x000A_at &amp;lt;ScriptBlock&amp;gt;, &amp;lt;No file&amp;gt;: line 83}&lt;/ToString&gt;
          ....
        </CliXml>
        <RemoteErrorRecord_OriginInfo xmlns="" z:Assembly="System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" z:Id="66" z:Type="System.Management.Automation.Remoting.OriginInfo">
          <_computerName xmlns="http://schemas.datacontract.org/2004/07/System.Management.Automation.Remoting" z:Id="67">localhost</_computerName>
          <_instanceId xmlns="http://schemas.datacontract.org/2004/07/System.Management.Automation.Remoting">00000000-0000-0000-0000-000000000000</_instanceId>
          <_runspaceID xmlns="http://schemas.datacontract.org/2004/07/System.Management.Automation.Remoting">975a8cbe-dbd1-43c0-80b4-19c282eee381</_runspaceID>
        </RemoteErrorRecord_OriginInfo>
      </ErrorRecord>
      <ErrorRecord i:nil="true"/>
      <ErrorRecord i:nil="true"/>
      <ErrorRecord i:nil="true"/>
    </_items>
    <_size>1</_size>
    <_version>1</_version>
  </items>
</Results_Error>

I have created the task itself by invoking the following inside a script:

function Create-Task
{
    Param($name, $description)

    $trigger = New-JobTrigger -Daily -At 1am
    $options = New-ScheduledJobOption -WakeToRun -RequireNetwork

    Register-ScheduledJob -Name $name -FilePath $MyInvocation.PSCommandPath -ArgumentList "Run", $MyInvocation.PSScriptRoot -Trigger $trigger -ScheduledJobOption $options
}

Any idea why I don't get a nice red error?
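For what it's worth, jobs registered with Register-ScheduledJob run inside a PowerShell job host, so Task Scheduler only sees whether powershell.exe itself exited cleanly; the script's errors land in the stored job results instead. A sketch of one way to surface failures, assuming the script body can be wrapped in a try/catch (Do-Backup is the function from the failing script):

try {
    Do-Backup    # the operation that currently fails silently
}
catch {
    # log the error somewhere visible, then exit non-zero; when the script
    # is scheduled directly via powershell.exe -File, a non-zero exit code
    # shows up as the task's Last Run Result
    $_ | Out-File "$env:TEMP\my-task-error.log" -Append
    exit 1
}

Alternatively, the stored results of the scheduled job can be read back with Get-Job / Receive-Job after Import-Module PSScheduledJob, which is where the red error text actually ends up.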

DKIM signature and hmailserver - Your DKIM signature is not valid

Posted: 02 Apr 2021 07:01 PM PDT

I'm trying to get DKIM working on a domain using hMailServer as the mail server, but I keep getting "Your DKIM signature is not valid".

This is from www.mail-tester.com

The DKIM signature of your message is:

v=1;
a=rsa-sha256;
d=twitterautopost.co.uk;
s=key1;
c=simple/simple;
q=dns/txt;
h=From:Subject:Date:Message-ID:To:MIME-Version:Content-Type:Content-Transfer-Encoding;
bh=UmH1WrduLl3abcMsoAbrWmKA9BGd7W0M4NcX7EZqo/s=;
b=ibp5puhoKfSAeV4GUxCAV+YQd25s2/GiBA7k7dFmSSIjpcwIyau54545SdTRmF8yLDCrVCkll+fa2VERESLpNWDnYyQXrW+BjhSTRcDOnYYKTCghkbNc4FaspEdRvuMzzL1OeQ7LFsrp9HJ6N0NheRIyDQAG4hFM=

Your public key is:

"k=rsa\\;
p=MIGfMA0GCSqGSIb3DQEBAQUAA4G6WPaESxPzkXT1jsC+ivorJij0gXi39XlBFJfB1/M+quGClJPNquvJAeEWZ7SOQI7pRPcvnwySapJMfjIlH7n23F4eDwkJO8lEvC5HhTM13Ecz5QVe+/jM8jUgGjuF57NsCmPXMXw6TAKcFUyNcK6plnfsRyzgBbGLZr2ishWZQIDAQAB"
Key length: 1024bits

Your DKIM signature is not valid

(I removed some characters from the keys)

I have made a TXT entry in the domain's DNS as follows:

key1._domainkey.twitterautopost.co.uk = k=rsa\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDM6WPaESxPzkXT1jsC+ivorJij0gXi39XlBFJfB1/M+quGClJPNquvJAeEWZ7SOQI7pRPcv3F4eDwkJO8lEvC5HhTM13Ecz5QVe+/jM8jUgGjuF57NsCmPXMXw6TAKcFUyNcK6plnfsRyzgBbGLZr2ishWZQIDAQAB  

Not sure what to try now.

Any help would be greatly appreciated. Thanks.
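For reference, it's worth comparing the key that was entered with what resolvers actually return; a sketch using dig against a public resolver:

dig +short TXT key1._domainkey.twitterautopost.co.uk @8.8.8.8

One detail visible in the report above: mail-tester shows the key starting with k=rsa\\; (a doubled backslash), while the record was entered with k=rsa\;. Some DNS control panels escape the backslash again, and a corrupted k= tag is a plausible reason the verifier rejects the signature; publishing the tag as plain k=rsa; (the backslash is not part of DKIM record syntax) may be worth trying.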

How to configure Nginx to support PHP-FPM on Mac OS X

Posted: 02 Apr 2021 08:05 PM PDT

I am making a website with a contact form, and I want to use PHP. The website is hosted on my simple local server using NGINX. I know NGINX uses FastCGI and PHP-FPM, but being new to this, I'm not sure how to configure everything to make it all work. This is what I have in my nginx.conf file:

server {
    listen       7070;
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    location / {
        root   /Users/vibhusharma/Sites/JCA;
        access_log "/Users/vibhusharma/Sites/JCA/jca_access.log";
        error_log  "/Users/vibhusharma/Sites/JCA/jca_errors.log";
        index  index.html index.htm;

        try_files $uri $uri/ /index.html =404;

        #fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #fastcgi_pass 127.0.0.1:9000;
        #fastcgi_index index.php;
        #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #fastcgi_buffers 256 128k;
        #fastcgi_connect_timeout 300s; fastcgi_send_timeout 300s;
        #fastcgi_read_timeout 300s;
        #include fastcgi_params;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        root           html;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        include        fastcgi_params;
    }
}

What else do I need to do to get my server to interpret my PHP files? Thank you!
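For reference, two values in the php location above come straight from the stock example config and point nowhere near this site: root html and the /scripts prefix in SCRIPT_FILENAME. A minimal sketch of that block adjusted to the site root used earlier in the same file, assuming PHP-FPM is running and listening on 127.0.0.1:9000:

location ~ \.php$ {
    root           /Users/vibhusharma/Sites/JCA;
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        fastcgi_params;
}

On macOS, Homebrew's php package includes php-fpm (brew services start php is one way to launch it); its default pool listens on 127.0.0.1:9000, though that is worth verifying in the pool config.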

How can I add a usb device to a domain xml from an image file?

Posted: 02 Apr 2021 06:00 PM PDT

I don't (repeat: DON'T) want to pass through a physical USB device. I'm creating a USB image file like so:

dd if=/dev/null bs=1K of=/tmp/test_usb.img seek=2040
mkfs.vfat /tmp/test_usb.img

I am then attempting to create the domain with the following xml:

<disk type="file" device="disk">
    <driver name="qemu" type="raw"/>
    <source file="/tmp/test_usb.img"/>
    <target dev="sda" bus="usb"/>
</disk>

Inside the Windows guest it shows up as a QEMU USB HARDDISK (or a similar name), but I cannot access it. I'm not intending for this to show up as a USB hard drive; I want it to behave like a plain old FAT32 USB stick.

How do I do this?

EDIT (added picture)

Properties of QEMU USB HARDDISK
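For reference, libvirt's <target> element accepts a removable attribute for bus="usb" disks (available since libvirt 1.1.3), which makes the guest see the disk as removable media rather than a fixed USB hard disk; a sketch under that assumption:

<disk type="file" device="disk">
    <driver name="qemu" type="raw"/>
    <source file="/tmp/test_usb.img"/>
    <target dev="sda" bus="usb" removable="on"/>
</disk>

Separately, mkfs.vfat on the raw image creates a partitionless (superfloppy) volume; if the guest still refuses to mount it, adding a partition table with a single FAT32 partition may behave more like a real stick.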

Site-To-Site VPN with Quagga

Posted: 02 Apr 2021 09:05 PM PDT

Can Quagga broadcast statically configured routes to all the computers in a network?

Say, for example, I have the network 10.0.1.0/16. On one of the PCs in this network I am running a Quagga server and have configured a route there. Can this route be broadcast to all the elements in this network?

I tried to find some online help, but was not successful.

I need this for the following scenario:

I am using SoftEther VPN Server to make a site-to-site VPN between 10.0.1.0/16 and 30.0.1.0/16. Now that the site-to-site VPN is established, I want to install a route on all the elements in 10.0.1.0/16 that sends traffic for 30.0.1.0/16 through this VPN server.

Can anyone please help me out with the Quagga configuration to do this?
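For reference, announcing a static route is a one-liner per routing daemon, but the catch is that every machine that should learn it must itself run a routing-protocol listener, and ordinary hosts usually don't. A sketch of a ripd.conf under the assumption that RIP is acceptable on the LAN:

! /etc/quagga/ripd.conf (sketch)
router rip
 network 10.0.1.0/16
 redistribute static

If the hosts can't run RIP, the usual alternative is to add the 30.0.1.0/16 route once on the LAN's default gateway, so individual hosts need no configuration at all.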

How to prevent sub-domain 'hijacking' on the same DNS server?

Posted: 02 Apr 2021 04:02 PM PDT

I'm trying to understand how (or whether) a DNS server differentiates between a sub-domain set up as its own zone and one set up as a record within the parent domain's zone on the same server.

Say I were to create a DNS zone on a DNS server for a domain e.g. example.com.

What is to stop someone from creating another zone, test.example.com, on the same server and 'hijacking' that sub-domain of the domain?

When a DNS request is made to the name server for test.example.com, will the DNS server return:

  • The main A record of the test.example.com zone or
  • The test.example.com A record in the example.com zone

(and if the A record for test.example.com doesn't exist in the example.com zone, will it return no such record or continue on to the test.example.com zone?)

Is there any way of preventing the sub-domain zone from responding without moving the domains to their own unique name server? How do the likes of ZoneEdit and Amazon's Route53 handle this?

(If a sub-domain was hosted on a separate server the master zone for example.com would have to delegate the sub-domain to that separate server, correct? (as per this Technet article).)
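For reference, with BIND-style servers the behaviour is deterministic: the server answers from the most specific zone it is authoritative for, so a test.example.com zone takes precedence over records inside the example.com zone. One way to observe which zone actually answers is to query the server directly; a sketch using dig, with ns1.example.com as a placeholder for the server under test:

dig @ns1.example.com test.example.com A +norecurse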

Extremely slow startup of tomcat

Posted: 02 Apr 2021 04:02 PM PDT

I have a Tomcat 7 installation on a Solaris 10 server. My problem is that starting the server (or deploying a new war) is extremely slow: it usually takes 30-60 minutes. The war is a medium-sized Grails application, so there are quite a lot of files. The server runs other applications as well, but as far as I can tell that is not the problem.

Can anyone give me some tips on how to analyse this? Settings in Tomcat, Java, the server, disk access, or something else?

I use these parameters for Tomcat:

CATALINA_OPTS="-Dcom.sun.management.jmxremote=true -Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms1536m -Xmx1536m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=512m  -XX:MaxPermSize=512m -XX:+DisableExplicitGC"  

And I use 32-bit Java 1.6.
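For reference, one frequent cause of multi-minute Tomcat startups is the session-ID generator blocking on /dev/random while waiting for entropy. A sketch of the commonly suggested workaround, worth testing before deeper analysis:

CATALINA_OPTS="$CATALINA_OPTS -Djava.security.egd=file:/dev/./urandom"

Either way, a thread dump taken during startup (kill -3 <tomcat-pid>) shows exactly where the time is going; if threads are parked inside SecureRandom, entropy is the culprit.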

How to exclude certain directories while using wget?

Posted: 02 Apr 2021 03:29 PM PDT

I'd like to download a directory from an FTP server, which contains some source code. Initially, I did this:

wget -r ftp://path/to/src  

Unfortunately, the directory itself is the result of an SVN checkout, so there are lots of .svn directories, and crawling over them would take a long time. Is it possible to exclude those .svn directories?
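For reference, wget has -X / --exclude-directories for exactly this, and its elements may contain wildcards; a sketch (a bare -X '.svn' would only match a top-level /.svn, hence the wildcard pattern):

wget -r -X '*/.svn' ftp://path/to/src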
