Sunday, October 3, 2021

Recent Questions - Server Fault


Setting up logging of control-plane pods to specific files

Posted: 03 Oct 2021 10:50 PM PDT

Log files for the control-plane pods don't exist in the /var/log directory. I tried to enable kube-apiserver logging to a specific file in Kubernetes (v1.20.2). I added the following keys to the kube-apiserver manifest:

spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --log-dir=/var/log
    - --log-file=/var/log/kube-apiserver.log
    - --logtostderr=false

But it didn't work; I still can't see the /var/log/kube-apiserver.log file. I also tried adding these same keys to the kube-scheduler and kube-apiserver manifests, but this doesn't work either. Can I enable writing logs to specific files in /var/log or any other directory?
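For what it's worth, newer Kubernetes releases deprecate klog's file-output flags for the control-plane components, which are expected to log to stderr and rely on the container runtime to persist the output. A minimal sketch of where those logs typically land (paths are common runtime defaults, and <node-name> is a placeholder):

# static-pod logs are written by the container runtime and symlinked here
ls /var/log/containers/ | grep kube-apiserver
# or fetch them through the API server
kubectl -n kube-system logs kube-apiserver-<node-name>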

setting up gre tunnel on debian with tunnel source and tunnel destination

Posted: 03 Oct 2021 10:28 PM PDT

I am trying to set up a GRE tunnel on Debian to receive broadcast packets.

I have the following information for an interface:

ip address [10.16.2.4 255.255.255.192]
tunnel source [10.16.0.2]
tunnel destination [10.16.254.1]

When I try to create the tunnel with these commands

$ sudo ip tunnel add gre0 mode gre remote 10.16.2.4 local 192.168.1.101 ttl 255
$ sudo ip link set gre0 up

The interface comes up but I do not get any traffic on this tunnel.

How do I properly use the ip address, tunnel source, and tunnel destination values to create the GRE tunnel?
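A sketch of how those three values usually map onto iproute2: "tunnel source" is the local endpoint, "tunnel destination" is the remote endpoint (the command above uses the interface address as the remote), and "ip address" goes on the tunnel interface itself. Addresses are the ones from the question:

sudo ip tunnel add gre1 mode gre local 10.16.0.2 remote 10.16.254.1 ttl 255
sudo ip addr add 10.16.2.4/26 dev gre1    # 255.255.255.192 = /26
sudo ip link set gre1 up

Using gre1 rather than gre0 also sidesteps the fallback gre0 device that loading the gre module creates.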

Using PuTTY/plink to connect to remote MySQL from a Windows machine using port forwarding and a multi-hop SSH tunnel

Posted: 03 Oct 2021 09:09 PM PDT

I need to set up port forwarding from port 3307 on my local Windows machine to port 3306 on a remote MySQL server, accessed via two Linux proxy servers and a Linux web server.

I need to use PuTTY or plink.exe on the Windows machine to set up the connection.

See the diagram (image omitted).

I've found examples using the PuTTY GUI or the plink CLI to achieve something similar with only one proxy server, but not with multiple hops.

I can achieve the connection I need on a *nix machine using
ssh -N -L 127.0.0.1:3307:db-server:3306 -J user@proxy1 user@proxy2 user@web-server

I'm trying to do the same using PuTTY or plink.
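For reference, recent plink versions (0.71 and later) can chain hops themselves with -proxycmd plus -nc, which opens a raw tunnel in place of a session. A hedged sketch using the host names from the ssh command above:

plink -ssh -N -L 127.0.0.1:3307:db-server:3306 user@web-server -proxycmd "plink -ssh user@proxy2 -nc web-server:22 -proxycmd \"plink -ssh user@proxy1 -nc proxy2:22\""

The quoting of the nested proxy command is the fiddly part on Windows; the same structure also works in the PuTTY GUI by setting the proxy type to Local and using the inner plink invocation as the proxy command.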

Cacti container MySQL service unable to start

Posted: 03 Oct 2021 08:07 PM PDT

I tried to run this Cacti container, but MySQL fails to start. Any ideas? I have tried all sorts of answers from this site.

https://hub.docker.com/r/quantumobject/docker-cacti

[cacti@rhelvm3 ~]$ podman rm 82564cf157fe
82564cf157fec818d6212cf0dfcf2e669f3f993010895903702d778c4872be87
[cacti@rhelvm3 ~]$ podman ps -a
CONTAINER ID  IMAGE                                        COMMAND        CREATED     STATUS         PORTS                 NAMES
4fdea5925e5b  docker.io/quantumobject/docker-cacti:latest  /sbin/my_init  2 days ago  Up 2 days ago  0.0.0.0:8080->80/tcp  laughing_lederberg
[cacti@rhelvm3 ~]$ podman exec -it 4fdea5925e5b /bin/bash
root@4fdea5925e5b:/# service mysql status
 * MariaDB is stopped.
root@4fdea5925e5b:/# service mysql start
 * Starting MariaDB database server mysqld                [fail]
root@4fdea5925e5b:/# cat /etc/my
my_init.d/ mysql/
root@4fdea5925e5b:/# cat /etc/mysql/
conf.d/          debian.cnf       mariadb.conf.d/  my.cnf-bkup
debian-start     mariadb.cnf      my.cnf           my.cnf.fallback
root@4fdea5925e5b:/# cat /etc/mysql/my.cnf

[mysqld]
socket=/var/lib/mysql/mysql.sock

max_heap_table_size = 1073741824
max_allowed_packet = 16777216
tmp_table_size = 500M
join_buffer_size = 1000M
innodb_file_format=Barracuda
innodb_large_prefix=1
innodb_io_capacity=5000
innodb_buffer_pool_instances=62
innodb_buffer_pool_size = 7811M
innodb_additional_mem_pool_size = 80M
innodb_doublewrite = ON
innodb_flush_log_at_timeout = 10
innodb_read_io_threads = 32
innodb_write_io_threads = 16
collation-server = utf8mb4_unicode_ci
character-set-server = utf8mb4

default-time-zone = America/New_York

[client]

socket=/var/lib/mysql/mysql.sock
root@4fdea5925e5b:/# ls -als /var/lib/mysql/mysql.sock
0 -rwxrwxrwx. 1 mysql mysql 0 Oct  3 22:59 /var/lib/mysql/mysql.sock
root@4fdea5925e5b:/# tail /var/log/mysql/error.log
2021-01-31 17:33:37 139652315438208 [Note] InnoDB: Using Linux native AIO
2021-01-31 17:33:37 139652315438208 [Note] InnoDB: Using SSE crc32 instructions
2021-01-31 17:33:37 139652315438208 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2021-01-31 17:33:37 139652315438208 [Note] InnoDB: Completed initialization of buffer pool
2021-01-31 17:33:37 139652315438208 [Note] InnoDB: Highest supported file format is Barracuda.
2021-01-31 17:33:37 139652315438208 [Note] InnoDB: 128 rollback segment(s) are active.
2021-01-31 17:33:37 139652315438208 [Note] InnoDB: Waiting for purge to start
2021-01-31 17:33:37 139652315438208 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.49-89.0 started; log sequence number 1616713
2021-01-31 17:33:37 139652315438208 [Note] Plugin 'FEEDBACK' is disabled.
2021-01-31 17:33:37 139651662735104 [Note] InnoDB: Dumping buffer pool(s) not yet started
root@4fdea5925e5b:/# mysql -u root -p
Enter password:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (111 "Connection refused")
root@4fdea5925e5b:/#
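Two things stand out in that dump: the newest error-log entries are months old, so the current failed start is logging elsewhere (or not getting far enough to log), and my.cnf requests a 7811M InnoDB buffer pool in 62 instances, which has to fit inside the container's memory limit. A couple of hedged diagnostics:

# re-check the log immediately after a failed start; the entries shown are from January
service mysql start; tail -n 50 /var/log/mysql/error.log
# parse the config without serving; unknown or invalid options are reported
mysqld --user=mysql --verbose --help >/dev/null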

Can't install certbot using snap; returns: Run configure hook of "certbot" snap if present

Posted: 03 Oct 2021 08:10 PM PDT

I want to install certbot using snap, but when I try the official instructions:

sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot

it returns:

error: cannot perform the following tasks:

  • Run configure hook of "certbot" snap if present (run hook "configure": /var/lib not root-owned 1000:1000)

Previously, I had uninstalled certbot (installed from apt) using the command below:

sudo apt-get remove -y certbot python3-certbot-apache  

What happened? I am using Ubuntu 18.04 Bionic.

From the error, it looks like I need to change the owner of /var/lib to root. But I changed it and still get the same error. Should I change ownership to root recursively under that folder?

Here is an ls of /var/lib:

drwxr-xr-x  4 myuser    myuser    4.0K Mar 10  2020 AccountsService
drwxr-xr-x  5 myuser    myuser    4.0K Mar 24  2020 apache2
drwxr-xr-x  5 myuser    myuser    4.0K Oct  4 02:36 apt
drwxr-xr-x  2 root      root      4.0K Sep  8 02:03 binfmts
drwxr-xr-x  2 myuser    myuser    4.0K May  5  2018 command-not-found
drwxr-xr-x  2 myuser    myuser    4.0K Mar 10  2020 dbus
drwxr-xr-x  2 myuser    myuser    4.0K Apr 16  2018 dhcp
drwxr-xr-x  3 myuser    myuser    4.0K Mar 10  2020 dictionaries-common
drwxr-xr-x  7 myuser    myuser    4.0K Oct  4 02:36 dpkg
drwxr-xr-x  2 myuser    myuser    4.0K Dec  9  2019 git
drwxr-xr-x  3 myuser    myuser    4.0K Mar 10  2020 grub
drwxr-xr-x  2 myuser    myuser    4.0K Oct  3 14:52 initramfs-tools
drwxr-xr-x  2 landscape landscape 4.0K Mar 10  2020 landscape
drwxr-xr-x  4 myuser    myuser    4.0K Oct  4 02:34 letsencrypt
drwxr-xr-x  3 myuser    myuser    4.0K Mar 10  2020 locales
drwxr-xr-x  2 myuser    myuser    4.0K Oct  3 06:25 logrotate
drwxr-xr-x  2 root      root         0 Oct  4 02:56 lxcfs
drwxr-xr-x  2 lxd       nogroup   4.0K Jun 16 00:42 lxd
drwxr-xr-x  2 myuser    myuser    4.0K Mar 10  2020 man-db
drwxr-xr-x  2 myuser    myuser    4.0K Apr 24  2018 misc
drwxr-xr-x  2 myuser    myuser    4.0K Oct  3 06:25 mlocate
drwx------ 35 mysql     mysql     4.0K Oct  3 14:51 mysql
drwx------  2 mysql     mysql     4.0K Mar 24  2020 mysql-files
drwx------  2 mysql     mysql     4.0K Mar 24  2020 mysql-keyring
drwxr-xr-x  2 root      root      4.0K Jan 21  2020 mysql-upgrade
drwxr-xr-x  2 myuser    myuser    4.0K Oct  3 14:50 pam
drwxr-xr-x  4 myuser    myuser    4.0K Mar 24  2020 php
drwxr-xr-x  2 myuser    myuser    4.0K Apr  4  2019 plymouth
drwx------  3 myuser    myuser    4.0K Mar 10  2020 polkit-1
drwxr-xr-x  6 postgres  postgres  4.0K Oct  3 14:50 postgresql
drwx------  3 myuser    myuser    4.0K Mar 10  2020 private
drwxr-xr-x  2 myuser    myuser    4.0K Mar 24  2020 python
drwxr-x---  2 redis     redis     4.0K Feb 24  2021 redis
drwxr-xr-x 21 myuser    myuser    4.0K Oct  4 02:46 snapd
drwxr-xr-x  3 myuser    myuser    4.0K Mar 10  2020 sudo
drwxr-xr-x  7 root      root      4.0K Oct  4 02:36 systemd
drwxr-xr-x  3 root      root      4.0K Jun 27 13:26 ubuntu-advantage
drwxr-xr-x  2 myuser    myuser    4.0K Oct  3 14:53 ubuntu-release-upgrader
drwxr-xr-x  3 myuser    myuser    4.0K Oct  3 14:52 ucf
drwxr-xr-x  2 myuser    myuser    4.0K Nov 25  2019 unattended-upgrades
drwxr-xr-x  2 myuser    myuser    4.0K Mar 10  2020 update-manager
drwxr-xr-x  4 myuser    myuser    4.0K Oct  4 02:44 update-notifier
drwxr-xr-x  3 myuser    myuser    4.0K Mar 10  2020 ureadahead
drwxr-xr-x  2 myuser    myuser    4.0K Mar 10  2020 usbutils
drwxr-xr-x  3 myuser    myuser    4.0K Mar 10  2020 vim
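Note that the snapd tree itself is owned by myuser in that listing, and the snap hook validates the whole path. A sketch of restoring root ownership only where snapd looks, leaving service-owned directories such as mysql and postgresql alone:

sudo chown root:root /var/lib
sudo chown -R root:root /var/lib/snapd
sudo snap install --classic certbot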

AWS: What is the difference between Burst Balance and EBS IO Balance metrics?

Posted: 03 Oct 2021 06:59 PM PDT

AWS docs describe Burst Balance and EBS IO Balance in the following way:

BurstBalance The percent of General Purpose SSD (gp2) burst-bucket I/O credits available.

EBSIOBalance% The percentage of I/O credits remaining in the burst bucket of your RDS database. This metric is available for basic monitoring only. This metric is different from BurstBalance.

However, as far as I know, the docs do not explain how those two metrics are different.

Nginx cannot serve some Hugo pages

Posted: 03 Oct 2021 06:15 PM PDT

I am using nginx to serve a Hugo static site, but it can't serve some pages, such as domain/posts, domain/tags, and domain/about. This is my nginx conf:

server {
    listen      80;
    server_name mydomain;

    root /myblog;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

It works fine under both hugo server and Vercel. This is the tree of public (the result of the build):

.
├── about
│   └── index.html
├── about-hugo
│   └── index.html
├── about-us
│   └── index.html
├── categories
│   ├── index.html
│   └── index.xml
├── contact
│   └── index.html
├── css
│   ├── dark.726cd11ca6eb7c4f7d48eb420354f814e5c1b94281aaf8fd0511c1319f7f78a4.css
│   ├── fonts.b685ac6f654695232de7b82a9143a46f9e049c8e3af3a21d9737b01f4be211d1.css
│   └── main.2f9b5946627215dc1ae7fa5f82bfc9cfcab000329136befeea5733f21e77d68f.css
├── fonts
│   ├── fira-sans-v10-latin-regular.eot
│   ├── fira-sans-v10-latin-regular.svg
│   ├── fira-sans-v10-latin-regular.ttf
│   ├── fira-sans-v10-latin-regular.woff
│   ├── fira-sans-v10-latin-regular.woff2
│   ├── ibm-plex-mono-v6-latin-500italic.eot
│   ├── ibm-plex-mono-v6-latin-500italic.svg
│   ├── ibm-plex-mono-v6-latin-500italic.ttf
│   ├── ibm-plex-mono-v6-latin-500italic.woff
│   ├── ibm-plex-mono-v6-latin-500italic.woff2
│   ├── roboto-mono-v12-latin-regular.eot
│   ├── roboto-mono-v12-latin-regular.svg
│   ├── roboto-mono-v12-latin-regular.ttf
│   ├── roboto-mono-v12-latin-regular.woff
│   └── roboto-mono-v12-latin-regular.woff2
├── index.html
├── index.xml
├── js
│   ├── feather.min.js
│   ├── main.js
│   └── themetoggle.js
├── page
│   ├── 1
│   │   └── index.html
│   ├── 2
│   │   └── index.html
│   └── 3
│       └── index.html
├── posts
│   ├── index.html
│   ├── index.xml
│   ├── post-1
│   │   └── index.html
│   ├── post-2
│   │   └── index.html
│   ├── post-3
│   │   └── index.html
│   ├── post-4
│   │   └── index.html
│   ├── post-5
│   │   └── index.html
│   ├── post-7
│   │   └── index.html
│   └── tg-gh
│       └── index.html
├── sitemap.xml
└── tags
    ├── index.html
    ├── index.xml
    ├── primer
    │   ├── index.html
    │   └── index.xml
    ├── procrastinating
    │   ├── index.html
    │   └── index.xml
    ├── space
    │   ├── index.html
    │   └── index.xml
    └── todo
        ├── index.html
        └── index.xml

In the nginx container log, it throws a 301 for GET /posts/, pointing to the baseURL from Hugo's config.toml, even though I haven't connected that domain to this server. I can't understand why nginx cannot find the files in the subdirectories, although it does serve domain/posts/post-name1, domain/posts/post-name2, and so on. What should I configure to serve all the Hugo pages correctly?

I've also tried this conf in nginx:

location /posts {
    alias /myblog/posts;
    index index.html;
}
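One hedged observation: with try_files $uri $uri/, a request for /posts without the trailing slash matches the directory, and nginx itself answers with a 301 to the slashed URL; depending on absolute_redirect and server_name_in_redirect, that redirect can carry an unexpected host. Keeping nginx's redirects relative is one way to test this (absolute_redirect exists since nginx 1.11.8):

server {
    listen      80;
    server_name mydomain;
    root /myblog;
    index index.html;
    absolute_redirect off;   # 301 becomes "Location: /posts/" instead of an absolute URL

    location / {
        try_files $uri $uri/ =404;
    }
}

If the Location header still points at the old baseURL, rebuilding the site with hugo --baseURL set to the real domain would be the next thing to check, since Hugo bakes baseURL into generated links.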

How to calculate the QuickPath Interconnect (QPI) bandwidth?

Posted: 03 Oct 2021 07:42 PM PDT

For the Xeon E5-2697 v2, Intel lists:

Bus Speed = 8 GT/s

# of QPI Links = 2

According to Wikipedia, one must know the QPI frequency and link width to calculate the QPI bandwidth, but these don't seem to be listed here. How can I calculate it, then?
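For reference, a QPI link is 20 lanes wide per direction, 16 of which carry payload, so each transfer moves 2 bytes, and the "8 GT/s" bus speed is the transfer rate. A worked sketch:

8 GT/s × 2 bytes/transfer = 16 GB/s per link, per direction
16 GB/s × 2 directions    = 32 GB/s per link
32 GB/s × 2 links         = 64 GB/s aggregate between the two sockets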

Are the snapshots referenced by an AMI created from an EC2 instance independent of other snapshots that I've created on the EC2 instance's EBS volumes?

Posted: 03 Oct 2021 07:10 PM PDT

We've recently migrated a system off of a set of EC2 instances and now wish to retire those instances, first creating an AMI of each EC2 instance and then terminating it.

Over time, we've created a number of EBS snapshots of the volumes attached to the EC2 instances as a part of our maintenance strategy.

Will the EBS snapshots referenced by the AMIs we create be independent of the maintenance EBS snapshots? In other words, will the EBS snapshots referenced by the AMIs have pointers back to the most recent maintenance EBS snapshots, or will they reference the relevant blocks independently of the maintenance EBS snapshots?

Ultimately, I'm trying to determine whether, once the AMIs are created, it is useful or worthwhile to delete the maintenance EBS snapshots from a cost and management-overhead perspective.
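For background, EBS snapshots are incremental at the block level but each one is self-sufficient: deleting an older snapshot never invalidates a newer one, because blocks still referenced by any remaining snapshot are retained. A hedged sketch for auditing which snapshots an AMI actually references (the image ID is a placeholder):

aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
  --query 'Images[].BlockDeviceMappings[].Ebs.SnapshotId'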

Use multiple dockerized Nginx behind a host Nginx

Posted: 03 Oct 2021 08:57 PM PDT

I have multiple different dockerized applications; each one comes with its own Nginx service that sends traffic to its containers based on some rules.

I need to put those applications on the same server, so I added a new Nginx on the host that handles SSL and forwards the traffic to the correct dockerized Nginx.

Question: is it OK to use an Nginx on the host that forwards traffic to multiple different dockerized Nginx instances? Does it have any known problems? Will it affect performance?
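This nginx-in-front-of-nginx pattern is common, and the extra proxy hop is usually cheap compared with TLS termination. A minimal sketch of the host side, assuming each application's dockerized Nginx publishes a distinct local port (names, ports, and paths are placeholders):

server {
    listen 443 ssl;
    server_name app1.example.com;
    ssl_certificate     /etc/ssl/certs/app1.pem;
    ssl_certificate_key /etc/ssl/private/app1.key;

    location / {
        proxy_pass http://127.0.0.1:8081;   # app1's dockerized nginx
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}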

NGINX TCP Load Balancing is ip-sticky when it should be random, per request

Posted: 03 Oct 2021 09:00 PM PDT

I have an NGINX server being used as a TCP load balancer. It defaults to round-robin load balancing, so my expectation is that every time a given client IP hits the endpoint, it will get a different backend upstream server for each request. Instead, each client gets the same upstream server every time, and each distinct client IP is pinned to a distinct upstream server. This is bad because my clients generate a lot of traffic, and it causes hotspots since any given client can only utilize one upstream server. It seems to slowly rotate a given client IP across the upstream servers; again, I want it to randomly assign an upstream to each request.

How can I make NGINX randomly assign the upstream server for every request? I tried the random keyword and it had no effect. Any help would be greatly appreciated.

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {

    upstream api_backend_http {
        server node1.mydomain.com:80;
        server node2.mydomain.com:80;
        server node6.mydomain.com:80;
        server node14.mydomain.com:80;
        server node18.mydomain.com:80;
        server node19.mydomain.com:80;
        server node21.mydomain.com:80;
        server node22.mydomain.com:80;
        server node24.mydomain.com:80;
    }

    upstream api_backend_https {
        server node1.mydomain.com:443;
        server node2.mydomain.com:443;
        server node6.mydomain.com:443;
        server node14.mydomain.com:443;
        server node18.mydomain.com:443;
        server node19.mydomain.com:443;
        server node21.mydomain.com:443;
        server node22.mydomain.com:443;
        server node24.mydomain.com:443;
    }

    server {
        listen            80;
        proxy_pass        api_backend_http;
        proxy_buffer_size 16k;
        proxy_connect_timeout 1s;
    }

    server {
        listen            443;
        proxy_pass        api_backend_https;
        proxy_buffer_size 16k;
        proxy_connect_timeout 1s;
    }

}
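One point worth knowing: the stream module balances per TCP connection, not per HTTP request, so a client that keeps a connection open (or re-uses keepalive connections) stays on one backend regardless of the algorithm. For per-connection randomness, random (nginx 1.15.1+) goes inside the upstream block; a sketch:

upstream api_backend_http {
    random;
    server node1.mydomain.com:80;
    # ... remaining nodes ...
}

Getting true per-request distribution would require proxying at the HTTP layer (an http block with proxy_pass) rather than stream.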

IPv6 Networking with a Linux Router [closed]

Posted: 03 Oct 2021 09:30 PM PDT

I currently have a small office router running Void Linux. IPv4 routing is working, and I appear to be getting an IPv6 address from my ISP. I have radvd running on the router, but my other Linux and Windows machines don't appear to be getting globally scoped IPv6 addresses.

My networking setup is in /etc/rc.local. I've used udev rules to name my external adapter wan0 and the internal adapter lan0. I've bridged my wired and wireless networks using brlan.

# Default rc.local for void; add your custom commands here.
#
# This is run by runit in stage 2 before the services are executed
# (see /etc/runit/2).
#

# US Region Wi-Fi

modprobe -r iwlmvm
modprobe cfg80211
ieee80211_regdom=US
modprobe iwlmvm

ip link set dev lan0 up
brctl addbr brlan
brctl addif brlan lan0

ip link set dev wlp2s0 up
brctl addif brlan wlp2s0

ip addr add 10.10.10.1/24 dev brlan
ip link set dev brlan up

My dhcpcd-wan service just runs dhcpcd -B wan0 and I'm getting an IPv6 from my ISP. ping6 and other IPv6 specific commands work from the router:

ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
lan0             UP             fe80::127b:44ff:fe52:7188/64
wan0             UP             73.212.<redacted>/23 fe80::33fc:53ef:5beb:4420/64
wlp2s0           UP             fe80::9eb6:d0ff:fe1c:e383/64
brlan            UP             10.10.10.1/24 2601:341:<redacted>::1/68 fe80::127b:44ff:fe52:7188/64
wg0              UNKNOWN        10.10.90.2/24

I was a little confused by radvd. Most of the examples I have found use fixed/static IPv6 ranges. I've attempted to use the following:

interface brlan
{
  AdvSendAdvert on;
  MaxRtrAdvInterval 300;
  prefix ::/64
  {
    AdvOnLink on;
    AdvAutonomous on;
    AdvRouterAddr on;
  };
};

Radvd is running, but nothing appears to be getting an IPv6 address.

I also use dhcpd for assigning IPv4 addresses, statically configuring some of them using host rules. I'm not opposed to using dhcpd for IPv6 without radvd, but I'm just not sure how to configure it.

ddns-update-style none;
option domain-name "penguin.farm";
option domain-name-servers 10.10.10.1;
default-lease-time 600;
max-lease-time 7200;
authoritative;
log-facility local7;

subnet 10.10.10.0 netmask 255.255.255.0 {
  range 10.10.10.50 10.10.10.100;
  option routers 10.10.10.1;
}

host linux1 {
  hardware ethernet 4c:ed:fb:<redacted>;
  fixed-address 10.10.10.22;
}

I also think my ip6tables rules are set up correctly to allow/forward the necessary traffic:

*filter
:INPUT DROP [83:26048]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [23:2954]
-A INPUT -j ACCEPT -m state --state RELATED,ESTABLISHED
-A INPUT -j ACCEPT -p icmpv6
-A INPUT -j REJECT --reject-with icmp6-adm-prohibited
-A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
-A FORWARD -p ipv6-icmp -j ACCEPT
-A OUTPUT -p ipv6-icmp -j ACCEPT
COMMIT

What do I need to do so that my Windows and Linux boxes (Linux is using NetworkManager) get IPv6 addresses?
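One hedged observation on the posted ruleset: iptables rules match in order, and the FORWARD chain REJECTs everything before its ICMPv6 ACCEPT is ever reached, so no routed IPv6 can pass; router advertisement also depends on net.ipv6.conf.all.forwarding=1, which radvd checks at startup. A sketch of a working order, using the interface names from the question:

sysctl -w net.ipv6.conf.all.forwarding=1

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p ipv6-icmp -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -p ipv6-icmp -j ACCEPT
-A FORWARD -i brlan -o wan0 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp6-adm-prohibited
COMMIT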

Package MariaDB-shared requires MariaDB-common, but none of the providers can be installed

Posted: 03 Oct 2021 04:13 PM PDT

I am trying to update the packages on my CentOS 8 server, but when I run sudo dnf update, the following message shows up:

Last metadata expiration check: 0:43:34 ago on Wed 12 May 2021 11:59:25 AM CEST.
Error:
 Problem 1: package MariaDB-shared-10.3.29-1.el8.x86_64 requires MariaDB-common, but none of the providers can be installed
  - cannot install the best update candidate for package mariadb-connector-c-3.0.7-1.el8.x86_64
  - package MariaDB-common-10.3.27-1.el8.x86_64 is filtered out by modular filtering
  - package MariaDB-common-10.3.28-1.el8.x86_64 is filtered out by modular filtering
  - package MariaDB-common-10.3.29-1.el8.x86_64 is filtered out by modular filtering
 Problem 2: package MariaDB-shared-10.3.29-1.el8.x86_64 requires MariaDB-common, but none of the providers can be installed
  - cannot install the best update candidate for package mariadb-connector-c-config-3.0.7-1.el8.noarch
  - package MariaDB-common-10.3.27-1.el8.x86_64 is filtered out by modular filtering
  - package MariaDB-common-10.3.28-1.el8.x86_64 is filtered out by modular filtering
  - package MariaDB-common-10.3.29-1.el8.x86_64 is filtered out by modular filtering
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

There is a conflict between the installed packages and the packages available for update. How can I fix this problem so that everything updates smoothly?
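For context, "filtered out by modular filtering" means the MariaDB packages come from an external repository while the distribution's own mariadb module stream shadows them. A hedged sketch of the usual way out:

sudo dnf module disable mariadb mysql
sudo dnf update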

Unexpected 404 errors on all routes of a Laravel application all of a sudden - NGINX|PHP-FPM

Posted: 03 Oct 2021 06:06 PM PDT

I have the following nginx config file

##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#

server {
    root /var/www/open_final/current;

    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name app.mypersonaldomain.co;

    if ($http_x_forwarded_proto != "https") {
        rewrite ^(.*)$ https://$server_name$REQUEST_URI permanent;
    }

#    if ($http_user_agent ~* '(iPhone|iPod|android|blackberry)') {
#        return 301 https://mobile.mypersonaldomain.co;
#    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}

server {
    root /var/www/open-backend-v2/current/public;
    index index.php index.html index.htm;

    server_name localhost v2-api.mypersonaldomain.co;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

I have two applications running on this nginx server: a Laravel (PHP) backend and an Angular front-end. I noticed last week that all the backend (PHP) application routes started throwing 404 Not Found errors. I restarted nginx, but the errors persisted; only after restarting the AWS instance did it start working fine again. Then yesterday, all of a sudden, the URLs started throwing 404s again and I had to restart the instance once more.

The front-end application kept loading, but the backend (Laravel/PHP) URLs were throwing 404.

I suspect a hacker may be doing this. It was not happening in the past two years; it started only last week.

What could be the reason for it? Is someone tampering with the .htaccess file, or is it something to do with the nginx config? And if so, why are only the Laravel application routes showing 404?

I need help on this. What could be the reason? Has anyone faced this issue?
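One hedged thing to rule out before suspecting an attacker: the backend is served from a current symlink, and with PHP OPcache enabled a stale resolved document root is a classic cause of sudden 404s from the \.php$ block after deploys. Passing the symlink-resolved path is the common mitigation; a sketch against the config above:

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    fastcgi_param DOCUMENT_ROOT $realpath_root;
    include fastcgi_params;
}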

How to let dnsmasq transfer a reverse zone?

Posted: 03 Oct 2021 05:03 PM PDT

Following the documentation for auth-zone, I tried to declare my dnsmasq server as authoritative for the 10.0.0.0/8 zone (I serve several IP sub-ranges in 10.x).

Unfortunately, whatever I try I end up with

Sep 07 14:37:36 bind named[6812]: transfer of '10.in-addr.arpa/IN' from 10.100.10.254#53: connected using 10.200.0.158#57941
Sep 07 14:37:36 bind named[6812]: transfer of '10.in-addr.arpa/IN' from 10.100.10.254#53: failed while receiving responses: SERVFAIL
Sep 07 14:37:36 bind named[6812]: transfer of '10.in-addr.arpa/IN' from 10.100.10.254#53: Transfer status: SERVFAIL
Sep 07 14:37:36 bind named[6812]: transfer of '10.in-addr.arpa/IN' from 10.100.10.254#53: Transfer completed: 0 messages, 0 records, 0 bytes, 0.001 secs (0 bytes/sec)

on the secondary BIND server (the forward zones are transferred OK).

How should I set this up?

The whole current configuration file for dnsmasq:

no-resolv
no-poll
server=1.1.1.1
server=8.8.4.4
expand-hosts
domain=example.com
domain-needed

auth-server=example.com,lan0,br0
auth-zone=example.com,10.0.0.0/8,lan0,br0
auth-sec-servers=rpi1,bind

dhcp-range=10.100.10.1,10.100.10.230,240h
dhcp-range=10.100.20.1,10.100.20.230,240h
(... more DHCP ranges ...)
dhcp-option=option:ntp-server,129.104.30.42,195.220.194.193
dhcp-option=option:dns-server,10.100.10.30,10.200.0.158
dhcp-authoritative

The config of the secondary BIND server:

zone "example.com" {    type slave;    masters { 10.100.10.254; };    file "/etc/bind/db.example.com";  };    zone "10.in-addr.arpa" {    type slave;    masters { 10.100.10.254; };    file "/etc/bind/db.10";  };  

Configuring a DELL storage array with both controllers set to the same address

Posted: 03 Oct 2021 11:07 PM PDT

I've gotten hold of a used Dell PowerVault MD1000 with two MD3000i (aka AMP01) controllers, both of which are configured to the same address. However my attempts to configure them - or indeed perform any action, even "Blink" - fail with "Error connecting to the array management port(s). Please check the management port(s) to verify they are accessible". Additionally, while controller 0 reports status "Optimal", controller 1 reports status "ServiceMode", which I presume indicates an error.

Here's what I've accomplished so far:

• Get Dell Modular Disk Configuration Utility to identify both ports by connecting the Mgmt port of controller 1 and one of the data ports of controller 0 to the network
  • Get an interactive telnet session on the embedded VxWorks by following the instructions here, and having done that -
    • Reset the password using clearSYMbolPassword
    • Dump the existing configuration using netCfgShow
    • Invoke sysWipe. However, it exits immediately, printing the message below. I presume the problem is with controller 1 being in service mode. On controller 0 I get:

-> sysWipe
Executing sysWipe. Boards will reboot on completion.
03/27/18-14:58:39 (GMT) (tShellRem1): WARN: symbol::ControllerInServiceModeException Line 613 File samSymbol.cc value = 201459792 = 0xc020850

Is there a way I can configure the IP addresses, other than getting hold of a dedicated Dell DB9 - PS2 cable?

Can I completely disable controller 1 and just use controller 0, which appears to be in better shape? Unfortunately, simply pulling out controller 1 doesn't help.

I'm attaching a screenshot from the Dell utility that depicts the configuration (screenshot omitted). Should I have been able to see the disks here?

Thanks for your attention!

elasticsearch: max file descriptors [1024] for elasticsearch process is too low, increase to at least [65536]

Posted: 03 Oct 2021 04:06 PM PDT

When I tried to run the logging aggregation, I found the following error generated by Elasticsearch:

[2018-02-04T13:44:04,259][INFO ][o.e.b.BootstrapChecks ] [elasticsearch-logging-0] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max file descriptors [1024] for elasticsearch process is too low, increase to at least [65536]
[2018-02-04T13:44:04,268][INFO ][o.e.n.Node ] [elasticsearch-logging-0] stopping ...
[2018-02-04T13:44:04,486][INFO ][o.e.n.Node ] [elasticsearch-logging-0] stopped
[2018-02-04T13:44:04,486][INFO ][o.e.n.Node ] [elasticsearch-logging-0] closing ...
[2018-02-04T13:44:04,561][INFO ][o.e.n.Node ] [elasticsearch-logging-0] closed
[2018-02-04T13:44:04,564][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started

By the way, I am running a Kubernetes cluster (v1.8.0 on the minions and v1.9.0 on the masters) using cri-containerd on Ubuntu 16.04 machines.

Any help will be appreciated.
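The limit the bootstrap check complains about is inherited from whatever starts the container, so on a cri-containerd node it is usually the runtime's systemd unit that needs raising. A hedged sketch (the unit name is an assumption for this setup):

sudo mkdir -p /etc/systemd/system/containerd.service.d
printf '[Service]\nLimitNOFILE=65536\n' | sudo tee /etc/systemd/system/containerd.service.d/limits.conf
sudo systemctl daemon-reload
sudo systemctl restart containerd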

IPSEC over GRE tunnel on PFSENSE

Posted: 03 Oct 2021 07:06 PM PDT

I have two pfSense routers, xxx.xxx.xxx.28 and xxx.xxx.xxx.27, with local networks 192.168.110.0/24 and 192.168.111.0/24 behind them. The point is to set up a GRE tunnel with IPsec between these networks. The tunnel itself is already done (10.0.0.1 to 10.0.0.2) and IPsec is configured. But here is the problem: the IPsec status in the pfSense web interface shows NO traffic through IPsec, neither from ping nor from sending any files. The only traffic shown there is the replies to traceroute: if I traceroute from the 110 network to some host in the 111 network, I see only 3 incoming packets on the router of the 110 network, and 3 outgoing packets on the router of the 111 network. That is all. The questions are:

Is IPsec over GRE working properly? Why is there no traffic in the IPsec status?

Microsoft Outlook freeze for users on terminal server - Exchange 2013 in-house

Posted: 03 Oct 2021 06:06 PM PDT

This has been ongoing for about six months. Microsoft Support is also clueless.

Periodically (approximately twice a day, for two different users), Outlook will freeze in a customer's terminal server session, forcing them to force-close it and start it back up. There is only one symptom common to every occurrence: CPU usage is stuck at 6%. What's interesting is that the TS had Office 2010 installed, and this happened to only about five users out of the total 45. We tried an upgrade to Office 2013, and now those five users don't experience this problem, but five different users do.

We have about 45 users on a Server 2008 R2 terminal server assigned with 52GB of RAM and 8 CPU cores on a Server 2012 R2 Hyper-V Host (2x Intel Xeon E5-2640).

Outlook is connected to the on-premise Exchange 2013 server - same host, but VM is Server 2012 R2 and has 18GB of RAM assigned with 8 CPU.

This has persisted across two AD domains, three terminal server rebuilds, and two Exchange server installations with new databases per instance. I've rebuilt the Exchange DB, created new DBs, tried to repair mailboxes, etc. Exchange is at the latest CU.

Event logs show nothing in either the Exchange Server or on the Terminal Server in regards to this issue.

How to manually select a specific disk in the UDI Wizard when using MDT 2013?

Posted: 03 Oct 2021 07:06 PM PDT

Is there a way to manually select a disk or partition as part of the MDT Deployment UDI Wizard, rather than having MDT select one automatically?

I know you can select one in the Task Sequence but I'm specifically looking for a way to do this in the wizard.

Thanks for any help you can provide!

Joshua

ELB and SSLInsecureRenegotiation

Posted: 03 Oct 2021 10:06 PM PDT

I currently have SSLInsecureRenegotiation set to off on my Apache 2.4 Amazon Linux server, but I am still failing at SSL Labs (Secure Client-Initiated Renegotiation: SUPPORTED). Do you know how to achieve the same on the ELB?

Amazon RDS load balancing and performance

Posted: 03 Oct 2021 03:06 PM PDT

My question is simple: is it possible to load balance RDS (master and read replicas) using the same haproxy instance that is used for application load balancing?

This would mean that the IP of the application and the IP of the database would be the same. What would a sample configuration for the MySQL part look like?

For example for application load balancing there is something like this:

backend php_app_servers
  balance roundrobin
  option redispatch
  option forwardfor
  option httpchk GET /url.php
  server php-app1 10.100.2.40:80 weight 16 maxconn 160 check inter 10s
  server php-app2 10.81.4.104:80 weight 16 maxconn 160 check inter 10s
  server php-app3 10.100.129.162:80 weight 16 maxconn 160 check inter 10s

What settings should I pay attention to for MySQL?

Another issue I've had is that RDS performs pretty poorly compared to the usual OpsWorks instances. The problem might be caused by the difference in availability zone between the application servers and the DB server, but the performance drop doesn't seem justified. We have a LOT of databases on an m3.2xlarge server. CloudWatch metrics show only low-to-average CPU, IO and memory usage, and the number of connections stays between 40 and 60 at any moment. Can such a difference be caused by the availability zone?
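In principle the same haproxy instance can carry a TCP-mode section for MySQL alongside the HTTP backends. A minimal sketch (the endpoints are placeholders, and option mysql-check needs a dedicated user created on the databases):

listen mysql_read
    bind *:3306
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check
    server rds-replica1 replica1.xxxxx.us-east-1.rds.amazonaws.com:3306 check
    server rds-replica2 replica2.xxxxx.us-east-1.rds.amazonaws.com:3306 check

One caveat worth noting: RDS endpoints are DNS names whose addresses can change on failover, so haproxy's resolved addresses need refreshing (a resolvers section with runtime DNS resolution).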

EC2 Load balancer and wildcard/multi-domain SSL certificates

Posted: 03 Oct 2021 10:06 PM PDT

I am running three sites through one load balancer (and two child application servers). One of the sites has an SSL certificate, but I'd like to add certificates for the other two sites.

What is my best route here? I've found a huge variety of certificates out there and am a bit confused.

I've seen some that are wildcard (eg. limitless subdomains secured for one domain), and some that are multi-domain (eg. up to 100 domains under the one certificate)

Is there a certificate that handles both cases? Or could I use a multi-domain one to handle subdomains as if it were a wildcard (each site has only 3 or 4 subdomains I need protected)?

Would love some guidance here. Links to any articles dealing with this would be appreciated too.

identifying vlan packets using tcpdump

Posted: 03 Oct 2021 08:08 PM PDT

I'm trying to identify the VLAN-tagged packets that my host receives from or sends to other hosts. I tried

tcpdump -i eth1 vlan 0x0070

But it didn't work. Has anyone viewed VLAN packets through tcpdump before? I couldn't find much help searching the web!
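A hedged sketch: the vlan primitive takes the VLAN id (0x0070 is 112 in decimal), and -e makes tcpdump print the 802.1Q tag. Note that many NICs strip the tags before tcpdump sees them, so capture on the physical parent interface rather than a VLAN subinterface, and check the offload settings:

tcpdump -e -nn -i eth1 vlan 112    # frames tagged with VLAN id 112 (0x0070)
tcpdump -e -nn -i eth1 vlan        # any 802.1Q-tagged frame
ethtool -k eth1 | grep vlan        # rx-vlan-offload can hide the tags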

rewrite engine: How to test if URL on different server is reachable?

Posted: 03 Oct 2021 08:08 PM PDT

I am trying to use the rewrite engine of nginx to redirect requests. On the same machine I have Jetty running on port 8080, which delivers some content.

I want to check whether the content is available on Jetty, and otherwise redirect the request. For that I have added two locations: one is used for the redirect to Jetty, and the other should do the check.

location /nexus/ {
    rewrite ^/nexus/(.*)$ http://A:8080/nexus/$1 permanent;
}

location ~* "^/releases/([a-z]*)/(.*)$" {
    try_files /nexus/content/repositories nexus/content/repositories /nexus/content/;
    # rewrite ^/releases/test/(.*)$ http://A:8080/nexus/content/repositories/Test/$2 break;
}

My idea was to use try_files, but its first parameters must be files or directories. Is there any other method to test whether a URL is accessible?

At the moment I am using nginx to check the reachability of the URL; perhaps I could better use Jetty, which is in front of Nexus. Nginx was just a choice, and if there are better possibilities I am willing to switch. :)


VLAN & WiFi & DHCP with Cisco SG200

Posted: 03 Oct 2021 04:06 PM PDT

I'm trying to configure a small business network with one Cisco SG200-26, a Linux server, and two TP-Link TL-WA801ND access points.

I have set up the APs to have two different SSIDs, Public and Staff, and have configured VLAN tagging with tags 5 & 6 respectively.

On the switch, I have created the VLANs and configured the server port and the AP ports to trunk.

I've configured the server to have the two VLAN networks with IP addresses, eth0.5 & eth0.6. The DHCP server is configured to give addresses on the correct subnets.

So:

eth0   has 192.168.0.0/24
eth0.5 has 192.168.5.0/24
eth0.6 has 192.168.6.0/24

Now, the APs receive management IP addresses via DHCP in 192.168.0.0/24

I see connected devices requesting IP addresses (from server log):

Apr 12 13:08:33 server dhcpd: DHCPDISCOVER from 60:d8:19:xx:xx:xx (pc1) via eth0.5
Apr 12 13:08:33 server dhcpd: DHCPOFFER on 192.168.5.10 to 60:d8:19:xx:xx:xx (pc1) via eth0.5

But I don't see them accepting the address. Suggestions welcome; I'm stumped!
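A DISCOVER/OFFER pair with no following REQUEST usually means the OFFER never makes it back to the client. A hedged way to narrow down where it dies, watching both the tagged trunk and the VLAN subinterface (interface names from the question):

tcpdump -e -nn -i eth0 'port 67 or port 68'     # look for "vlan 5" tags on the trunk
tcpdump -nn -i eth0.5 'port 67 or port 68'

If the OFFER leaves eth0.5 but never reaches the client, the usual suspects are the switch trunk configuration (VLAN 5 missing or untagged on an AP port) or the AP's handling of broadcast frames per SSID.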

Virtual box linux routing

Posted: 03 Oct 2021 11:07 PM PDT

Sorry for the basic question, but I can't figure this one out. I want to set up a small network of Linux servers for testing purposes.

So I have a host server running VirtualBox with the following interface:

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.4  netmask 255.255.255.0  broadcast 192.168.0.255

Then a guest VM with the following networking set up:

eth0      Link encap:Ethernet  HWaddr 08:00:27:EA:15:4F
          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0

eth1      Link encap:Ethernet  HWaddr 08:00:27:E3:E2:BC
          inet addr:172.16.0.1  Bcast:172.16.7.255  Mask:255.255.248.0

And a second guest VM set up as follows:

eth0      Link encap:Ethernet  HWaddr 08:00:27:15:CA:14
          inet addr:172.16.0.2  Bcast:172.16.7.255  Mask:255.255.248.0
          inet6 addr: fe80::a00:27ff:fe15:ca14/64 Scope:Link

I want to be able to route from VM 2 back to the host server, so I created a route telling VM 2 to send traffic for the 192.168.0.0 network via VM 1:

% route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     172.16.0.1      255.255.255.0   UG    0      0        0 eth0
172.16.0.0      0.0.0.0         255.255.248.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0

But I cannot ping through to the 192.168.0.0 network from VM 2. The routing table on VM 1 is as follows:

% route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.16.0.0      0.0.0.0         255.255.248.0   U     0      0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0

The routing table on the host server (running VirtualBox) is:

% route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 wlan0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 wlan0

So I guess my problem is that the host server knows nothing of my VMs' 172.16.0.0/21 network and can't reply.

How can I fix this? Will I have to use iptables and NAT?
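Exactly: the host has no return route, and VM 1 must also actually forward. Two hedged options, assuming VM 1's 192.168.0.2 is reachable from the host (the /21 comes from mask 255.255.248.0):

# on VM 1: enable forwarding between eth0 and eth1
sysctl -w net.ipv4.ip_forward=1

# option A, on the host: add a return route via VM 1
sudo ip route add 172.16.0.0/21 via 192.168.0.2

# option B, on VM 1 only: hide the 172.16 network behind VM 1's eth0 address
iptables -t nat -A POSTROUTING -o eth0 -s 172.16.0.0/21 -j MASQUERADE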

ESENT fails to determine the minimum I/O block size

Posted: 03 Oct 2021 05:03 PM PDT

I'm trying to get RavenDB running in embedded mode on a shared/multi-tenant web host. RavenDB relies on the ESENT storage API. The filesystem on the hosting machines is locked down. The RavenDB Initialize() call results in the following event log entry:

Raven (20604) D:\Path\To\Website\App_Data\RavenDB\Data52e0e402-79d7-4f47-a219-3d1e2e73321c: An attempt to determine the minimum I/O block size for the volume "D:\" containing "D:\Path\To\Website\App_Data\RavenDB\logs\" failed with system error 5 (0x00000005): "Access is denied.".
The operation will fail with error -1032 (0xfffffbf8).

So presumably the executing process needs access to read some volume information, and that is denied because the process is only given permissions to the parts of the volume relevant to it.

Does anyone know what the relevant rights are, and whether they can be omitted somehow?

P.S.: someone with more karma than me please tag this ravendb and esent

Reputable Biometric Fingerprint Scanner & Access/Attendance Solutions?

Posted: 03 Oct 2021 08:51 PM PDT

I have a client, a school, looking to implement a fingerprint or hand scanner to track both employee time and student attendance.

As per earlier conversations on here, this is not for high security and access (i.e., automatic door locks).

From what I've researched, the field is full of unknown companies, many based in China, offering brands that don't seem to have any reputation or case studies. It makes me very nervous to recommend something from a market that seems quite unknown, if not a tad nefarious.

Even the brand the client saw and liked makes me hesitate, as when we called, no one would quote a price and we were told a "dealer" would have to get back to us: http://www.galaxysys.com/index.php?tpl=readers/biometric/biometric

Any personal recommendations or experiences would be appreciated.

Thanks in Advance ~R
