Friday, July 2, 2021

Recent Questions - Server Fault


How do I forward connections to a different IP/port?

Posted: 02 Jul 2021 10:15 AM PDT

I have an AWS EC2 instance which is behind a network (TCP) load balancer. I need the server to forward connections on port 80 to a different IP on port 80 (forward to 172.31.13.121:80). There are no AWS security rules that will interfere.

I have disabled the source/destination check on the instance and configured the server with the following:

sysctl net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 172.31.13.121:80
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

When I curl from this server to the target server I get the expected 200 response. However, when I run curl -s -v 172.31.30.187 (where this is the local IP of the server that should forward the traffic) I get an error:

 Failed to connect to 172.31.30.187 port 80: Connection refused  

This seems to indicate that the forwarding is not working. I had been expecting the forward to work and to receive the HTML response from the target server. What else do I need to do to make this work? I have made no other changes to iptables, as per below:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  anywhere             anywhere             tcp dpt:http to:172.31.13.121:80

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  anywhere             anywhere
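
One detail worth noting (an editorial aside, not part of the original post): a PREROUTING DNAT rule only sees packets arriving from the network, never packets generated on the host itself, so a curl run on the forwarding instance against its own address will not hit that rule and lands on port 80 locally, where nothing listens, hence "Connection refused". A minimal sketch for testing from the box itself, plus two sanity checks for traffic arriving via the load balancer:

# Locally generated packets traverse the nat OUTPUT chain instead of PREROUTING
iptables -t nat -A OUTPUT -p tcp -d 172.31.30.187 --dport 80 -j DNAT --to-destination 172.31.13.121:80

# For traffic coming through the load balancer, confirm forwarding is actually permitted
iptables -L FORWARD -v -n
sysctl net.ipv4.ip_forward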

Group Policy Management about:security_mmc

Posted: 02 Jul 2021 09:58 AM PDT

In Group Policy Management, when I click on an existing GPO, I get an Internet Explorer Enhanced Security Configuration message saying that "about:security_mmc.exe" is not a trusted site.

Error received when selecting a GPO

It happens every time I click a different GPO. I read that I should add this to the trusted sites list, which I did. I confirmed it is in the policy when I run gpresult.

GPResult

But I'm still getting this message. Anything else I need to do so this doesn't keep popping up?

SSH Connection loss when roaming to an AP with a different OUI

Posted: 02 Jul 2021 09:52 AM PDT

I have major issues with a device (running Ubuntu 20.04) in a warehouse with multiple access points. I frequently lose the SSH connection. Investigating the matter, I found out that I lose the connection whenever the device roams to a different AP (same SSID). Digging deeper, I found out that the SSH connection is only killed when the device roams to an AP with a different OUI (let's say YY:YY:YY, indicating a different AP manufacturer or AP model). When I reconnect to the device, I see that it is connected to the AP with OUI YY:YY:YY.

I compared the roaming logs (journalctl -fu wpa_supplicant@wlan0.service) when roaming to a different AP model with the logs when roaming to the same AP model. Besides the BSSIDs they are identical.

We use simple WPA2 PSK (no fancy 802.11r or anything). I always thought that roaming was handled by the Wi-Fi client.

Are you aware of anything that could cause such behavior? Could it be a misconfiguration of wpa_supplicant? Could it be a misconfiguration of the APs?
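
A few client-side diagnostics that may help narrow this down (a sketch; wlan0 is taken from the service name quoted above):

# Follow roaming events as they happen
journalctl -fu wpa_supplicant@wlan0.service | grep -Ei 'CTRL-EVENT-(CONNECTED|DISCONNECTED)|roam'

# Current association and signal, from the client's point of view
wpa_cli -i wlan0 status
wpa_cli -i wlan0 signal_poll

# List the BSSIDs visible for the SSID, to compare the two AP models side by side
wpa_cli -i wlan0 scan && sleep 3 && wpa_cli -i wlan0 scan_results

If the association itself survives the roam but TCP sessions die only on the YY:YY:YY model, the APs' handling of client MAC/ARP state after roaming (rather than wpa_supplicant) becomes the more likely suspect.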

Is Cloudflare now using Let's Encrypt certificates for edge?

Posted: 02 Jul 2021 10:26 AM PDT

I just added a new domain to Cloudflare and the edge certificate is Let's Encrypt R3, as shown in the control panel and by inspecting the certificate in the browser when on the domain's website.

My existing domains still have the regular 1-year certs. I wonder if they'll switch to Let's Encrypt after expiration.

Has anyone else noticed this?
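
A quick way to check which CA issued the edge certificate for any of the domains (a sketch; example.com is a placeholder):

openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates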

How to create a systemd "start all" template unit file from an upstart script with multiple services?

Posted: 02 Jul 2021 09:01 AM PDT

I'm in the process of migrating all custom upstart scripts to systemd. I've come across a script that utilizes multiple services. I cannot figure out the proper syntax to handle this, or whether I need to just create separate .service unit files for each. Is this possible with templating? The systemd unit documentation doesn't give me much information, except for how to create a template file (appending @ to the name) and how to use %i to refer to an instance.

The original upstart dealer-start-all.conf

console log
start on dealer-start
script
    declare -a dealers=("TimeZone" "Timeout" "Inquiry" "Refuse")

    for type in "${dealers[@]}"
    do
        if initctl list | grep "^dealer ($type)"
        then
            stop dealer type=$type
        fi
        start dealer type=$type
        echo "dealer$type started"
    done
end script

The other part of it, dealer.conf, should be pretty cut and dry by using %i in the ExecStart portion, like:

ExecStart=/usr/bin/php -f /path/to/dealer%i.php

console log

instance $type

stop on dealer-stop

script
    sudo -u root php -f /path/to/dealer$type.php
end script

post-stop script
    if [ -z "$UPSTART_STOP_EVENTS" ]
    then
        echo "dealer$type stopped at `date +"%F %T.%N"` Run 'initctl emit dealer-stop' then 'initctl emit dealer-start' on `hostname` to get it running again." | mail -s "dealer$type Stopped" alerts@myemail.com
    else
        echo "dealer$type was manually stopped at `date +"%F %T"`."
    fi
end script

I just don't understand how to translate the array in the first one into a systemd version. Should I break these up into individual unit files? If so, that's not a problem and can be easily done. I'm just unsure whether syntax exists to do what the first one is doing.
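
One common pattern is a template unit plus a target that pulls the instances in; starting or stopping the target then replaces the upstart loop. A sketch, reusing the instance names and the ExecStart line from the question (the restart policy and paths are assumptions):

# /etc/systemd/system/dealer@.service
[Unit]
Description=Dealer %i
PartOf=dealer-all.target

[Service]
Type=simple
ExecStart=/usr/bin/php -f /path/to/dealer%i.php
Restart=on-failure

# /etc/systemd/system/dealer-all.target
[Unit]
Description=All dealer instances
Wants=dealer@TimeZone.service dealer@Timeout.service dealer@Inquiry.service dealer@Refuse.service

[Install]
WantedBy=multi-user.target

After systemctl daemon-reload, systemctl start dealer-all.target brings up all four instances and systemctl stop dealer-all.target stops them (via PartOf=). The mail-on-failure part of the old post-stop script maps more naturally onto OnFailure= pointing at a small notification unit.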

Restricting swap usage for a systemd service in Ubuntu 18.04

Posted: 02 Jul 2021 10:17 AM PDT

I am trying to restrict the swap usage of a process using MemorySwapMax, as mentioned in the docs, on Ubuntu 18.04.

Environment

ubuntu@vrni-platform:/usr/lib/systemd/system$ uname -a
Linux vrni-platform 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

ubuntu@vrni-platform:/usr/lib/systemd/system$ systemctl --version
systemd 237
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid

My systemd unit file looks like below

[Unit]
Description=My service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=support
MemoryMax=2000M
KillMode=process
MemoryAccounting=true
OOMScoreAdjust=1000
MemorySwapMax=0
ExecStart=/usr/bin/java -cp /home/support -XX:NativeMemoryTracking=summary -Xmx10000m MemoryConsumer 100 200 1

I tried to disable swap for this process by specifying 0 for MemorySwapMax. But it seems there was an issue in systemd with that, which was fixed in systemd 239.

So I also tried setting MemorySwapMax=1M. But that also does not seem to restrict the swap usage of this systemd service.

The documentation for MemorySwapMax states this

This setting is supported only if the unified control group hierarchy is used and disables MemoryLimit=.  

Can someone let me know how I can check whether systemd is using the unified control group hierarchy in my setup, or what else could be preventing MemorySwapMax from taking effect?

EDIT

As mentioned in this answer, I can see that cgroup2 is enabled:

ubuntu@vrni-platform:/tmp/debraj$ sudo mount -t cgroup2 none /tmp/debraj
ubuntu@vrni-platform:/tmp/debraj$ ls -l /tmp/debraj/
total 0
-r--r--r--  1 root root 0 Jul  2 17:13 cgroup.controllers
-rw-r--r--  1 root root 0 Jul  2 17:13 cgroup.max.depth
-rw-r--r--  1 root root 0 Jul  2 17:13 cgroup.max.descendants
-rw-r--r--  1 root root 0 Jun 30 14:42 cgroup.procs
-r--r--r--  1 root root 0 Jul  2 17:13 cgroup.stat
-rw-r--r--  1 root root 0 Jul  2 17:13 cgroup.subtree_control
-rw-r--r--  1 root root 0 Jul  2 17:13 cgroup.threads
drwxr-xr-x  2 root root 0 Jun 30 14:42 init.scope
drwxr-xr-x 87 root root 0 Jul  2 15:05 system.slice
drwxr-xr-x  7 root root 0 Jun 30 15:22 user.slice
ubuntu@vrni-platform:/tmp/debraj$ sudo umount /tmp/debraj
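
Being able to mount cgroup2 somewhere only shows the kernel supports it; what matters for MemorySwapMax= is which hierarchy systemd itself runs on, and the systemctl --version output above already reports default-hierarchy=hybrid. A sketch for checking, and (assuming a GRUB-based boot) switching to the unified hierarchy:

# cgroup2fs here means the unified hierarchy is in use; tmpfs means legacy/hybrid
stat -fc %T /sys/fs/cgroup

# To switch (assumption: GRUB): add systemd.unified_cgroup_hierarchy=1 to
# GRUB_CMDLINE_LINUX in /etc/default/grub, run update-grub, then reboot.
# Afterwards the effective per-unit value can be inspected with:
systemctl show -p MemorySwapMax <service-name>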

Does Windows 7 disable SMB when it resumes from sleep?

Posted: 02 Jul 2021 07:32 AM PDT

A Windows 7 (SP1) share that's mounted on my Linux box works fine until the Windows box goes to sleep and then resumes. After that it is inaccessible, with a "cannot access: Host is down" message. mount -a says the share is still mounted.

smbclient -L lists all the shares on the Windows box, followed by "SMB1 disabled -- no workgroup available".

If I unmount and remount, I get:

mount.cifs kernel mount options: ip=xxx.xxx.x.xx,unc=\\xxx.xxx.x.xx\<dir>,vers=2.0,user=Dad,pass=********
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)

dmesg shows the following error:

CIFS: Attempting to mount //xxx.xxx.x.xx/<dir>
CIFS: VFS: cifs_mount failed w/return code = -2

Question: Does Windows 7 disable SMB when it resumes from sleep?
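
A couple of diagnostics that may help separate "SMB service gone after resume" from "share name no longer resolving" (a sketch; the host, user and share are placeholders for the redacted values above):

# Which dialects does the Windows box still answer after resume?
smbclient -L //192.168.0.20 -U Dad -m SMB2
smbclient -L //192.168.0.20 -U Dad -m SMB3

# Try an explicit dialect on remount; error -2 (ENOENT) often points at the share
# path not being found rather than the protocol being refused
sudo mount -t cifs //192.168.0.20/share /mnt/share -o user=Dad,vers=2.1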

magento 2 installation on OCI oracle autonomous linux composer

Posted: 02 Jul 2021 09:44 AM PDT

I reviewed all the pages of the cloud installation guide, but OCI installation is not covered properly.

rpm -q al-config
al-config-1.1-1.el7.noarch

and

uname -a
Linux oci-magento 5.4.17-2102.202.5.el7uek.x86_64 #2 SMP Sat May 22 16:17:06 PDT 2021 x86_64 x86_64 x86_64 GNU/Linux

php -v
PHP 7.4.20 (cli) (built: Jun  3 2021 21:06:07) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies

httpd -v
Server version: Apache/2.4.6 ()
Server built:   Nov 10 2020 12:35:43

yum repolist
Loaded plugins: langpacks, ulninfo
repo id                         repo name                                 status
ol7_UEKR6/x86_64                Latest Unbreakable Enterprise Kernel Rele    323
ol7_addons/x86_64               Oracle Linux 7Server Add ons (x86_64)        499
ol7_developer_php74/x86_64      Oracle Linux 7Server PHP 7.4 Packages for    575
ol7_ksplice                     Ksplice for Oracle Linux 7Server (x86_64) 14,712
ol7_latest/x86_64               Oracle Linux 7Server Latest (x86_64)      22,772
ol7_oci_included/x86_64         Oracle Software for OCI users on Oracle L  1,118
ol7_optional_latest/x86_64      Oracle Linux 7Server Optional Latest (x86 16,318
ol7_software_collections/x86_64 Software Collection Library release 3.0 p 16,586
ol7_x86_64_userspace_ksplice    Ksplice aware userspace packages for Orac    540
repolist: 73,443

https://docs.oracle.com/en/operating-systems/oracle-linux/scl-user/ol-scl-relnotes.html#section_zlg_m3g_dq

php -m
[PHP Modules]
bz2
calendar
Core
ctype
curl
date
exif
fileinfo
filter
ftp
gettext
hash
iconv
libxml
openssl
pcntl
pcre
Phar
readline
Reflection
session
sockets
SPL
standard
tokenizer
zlib

but I cannot install:

yum -y install php74u-pdo php74u-mysqlnd php74u-opcache php74u-xml php74u-gd php74u-devel php74u-mysql php74u-intl php74u-mbstring php74u-bcmath php74u-json php74u-iconv php74u-soap

Thanks, Sayantan.
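
A hedged note on the failing install: the php74u-* package names come from the IUS repository for CentOS/RHEL and are not present in the Oracle Linux repos listed above. The ol7_developer_php74 repo shown in yum repolist ships the extensions under plain php-* names, so something along these lines may be closer (the exact package list is an assumption; check availability first):

yum list available 'php-*' | grep -E 'mysqlnd|opcache|xml|gd|intl|mbstring|bcmath|json|soap'
sudo yum -y install php-pdo php-mysqlnd php-opcache php-xml php-gd php-intl php-mbstring php-bcmath php-json php-soap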

I am trying to deploy Anthos on-prem but have issues with deploying the Seesaw VM

Posted: 02 Jul 2021 09:14 AM PDT

I have deployed the admin workstation OK. I am now trying to deploy the Seesaw VM. Every time I run sudo gkectl create loadbalancer --config admin-cluster.yaml I get the error below:

Failed to parse supplied config file: IP block file required when using static IP mode

I have also included below the contents of my admin-seesaw-ipblock.yaml file

Just wondering if this is the correct syntax

blocks:
  - netmask: "255.255.255.0"
    gateway: "192.168.0.1"
    ips:
    - ip: "192.168.0.10"
      hostname: "seesawadmin"
...

The documentation online for IP block files seems to contradict itself.
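
A hedged guess rather than a confirmed fix: the error reads as though admin-cluster.yaml never points at that IP block file, rather than the file itself being malformed. In the config generations I've seen, static IP mode expects the file to be referenced from the cluster config, roughly like the fragment below (field names should be checked against the reference docs for the gkectl version in use):

# admin-cluster.yaml (fragment; field names are assumptions to verify)
network:
  ipMode:
    type: static
    ipBlockFilePath: "admin-cluster-ipblock.yaml"
loadBalancer:
  kind: Seesaw
  seesaw:
    ipBlockFilePath: "admin-seesaw-ipblock.yaml"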

Virtual Host With Proxypass

Posted: 02 Jul 2021 07:32 AM PDT

When I try to open the URL tissue.example.com, it shows tissue.example.com/tissue/index.php ("The requested URL was not found").

When I type tissue.example.com/index.php it shows the page, but most CSS files and images are missing. The images are requested as tissue.example.com/tissue/image.jpg.

Hopefully someone can help me.

My virtual host config

<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass / http://192.168.1.10/tissue/
    ProxyPassReverse / http://192.168.1.10/tissue/

    ServerName tissue.example.com
</VirtualHost>
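
A likely explanation (hedged, based only on the URLs quoted above): the backend application emits absolute links under /tissue/, so the browser requests /tissue/image.jpg, and ProxyPass / http://192.168.1.10/tissue/ then maps that to /tissue/tissue/image.jpg on the backend, which does not exist. One sketch of a workaround is a more specific mapping for /tissue/ ahead of the catch-all (ProxyPass rules are matched in configuration order, first match wins):

<VirtualHost *:80>
    ServerName tissue.example.com
    ProxyPreserveHost On

    # Absolute /tissue/... links produced by the backend keep working:
    ProxyPass        /tissue/ http://192.168.1.10/tissue/
    ProxyPassReverse /tissue/ http://192.168.1.10/tissue/

    # Everything else is still mapped into the /tissue/ application:
    ProxyPass        /        http://192.168.1.10/tissue/
    ProxyPassReverse /        http://192.168.1.10/tissue/
</VirtualHost>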

Backup my webspace to NAS

Posted: 02 Jul 2021 09:31 AM PDT

First of all: I don't know if this is the correct place to ask. Please tell me where I can go with this before you remove my post; that would be great, thank you!

What I want to do (and what my options are):

I have a Synology NAS which can execute scheduled tasks that are (yes, I'm a noob) "Linux commands". My goal is to back up my whole webspace, or a specific folder on it, to the NAS, but only new or changed files (like it would work with git).

I can't use SSH keys (which would be the best way, I assume) because I can't set them up correctly on my NAS. (It is possible, but I'm missing the knowledge, and even though I would appreciate help with that, it's just too complicated for me. I read a bunch of stuff and it just doesn't work, so I'm trying the way without SSH keys; at least this way I understand a little bit of what's going on.)

So my pseudo code would be something like:

  1. Connect the NAS to the webspace
  2. Go to my specific folder (in my case the FTP login is already limited to only that folder, so we can skip that)
  3. Create a folder on my NAS / or navigate to it (it's already existing)
  4. Clone all the stuff from the webspace folder initially the first time
  5. gzip the whole folder and name the zip by date
  6. When executed again, the script should only check whether any files have changed and update only those, download new ones, and also remove deleted ones (so each of my zips is a fully working webspace without any unnecessary files)
  7. So now my main folder is up to date with the webspace and gets zipped again

What I currently have:

lftp -u MY-FTP-USERNAME,MY-FTP-PASSWORD MY-WEBSPACE-URL 'mirror /test'

tar -zcvf /volume1/BACKUPS/backup-$(date +%Y-%m-%d-%H-%M-%S).tar.gz /volume1/BACKUPS/MY-WEBSPACE-NAME/

rm -rf /volume1/BACKUPS/MY-WEBSPACE-NAME/

Some problems with that:

  1. It downloads the whole webspace every time, because I couldn't get the "only new files" part to work. The file size is not the problem, but there are so many small files that it takes a really long time and blocks the resources of the NAS.
  2. For some reason the archive, when unzipped, contains the whole path /volume1/BACKUPS/MY-WEBSPACE-NAME/ and my files are only in the last folder. I just want the MY-WEBSPACE-NAME folder with my files inside to be zipped.

I would really appreciate your help with this. It doesn't have to be lftp; I also tried wget, but that didn't work either, so anything that works, just go for it. It's been a little while since I last worked on this, but if I remember correctly I can't use git, though I no longer know why.
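
A sketch that addresses both problems, keeping the placeholders from the post: lftp's mirror can transfer only new or changed files and delete removed ones, and tar -C archives relative to a parent directory so the full /volume1/... path does not end up inside the archive.

# Mirror only new/changed files and delete files that disappeared on the webspace
# (the FTP login is already limited to the target folder, per the post)
lftp -u MY-FTP-USERNAME,MY-FTP-PASSWORD MY-WEBSPACE-URL \
     -e 'mirror --only-newer --delete --verbose / /volume1/BACKUPS/MY-WEBSPACE-NAME; quit'

# Archive relative to /volume1/BACKUPS so only MY-WEBSPACE-NAME/... is stored
tar -zcf /volume1/BACKUPS/backup-$(date +%Y-%m-%d-%H-%M-%S).tar.gz \
    -C /volume1/BACKUPS MY-WEBSPACE-NAME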

How is the age for public folder calendar, task and contacts determined?

Posted: 02 Jul 2021 08:42 AM PDT

We are running Exchange 2016 on premises, and upper management wants to set an age limit for public folders. We have a lot of public folders with tasks, contacts and calendar items. How is the age limit (retention age) determined for these types of public folder items? There is a lot of information on how retention age is calculated for mailboxes, but not for public folders.

Creating a LoggingServiceV2Client with custom credentials

Posted: 02 Jul 2021 07:33 AM PDT

I've been working on a C# desktop app that should be able to connect to a GCP account and pull/query the logs there. I want this to be usable on any computer and able to access different accounts, so I don't want to rely on a private key stored on the machine. Instead, I'm going to pass in an access token manually. This means that I cannot use the default LoggingServiceV2Client.Create() route to build the client; instead, I'll need to use the LoggingServiceV2ClientBuilder class. However, I am having a lot of trouble setting the client up correctly, and have yet to get it to return a single successful query.

What I am using so far is:

string cc = "super secret access token";

var cred = GoogleCredential.FromAccessToken(cc);
cred = cred.CreateScoped(LoggingServiceV2Client.DefaultScopes);

var client = new LoggingServiceV2ClientBuilder { ChannelCredentials = cred.ToChannelCredentials() }.Build();

Slow performance for Kubernetes Ingress HTTPS load balancer on GCP

Posted: 02 Jul 2021 10:08 AM PDT

On GCP I set up a WordPress workload on an Autopilot cluster, then exposed WordPress through a TCP service, and finally set up an Ingress HTTPS load balancer. I used a Google-managed certificate for the HTTPS connection.

However, when I connect to the HTTPS IP, the response is very slow and none of the JS and CSS can be loaded.

Then I tried creating another Ingress, but this time using only HTTP. Switching back to plain HTTP makes everything normal, and the site loads successfully in a reasonable time.

How can I fix the HTTPS connection's problem?

Google Cloud Monitoring storage dashboard not showing Object Count or Object size for bucket

Posted: 02 Jul 2021 07:32 AM PDT

I'm trying to see Object Count and Object Size in the Cloud Monitoring dashboard for Cloud Storage. For some buckets, the Object Count and Object Size data are not populating; all I'm seeing is "No data is available for the selected time frame".

I've tried different time frames and have waited 24 hours for data to show up. Other buckets in the same project have object count and object size data.

screen shot of Object Count missing data

Ansible: is it possible to use a variable in template src?

Posted: 02 Jul 2021 08:04 AM PDT

In Ansible we are trying to select different templates based on a variable.

We have following template files like:

templates
    app1.conf.j2
    app2.conf.j2
    app3.conf.j2
tasks
    app.yml

In tasks we need to copy the template file based on the app name. For example, we will set a variable named "instance_name" to either app1, app2 or app3.

Now, based on the variable, we need to copy the app file to /opt/{{ instance_name }}/conf.d/.

We created the Ansible task as follows, but it's not working.

- name: 'Copy {{ instance_name }} file to /opt/conf.d/ Directory'
  template:
    src: "{{ instance_name }}.conf.j2"
    dest: "/opt/{{ instance_name }}/conf.d/"
    owner: root
    group: root
    mode: 0644

When we hard-code "src" to app1.conf.j2 it works for app1.

From this url https://docs.ansible.com/ansible/latest/modules/template_module.html#parameter-src it specifies value can be a relative or an absolute path.

Please let us know whether this is possible with this method. We have around 20 apps; what is the best method to simplify the Ansible playbook so that only the variable needs to be specified?
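
Templating src does work in general, so the task above looks structurally fine. A hedged sketch of the same idea driven by a list, which scales to ~20 apps without hard-coding anything (the app names and per-app conf.d layout are taken from the question; the apps variable name and the destination file name are assumptions):

- name: Copy each app's template into its conf.d directory
  template:
    src: "{{ item }}.conf.j2"
    dest: "/opt/{{ item }}/conf.d/{{ item }}.conf"
    owner: root
    group: root
    mode: 0644
  loop: "{{ apps }}"
  vars:
    apps:
      - app1
      - app2
      - app3

If the single-variable version fails only when src is templated, the exact error message (for example the missing-template path it reports) would show whether instance_name is empty or carries unexpected whitespace at that point.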

Prometheus: scrape interval is 1m, but resolution is still 15s

Posted: 02 Jul 2021 10:08 AM PDT

tl;dr: My scrape interval is 1m, yet I have a 15s resolution. Why?


My prometheus configuration includes a job to scrape kong metrics:

- job_name: kong_blue
  honor_timestamps: true
  scrape_interval: 1m
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  dns_sd_configs:
  - names:
    - ...

Consequently, when on the targets tab in the HTML interface, we can see that kong is scraped roughly every minute, as expected.

However, when I query that data, for example kong_http_status, Prometheus indicates a resolution of 14s. And indeed, the graph also shows one value "tick" every 15 seconds.

kong_http_status query in prometheus, showing a 14s resolution

Why is my resolution 15s?
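
For what it's worth (a hedged aside): the "Res." shown in the expression browser is the query step, which Prometheus picks automatically from the graph's time range; it is independent of the scrape interval, and between scrapes each step simply repeats the most recent sample (up to the staleness window). A sketch of forcing an explicit 1m step through the HTTP API to compare (host and time range are placeholders):

curl -G 'http://localhost:9090/api/v1/query_range' \
     --data-urlencode 'query=kong_http_status' \
     --data-urlencode 'start=2021-07-02T10:00:00Z' \
     --data-urlencode 'end=2021-07-02T11:00:00Z' \
     --data-urlencode 'step=60s'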

How to install the brotli nginx module properly on debian

Posted: 02 Jul 2021 08:04 AM PDT

I'm trying to set up brotli compression on an nginx/1.10.3 server running on Debian 9.5 Stretch / Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u5 (on a Lightsail instance). I used the following commands to try to install it:

$ sudo apt-add-repository -y ppa:hda-me/nginx-stable
$ sudo apt-get update
$ sudo apt-get install brotli nginx nginx-module-brotli

Unfortunately the first one fails:

gpg: keybox '/tmp/tmpwhmks25f/pubring.gpg' created
gpg: /tmp/tmpwhmks25f/trustdb.gpg: trustdb created
gpg: key 1F5EB010C5341279: public key "Launchpad PPA for hda_launchpad" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: no valid OpenPGP data found.

which means the repository isn't usable, so running the last command (after sudo apt-get update) fails because the package can't be found:

Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nginx-module-brotli

I've looked at several docs, but there is only information about installing it on CentOS or Ubuntu.
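
One hedged observation: PPAs target Ubuntu, so adding one on Debian Stretch often fails exactly like this. An alternative sketch, under the assumption that building ngx_brotli as a dynamic module against the same nginx 1.10.3 sources is acceptable (nginx this old predates --with-compat, so the configure arguments must match the packaged binary, and the module path below is an assumption to check against the package layout):

sudo apt-get install build-essential git libpcre3-dev zlib1g-dev libssl-dev
git clone --recursive https://github.com/google/ngx_brotli.git
wget http://nginx.org/download/nginx-1.10.3.tar.gz
tar xf nginx-1.10.3.tar.gz
cd nginx-1.10.3
# Reuse the distro package's configure arguments so the dynamic module matches the binary
./configure --add-dynamic-module=../ngx_brotli $(nginx -V 2>&1 | sed -n 's/^configure arguments: //p')
make modules
sudo cp objs/ngx_http_brotli_*.so /usr/share/nginx/modules/   # path is an assumption

After that, load_module directives for the two .so files plus brotli on; in the http block would enable it. Alternatively, a newer nginx from the nginx.org stretch repository ships with --with-compat, which makes out-of-tree module builds less fragile.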

Azure Log Analytics 'where' operator: Failed to resolve table or column expression named 'SecurityEvent'

Posted: 02 Jul 2021 08:42 AM PDT

Whenever I attempt to run the following Log Analytic query in Azure Log Analytics I get the following error:

'where' operator: Failed to resolve table or column expression named 'SecurityEvent'

I think it's because I need to enable SecurityEvent collection in Log Analytics, but I'm not sure. I was wondering if someone could provide guidance.

SecurityEvent
| where AccountType == "User" and EventID == 4625 and TimeGenerated > ago(6h)
| summarize IPCount = dcount(IpAddress), makeset(IpAddress) by Account
| where IPCount > 5
| sort by IPCount desc

Unable to access internet on pod in private GKE cluster

Posted: 02 Jul 2021 09:49 AM PDT

I'm currently unable to access/ping/connect to any service outside of Google from my private Kubernetes cluster. The pods are running Alpine linux.

Routing Tables

/sleepez/api # ip route show table all
default via 10.52.1.1 dev eth0
10.52.1.0/24 dev eth0 scope link  src 10.52.1.4
broadcast 10.52.1.0 dev eth0 table local scope link  src 10.52.1.4
local 10.52.1.4 dev eth0 table local scope host  src 10.52.1.4
broadcast 10.52.1.255 dev eth0 table local scope link  src 10.52.1.4
broadcast 127.0.0.0 dev lo table local scope link  src 127.0.0.1
local 127.0.0.0/8 dev lo table local scope host  src 127.0.0.1
local 127.0.0.1 dev lo table local scope host  src 127.0.0.1
broadcast 127.255.255.255 dev lo table local scope link  src 127.0.0.1
local ::1 dev lo  metric 0
local fe80::ac29:afff:fea1:9357 dev lo  metric 0
fe80::/64 dev eth0  metric 256
ff00::/8 dev eth0  metric 256
unreachable default dev lo  metric -1  error -101

The pod certainly has an assigned IP and has no problem connecting to its gateway:

PS C:\...\> kubectl get pods -o wide -n si-dev
NAME                              READY     STATUS    RESTARTS   AGE       IP          NODE
sleep-intel-api-79bf57bd9-c4l8d   1/1       Running   0          52m       10.52.1.4   gke-sez-production-default-pool-74b75ebc-6787

ip addr output

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1460 qdisc noqueue state UP
    link/ether 0a:58:0a:34:01:04 brd ff:ff:ff:ff:ff:ff
    inet 10.52.1.4/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ac29:afff:fea1:9357/64 scope link
       valid_lft forever preferred_lft forever

Pinging Gateway Works

/sleepez/api # ping 10.52.1.1
PING 10.52.1.1 (10.52.1.1): 56 data bytes
64 bytes from 10.52.1.1: seq=0 ttl=64 time=0.111 ms
64 bytes from 10.52.1.1: seq=1 ttl=64 time=0.148 ms
64 bytes from 10.52.1.1: seq=2 ttl=64 time=0.137 ms
^C
--- 10.52.1.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.111/0.132/0.148 ms

Pinging 1.1.1.1 Fails

/sleepez/api # ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 0 packets received, 100% packet loss

System Services Status

PS C:\...\> kubectl get deploy -n kube-system
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
event-exporter-v0.1.7   1         1         1            1           18m
heapster-v1.4.3         1         1         1            1           18m
kube-dns                2         2         2            2           18m
kube-dns-autoscaler     1         1         1            1           18m
l7-default-backend      1         1         1            1           18m
tiller-deploy           1         1         1            1           14m

Traceroute (Google Internal)

/sleepez/api # traceroute -In 74.125.69.105
 1  10.52.1.1  0.007 ms  0.006 ms  0.006 ms
 2  *  *  *
 3  *  *  *
 4  *  *

Traceroute (External)

traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 46 byte packets
 1  10.52.1.1  0.009 ms  0.003 ms  0.004 ms
 2  *  *  *
 3  *  *  *
 [continues...]
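
A hedged note on the usual cause: in a private GKE cluster the nodes have no external IP addresses, so pods get no route to the internet unless the VPC provides NAT. A sketch of adding Cloud NAT for the cluster's network and region (router/config names and the region are placeholders):

gcloud compute routers create nat-router \
    --network=default --region=us-central1

gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges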

Proper way to override Mysql my.cnf on CentOS/RHEL?

Posted: 02 Jul 2021 10:03 AM PDT

Context: I'm porting an open-source server application (and writing the associated documentation) from Debian/Ubuntu to CentOS/RHEL.

For the software to run correctly, I need to add a dozen specific parameters to the MySQL configuration (for example, increasing max_allowed_packet).

From a Debian point of view, I know I can override MySQL's my.cnf by adding a file to /etc/mysql.d, say /etc/mysql.d/my-software.cnf.

My question is: how do I do the same correctly on CentOS/RHEL? (See the sketch after the notes below.)

Other infos:

  • I know where mysqld looks for its configuration file thanks to https://dev.mysql.com/doc/refman/5.7/en/option-files.html. But, for CentOS, I don't understand:
    • how NOT to directly edit /etc/my.cnf (that may not be package-update-proof)
    • where to add my specific Mysql parameters
  • Reading the CentOS Mysql init script (/etc/init.d/mysql), I've seen that a /etc/sysconfig/mysqld is sourced, but I don't know how to add configuration parameters.
  • I've searched for combinations of override / my.cnf / centos on ServerFault, StackOverflow and also DBA.StackExchange, but found nothing relevant.
  • I make all the tests within a "centos:6" Docker container
  • the software is Asqatasun https://github.com/Asqatasun/Asqatasun
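
A sketch of the equivalent drop-in approach, under the assumption that the MySQL/MariaDB build on CentOS ships an !includedir directive (the grep confirms that before relying on it):

# Does the distro config already include a drop-in directory?
grep -n 'includedir' /etc/my.cnf

# If it points at /etc/my.cnf.d (typical for MariaDB on CentOS/RHEL), add a file there
# instead of editing /etc/my.cnf, which keeps the change package-update-proof:
cat <<'EOF' | sudo tee /etc/my.cnf.d/my-software.cnf
[mysqld]
max_allowed_packet = 64M
EOF

As the question already notes, /etc/sysconfig/mysqld is sourced by the init script for environment variables; it is not parsed as my.cnf settings.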

Error during openssl s_client connection, SSL alert number 48

Posted: 02 Jul 2021 08:11 AM PDT

I am attempting to connect to a third party via cURL/PHP mainly, but since it doesn't work, I am resorting to more verbose tools to diagnose the problem.

If I try the following, on Ubuntu 14.04 LTS:

openssl s_client -showcerts -connect secure.thirdpartyhost.com:443 -cert production_client.pem -key production_key.pem -CApath /etc/ssl/certs  

It fails with this error:

CONNECTED(00000003)
depth=2 C = US, O = "Entrust, Inc.", OU = See www.entrust.net/legal-terms, OU = "(c) 2009 Entrust, Inc. - for authorized use only", CN = Entrust Root Certification Authority - G2
verify return:1
depth=1 C = US, O = "Entrust, Inc.", OU = See www.entrust.net/legal-terms, OU = "(c) 2012 Entrust, Inc. - for authorized use only", CN = Entrust Certification Authority - L1K
verify return:1
depth=0 C = CA, ST = New York, L = New York, O = ThirdParty, CN = *.thirdpartyhost.com
verify return:1
139647498331808:error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:s3_pkt.c:1262:SSL alert number 48
139647498331808:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:

Is that their server signaling the error? Is the CA error occurring during their verification of my client certificate?

Thanks for your help. A mere developer, I appreciate the help of those wiser!
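
For context (a hedged note): alert 48 (unknown_ca) arrives from the remote end, so the local verification above succeeded and it is the third party's server that cannot build a trust chain for the client certificate it received. A small sketch for confirming which issuer the client certificate expects and verifying it against whatever CA bundle the third party published (ca_bundle.pem is a placeholder):

# Who issued the client certificate the third party needs to trust?
openssl x509 -in production_client.pem -noout -subject -issuer -dates

# Verify the client certificate against the third party's published chain
openssl verify -CAfile ca_bundle.pem production_client.pem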

Proper way to deal with corrupt XFS filesystems

Posted: 02 Jul 2021 07:32 AM PDT

I recently had an XFS filesystem become corrupt due to a power failure (CentOS 7 system). The system wouldn't boot properly.

I booted from a rescue CD and tried xfs_repair; it told me to mount the partition to deal with the log.

I mounted the partition and did an ls to verify that, yes, the data appears to be there. I unmounted the partition, tried xfs_repair again, and got the same message.

What am I supposed to do in this situation? Is there something wrong with my rescue cd (System Rescue CD, version 4.7.1)? Is there some other procedure I should have used?

I ended up simply restoring the system from backups (it was quick and easy in this case), but I'd like to know what to do in the future.
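
For reference, the usual sequence looks roughly like the sketch below (/dev/sdXN is a placeholder for the affected partition); the -L step is a last resort because zeroing the log discards metadata updates that were in flight at the power failure:

mount /dev/sdXN /mnt && umount /mnt   # a clean mount/unmount replays the XFS log
xfs_repair /dev/sdXN                  # should now run without complaining about the log

# Only if the log can never be replayed (e.g. the mount itself fails):
xfs_repair -L /dev/sdXN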

Exposing service in Kubernetes

Posted: 02 Jul 2021 08:42 AM PDT

I'm new to Kubernetes and have some doubts. I have set up a Kubernetes cluster that consists of one master/node and one node. I have deployed a very simple NodeJS-based app, using a Deployment with 2 replicas. Then I exposed it as a service with kubectl expose deployment my-app --port=80.

Now, my services looks like:

root@sw-kubernetes01:~# kubectl get services
NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
my-app           192.168.100.167   <none>        80/TCP    10m
kubernetes       192.168.100.1     <none>        443/TCP   1h

Am I supposed to access my app by navigating to http://192.168.100.167:? I'm getting a timeout error. Otherwise, how can I get an external IP to access the service externally?

I know that if I declare the service as type: NodePort I can access my app using the nodes' IPs, but doesn't a way exist to auto-balance the load between pods?
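
A hedged note: that CLUSTER-IP is only reachable from inside the cluster (nodes and pods), which would explain the timeout from outside, and a NodePort service still load-balances across all matching pods via kube-proxy regardless of which node is hit. A sketch of the NodePort variant (the selector label is an assumption about how the deployment labels its pods):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app        # assumption: the deployment's pod template uses this label
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080     # reachable on every node as http://<node-ip>:30080

Outside a cloud provider, type: LoadBalancer only gets an external IP if something provisions one; otherwise NodePort, optionally fronted by an external reverse proxy, is the usual route.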

Ubuntu 14.04 blk_update_request I/O error on same sector across all drives with ZFS

Posted: 02 Jul 2021 10:03 AM PDT

I'm running Ubuntu 14.04 with ZFS on Linux (ZoL) version v0.6.5.4:

root@box ~# dmesg | egrep "SPL|ZFS"
[   34.430404] SPL: Loaded module v0.6.5.4-1~trusty
[   34.475743] ZFS: Loaded module v0.6.5.4-1~trusty, ZFS pool version 5000, ZFS filesystem version 5

ZFS is configured in raidz2 across 6x 2TB Seagate SpinPoint M9T 2.5" drives, with a read cache, deduplication and compression enabled:

root@box ~# zpool status -v
  pool: bigpool
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        bigpool                                        ONLINE       0     0     0
          raidz2-0                                     ONLINE       0     0     0
            ata-ST2000LM003_HN-M201RAD_S37<redactedid> ONLINE       0     0     0
            ata-ST2000LM003_HN-M201RAD_S37<redactedid> ONLINE       0     0     0
            ata-ST2000LM003_HN-M201RAD_S37<redactedid> ONLINE       0     0     0
            ata-ST2000LM003_HN-M201RAD_S37<redactedid> ONLINE       0     0     0
            ata-ST2000LM003_HN-M201RAD_S37<redactedid> ONLINE       0     0     0
            ata-ST2000LM003_HN-M201RAD_S34<redactedid> ONLINE       0     0     0
        cache
          sda3                                         ONLINE       0     0     0

Every few days, the box will lock up, and I'll get errors such as:

blk_update_request: I/O Error, dev sdh, sector 764218200
blk_update_request: I/O Error, dev sdf, sector 764218200
blk_update_request: I/O Error, dev sde, sector 764218200
blk_update_request: I/O Error, dev sdd, sector 764218200
blk_update_request: I/O Error, dev sdc, sector 764218432
blk_update_request: I/O Error, dev sdg, sector 764218200

smartctl shows that the disks are not recording any SMART errors, and they're all fairly new disks. I find it odd, too, that they're all failing on the same sector (with the exception of sdc). I was able to grab a screenshot of the terminal (I can't SSH in once the errors start):

console errors

Perhaps this is a controller failing, or a bug related to ZFS?
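
Simultaneous errors on the same sector across six independent drives do point away from the platters themselves. Some diagnostics worth collecting before the next lockup (a sketch; run the smartctl line per drive):

# Full SMART detail (error log, SATA phy event counters), not just the overall health flag
smartctl -x /dev/sdd

# ZFS's own record of the I/O errors
zpool events -v

# Controller / link resets around the time of the errors
dmesg | grep -iE 'ata[0-9]+|link reset|hard resetting|frozen'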

SSL on IIS8.5 - Working with named URL, but localhost results in ERR_CERT_COMMON_NAME_INVALID

Posted: 02 Jul 2021 07:46 AM PDT

I have IIS 8.5 running on Windows Server 2012 R2. I have a valid SSL certificate registered to the server's name foo.domain.com.


I have configured my website's bindings to use HTTPS with this certificate.


I am able to talk successfully to the website when talking to https://foo.domain.com, but I am unable to talk successfully when using https://localhost or https://127.0.0.1.


What do I need to do to be able to communicate successfully over localhost?

I have tried:

  • Creating a self-signed certificate and attempting to use it, but I can't use two certificates for the same website. Using a self-signed certificate for localhost disables my ability to communicate via foo.domain.com.

I have not:

  • Tried applying intermediate COMODO certificates manually through mmc.exe / certmgr.msc. Since my current setup works externally, I do not believe this is the issue.
  • Modified hosts file to redirect localhost to foo.domain.com

How can I print to an alternate LPD port from Windows Server 2012?

Posted: 02 Jul 2021 09:06 AM PDT

I have a Mac set up using LPD to a remote printer/port and it works great. I'm trying to add the same printer on a Windows server and it fails.

I've tried a Standard TCP/IP port, specifying the IP as 9.3.3.3:1234, and also an LPR port. With Standard TCP/IP I've also removed the port from the address and configured it as raw with the alternate port number.

I've got windows firewall set to allow anything outgoing to port 1234.

What am I doing wrong?

Nginx - how can I send 503 for a particular upstream?

Posted: 02 Jul 2021 10:07 AM PDT

I am using nginx to route traffic to the proper application servers based on a cookie value, so one user always lands on a particular upstream server.

Now I have multiple such upstream servers. I want to send 503 for an upstream server when I am taking it down for maintenance. What is the simplest way to do it?

If the application server has crashed, we should get the normal "could not connect to backend" error. So I should get a 503 for an upstream only when I am taking it down intentionally.
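
One simple sketch (server names and ports are placeholders): during planned maintenance, swap the real backend in that upstream for a local stub that answers 503; a crash still produces the normal connection error because nothing in the config changes for that case.

upstream app_b {
    # server 10.0.0.2:8080;      # real backend, commented out only during maintenance
    server 127.0.0.1:8503;       # maintenance stub
}

server {
    listen 127.0.0.1:8503;
    return 503;
}

Reloading nginx after editing the upstream applies the change without affecting the other upstreams.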

Does JBoss (or tomcat) log 503 errors to the access log

Posted: 02 Jul 2021 09:06 AM PDT

I've enabled the access log in JBoss. I see that it logs 404s, but will it log 503 errors as well?

Thanks!

Is Macports as good or as bad as I get the impression? [closed]

Posted: 02 Jul 2021 10:07 AM PDT

Among Linuxes, keeping up-to-date with MacPorts struck me as being most like Gentoo (arguably the least Mac-like entry on the shortlist of major Linux distributions). But after further experience it seems not to be exactly like Gentoo: with Gentoo, things break regularly, but you can often find a solution by Googling salient portions of an error message, and unlike computer situations in general it makes quite rational sense to try again 24 or 48 hours later if something is broken. MacPorts in this regard seems only like Gentoo in that you can get breakage by trying to keep your system up-to-date as intended.

Earlier breakage had me stumped about how to install Django; now I have Django installed, but it's breaking on upgrading glib1; the last substantive change on the bug (http://trac.macports.org/ticket/21413) was about a year ago.

Is MacPorts really "Breaks like Gentoo but you can't fix it like Gentoo", or does it say "32 bit? Legacy! Ewww!" or something else? I'd like to know what a sane basic perspective is, and what I should and shouldn't expect of MacPorts. (Or if I've answered my own question in what I've said above.)
