Tuesday, June 8, 2021

Recent Questions - Server Fault


Tomcat Web Application Manager: Log files for deployed *.war files

Posted: 08 Jun 2021 04:05 AM PDT

I am auditing a Tomcat server for the first time and could not find a sufficient solution for my problem, so I guess I am on the right platform for my question. The client uses the Tomcat Web Application Manager for deploying *.war files. For sampling reasons I need a reliable list of all deployed WAR files in a specific time period. Is there a log which contains all deployments, or is there a log file for each deployment?

I am really thankful for your advice in advance. Oliver
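
A minimal sketch of where that evidence usually lives, assuming a stock Tomcat logging configuration (paths vary per installation):

# HostConfig logs each deployment, e.g. "Deploying web application archive [...]"
grep -h "Deploying web application" $CATALINA_BASE/logs/catalina.*.log $CATALINA_BASE/logs/catalina.out
# Manager requests (PUT/GET against /manager) appear in the access log if the AccessLogValve is enabled
grep "/manager" $CATALINA_BASE/logs/localhost_access_log.*.txt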

Logrotate Service Failure

Posted: 08 Jun 2021 02:35 AM PDT

My logrotate service fails. It complains about a duplicate entry for modsecurity.

● logrotate.service - Rotate log files
     Loaded: loaded (/lib/systemd/system/logrotate.service; static; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2021-06-08 14:22:07 CST; 2h 54min ago
       Docs: man:logrotate(8)
             man:logrotate.conf(5)
   Main PID: 15370 (code=exited, status=1/FAILURE)

Jun 08 14:22:07 server1.example.com systemd[1]: Starting Rotate log files...
Jun 08 14:22:07 server1.example.com logrotate[15370]: error: modsecurity:1 duplicate log entry for /var/log/apache2/modsec_audit.log
Jun 08 14:22:07 server1.example.com logrotate[15370]: error: found error in file modsecurity, skipping
Jun 08 14:22:07 server1.example.com systemd[1]: logrotate.service: Main process exited, code=exited, status=1/FAILURE
Jun 08 14:22:07 server1.example.com systemd[1]: logrotate.service: Failed with result 'exit-code'.
Jun 08 14:22:07 server1.example.com systemd[1]: Failed to start Rotate log files.

However, /etc/logrotate.d/modsecurity doesn't contain any duplicates:

/var/log/apache2/modsec_audit.log {
        rotate 14
        daily
        missingok
        compress
        delaycompress
        notifempty
}

Any thoughts?
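
The "duplicate log entry" error means the same path is covered by some other logrotate configuration, not necessarily twice in the modsecurity file itself. A quick hedged check for where the second definition lives:

# Find every config that mentions the path directly
grep -rn "modsec_audit" /etc/logrotate.conf /etc/logrotate.d/
# Wildcard patterns such as /var/log/apache2/*.log in the apache2 config can also match it
grep -rn "apache2" /etc/logrotate.d/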

Shouldn't dhclient confirm its lease after a carrier loss?

Posted: 08 Jun 2021 04:12 AM PDT

I'm investigating an issue where an Ubuntu server acting as the Internet gateway for a branch office stopped working after being switched to a new Internet connection.

The IP address on the external interface is acquired from the ISP router via DHCP. When the interface was disconnected from the old line and connected to the new one, there was no Internet connectivity. Checking the log, I found that the Ethernet link on the interface had come up, but the machine was still trying to use the IP address it had been using on the old line. There weren't any dhclient log entries after the Link is Up message.

The way I understand DHCP, if the link on an Ethernet interface goes down and comes up again, the machine should assume that it might have been moved to a different network segment, and consequently try to reacquire its IP address before using it again.

Am I wrong?
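
For what it's worth, ISC dhclient does not usually watch for carrier changes by itself; reacting to a link flap is normally the job of whatever network manager drives it. A hedged sketch of forcing a fresh lease manually, assuming eth0 is the external interface (hypothetical name):

dhclient -r eth0   # send a DHCPRELEASE for the stale lease and stop the running client
dhclient -v eth0   # start a new client; -v logs the DISCOVER/REQUEST exchange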

Container killed by oom-killer during docker update

Posted: 08 Jun 2021 02:07 AM PDT

I use Docker to host several web services. They are mainly composed of Apache/Nginx, PHP and a MySQL database.

Currently, I have 38 containers. If I use docker stats, I see a total of ~4GiB (/ 15.63GiB) used.

The free command shows the following and confirms the RAM used by the Docker containers:

              total        used        free      shared  buff/cache   available
Mem:           15Gi       4,0Gi       226Mi       447Mi        11Gi        10Gi
Swap:         7,8Gi       622Mi       7,2Gi

During the last maintenance, I noticed an available update from 20.10.6 to 20.10.7 and updated. After the update, ~50% of the containers did not restart, exiting with error Exited 137 (oom-kill). The containers have the "unless-stopped" restart policy.

During the update the available RAM was OK, and the graph clearly shows when the containers were killed:

RAM space available

In a similar situation (a Docker stop/start, or a reboot), the issue is not present.

I would like to know why these containers were killed by the oom-killer. Should I watch "free" RAM instead of "available" RAM? How can I avoid this situation in the future?
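
A hedged diagnostic sketch, assuming a systemd host; the container name is hypothetical:

dmesg -T | grep -i oom                                  # did the kernel OOM killer fire, and against which cgroup?
journalctl -k --since "-2 hours" | grep -i "out of memory"
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' mycontainer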

dracut-install: ERROR: installing 'sgdisk' dracut: FAILED

Posted: 08 Jun 2021 02:01 AM PDT

When I do a routine CentOS 8 update with the command below:

dnf update  

I get the error below. I have no idea how to fix this; any guidance will be very valuable.

  Running scriptlet: kernel-core-4.18.0-305.3.1.el8.x86_64                    528/529
dracut-install: ERROR: installing 'sgdisk'
dracut: FAILED: /usr/lib/dracut/dracut-install -D /var/tmp/dracut.RTu0To/initramfs -a sgdisk
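
Not from the post, but on CentOS/RHEL the sgdisk binary ships in the gdisk package, so one hedged guess is that dracut simply cannot find the binary it was asked to install:

dnf install gdisk
# Rebuild the initramfs the failed scriptlet was generating
dracut -f --kver 4.18.0-305.3.1.el8.x86_64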

Strange hex-code in Windows root certificate prevents trust in Thunderbird

Posted: 08 Jun 2021 01:24 AM PDT

We are using a Windows CA for S/MIME certificates, and in order for this to work with external recipients, we routinely exchange signed mails to establish trust, or sometimes transmit our root CA certificate, in particular when multiple internal users are needed. Now I face a problem with an external recipient not being able to establish this trust in a straightforward manner (they are using Thunderbird). The cause seems to be a strange issue I can observe with the certificates:

  • The Certificate for "user@domain" is Issued By "Name Of Our Internal Root CA"
  • If I follow the (internally working!) certificate chain, the name of the signing CA cert is shown as "Name Of Our Internal Root CA", but looking at the details, it says Issued By and Issued For "CN = Name Of Our Internal Root CA d1007899-9f27-4a7b-95e3-6d1a7f985a37, DC = ...", i.e., with some weird hex-code added to the common name field.

Since they are a long-term contact, they already had an older root CA cert of ours in their trust store. That one seems to have had the "correct" name in it and worked, but is of course long expired. On the other hand, that difference between names seems to be what prevents correct installation of our current root CA cert ...

Q: How can it be that our current user certs show this difference between the issuer specified in the cert itself and the name in the actual CA cert (and internally, no system complains about this difference)? What can I do in order to correct this problem (preferably for all existing user certs, but perhaps only for all newly created certs)?
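
To see the mismatch precisely, it may help to compare the Issuer of a user certificate against the Subject of the CA certificate, which must match byte for byte for chain building; the file names below are hypothetical:

openssl x509 -in user.pem -noout -issuer        # issuer recorded in the end-entity cert
openssl x509 -in ca.pem  -noout -subject        # subject of the CA cert it should chain to
# The key identifier extensions show how the chain actually resolved
openssl x509 -in user.pem -noout -text | grep -A1 "Authority Key Identifier"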

Setup a reverse proxy and web server at the same time with Nginx

Posted: 08 Jun 2021 01:14 AM PDT

I am trying to configure Nginx to perform the reverse proxy and web server functions at the same time.

Different services with Nginx

From what I understood from the documentation, it could be something similar to this:

server {
    listen       80;
    server_name  www.example1.com example1.com;
    location / {
        proxy_pass "http://localhost:8080" ;
    }
}

server {
    listen       80;
    server_name  *.example2.com;
    root /var/www/example2/public_html/

    location / {
        try_files $uri $uri/ =4040;
    }
}

However, when I try to access example1, it is sending me to the root of example2.

What could I be doing wrong?

Thank you in advance.
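
Two hedged observations, assumptions rather than a confirmed diagnosis: the root directive in the second server block is missing its terminating semicolon, which makes the configuration invalid, and when server_name matches no request nginx falls back to the first (or explicitly marked) default server on that port. Running nginx -t will report the syntax error; marking the intended default makes the fallback explicit:

# run "nginx -t" first; it reports syntax errors such as the missing ';'
server {
    listen       80 default_server;
    server_name  www.example1.com example1.com;
    location / {
        proxy_pass http://localhost:8080;
    }
}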

Timezone for Google analytics tables and update timing for yesterday table log

Posted: 08 Jun 2021 01:09 AM PDT

We are working to utilize the GA log in GCP, but we are not sure about the time zone of the GA log tables, or when/at what time the table logs are automatically updated every day. Please share any ideas about this. Thank you!

Optimise WSUS for work-from-home users

Posted: 08 Jun 2021 01:08 AM PDT

Good day

We self-host our infrastructure. Most users work from home on poor Internet connections, using an SSL VPN to connect to the main office to access their network resources; WSUS is one of those.

Besides MS E5 and third-party options, is there a well-documented method of getting over 150 users up to date via WSUS remotely?

I have approved Security and Critical updates, but the Dashboard still shows computers that either have not checked in for the past 30 days or still need updates.

We do not have the resources to add a secondary WSUS server. I did read some similar posts; they are close to my situation, but we need a practical way of ensuring reliable installation of the updates we approve in WSUS.

Would a GPO be sufficient? Can I accomplish this with 1 WSUS and no additional servers?
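
Not from the post: pointing clients at WSUS is normally done with a GPO (Computer Configuration > Administrative Templates > Windows Components > Windows Update), which ultimately sets these registry values on each client; the server URL and port below are hypothetical:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d "http://wsus.example.com:8530" /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d "http://wsus.example.com:8530" /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f

So, hedged: a single WSUS server plus a GPO is generally sufficient, as long as the VPN clients can reach it while connected.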

451 4.3.0 Temporary lookup failure - iRedMail - Ubuntu Server

Posted: 08 Jun 2021 01:04 AM PDT

I'm new to this area and I configured an Ubuntu server with iRedMail. It receives mail from gmail.com, but when trying to send emails using PHPMailer (SMTP) I get the following error. I tried to figure out what is wrong by referring to Google resources but could not find an answer. Please help.

SMTP ERROR: RCPT TO command failed: 451 4.3.0 ***@sample.co: Temporary lookup failure

Jun  8 07:57:58 mail postfix/postqueue[29693]: warning: /etc/postfix/main.cf, line 295: overriding earlier entry: sender_dependent_relayhost_maps=hash:/etc/postfix/sen>
Jun  8 07:58:08 mail postfix/smtpd[29707]: warning: /etc/postfix/main.cf, line 295: overriding earlier entry: sender_dependent_relayhost_maps=hash:/etc/postfix/sender_>
Jun  8 07:58:08 mail postfix/submission/smtpd[29707]: warning: database /etc/postfix/aliases.db is older than source file /etc/postfix/aliases
Jun  8 07:58:08 mail postfix/submission/smtpd[29707]: error: open database /etc/postfix/sender_canonical.db: No such file or directory
Jun  8 07:58:08 mail postfix/submission/smtpd[29707]: connect from hera.webserver.lk[138.128.174.10]
Jun  8 07:58:08 mail postfix/anvil[29708]: warning: /etc/postfix/main.cf, line 295: overriding earlier entry: sender_dependent_relayhost_maps=hash:/etc/postfix/sender_>
Jun  8 07:58:08 mail postfix/submission/smtpd[29707]: Anonymous TLS connection established from hera.webserver.lk[138.128.174.10]: TLSv1.3 with cipher TLS_AES_256_GCM_>
Jun  8 07:58:08 mail postfix/submission/smtpd[29707]: warning: hash:/etc/postfix/sender_canonical is unavailable. open database /etc/postfix/sender_canonical.db: No su>
Jun  8 07:58:08 mail postfix/submission/smtpd[29707]: warning: hash:/etc/postfix/sender_canonical lookup error for "***@sample.co"
Jun  8 07:58:08 mail postfix/submission/smtpd[29707]: NOQUEUE: reject: RCPT from hera.webserver.lk[138.128.174.10]: 451 4.3.0 <***@sample.co>: Temporary lookup>
Jun  8 07:58:08 mail postfix/postqueue[29711]: warning: /etc/postfix/main.cf, line 295: overriding earlier entry: sender_dependent_relayhost_maps=hash:/etc/postfix/sen>
Jun  8 07:58:08 mail postfix/submission/smtpd[29707]: disconnect from hera.webserver.lk[138.128.174.10] ehlo=2 starttls=1 auth=1 mail=1 rcpt=0/1 quit=1 commands=6/7
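
The log itself points at the cause: the hash map for sender_canonical was never compiled into a .db file. A hedged sketch of the usual fix:

postmap hash:/etc/postfix/sender_canonical   # builds /etc/postfix/sender_canonical.db
newaliases                                   # the aliases database is also reported stale
systemctl reload postfix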

How can I point my domain to Google for a static website?

Posted: 08 Jun 2021 01:03 AM PDT

How can I point my domain to Google for a static website? I have tried using the A record that I found under "Load Balancing", but it still doesn't work.
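
A quick hedged check that the record really resolves to the load balancer's address (domain and name server below are placeholders):

dig +short example.com A                        # what resolvers currently return
dig +short example.com A @ns1.dns-provider.com  # ask the authoritative server directly, to rule out propagation delay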

Pinging only names with zone name - BIND DNS Server

Posted: 08 Jun 2021 01:01 AM PDT

I am trying to configure the BIND DNS server on pfSense. I have the record 'pfsensetest' in the zone 'test.net'. I can ping 'pfsensetest.test.net', but I can't ping 'pfsensetest'.

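Resolving a bare host name is decided by the client's DNS suffix search list, not by BIND. A minimal sketch for a Linux client, with a hypothetical server address (on Windows, the equivalent is the connection-specific DNS suffix, often delivered via DHCP option 15):

# /etc/resolv.conf on the client
search test.net
nameserver 192.0.2.53   # address of the pfSense/BIND server (placeholder)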

velero - Storage location ownership \ additional permissions (aws s3)

Posted: 08 Jun 2021 12:24 AM PDT

We configure our Velero backups (on account A) to store the backups in an S3 bucket on a second, trusted account (account B). The backups are owned by the Velero account (account A), which is the only account that has permissions on the backups.

As of version 1.6, it is possible to use secrets to provide unique credentials for the storage provider. Is it possible to use a different ARN role instead of a credentials file?

Alternatively (and preferably), is it possible to configure Velero to automatically add permissions or change the ownership of the backups to the bucket owner (account B)?

Thanks for the help :)

Linux partition sizes for a locally-deployed (via docker) django web application with postgresql database

Posted: 08 Jun 2021 12:23 AM PDT

I plan to deploy, via Docker, a Django web application with a PostgreSQL database on an Ubuntu server within a local network. I believe that Docker volumes live in the /var/lib/docker/volumes directory of the host computer. My question is: I suppose this means that when I prepare the host machine, I will have to allot a large portion of the disk to the root partition? Are there ideal sizes (e.g. in terms of percentage of disk space) for the partitions of the host machine?

This may sound trivial, but I would like to get confirmation before I start working on the deployment. Thanks so much.
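
An alternative to oversizing the root partition, offered as a hedged sketch: Docker's entire state (images, containers, volumes) can be moved to a dedicated partition with the data-root option; the path below is hypothetical:

# /etc/docker/daemon.json
{
  "data-root": "/srv/docker"
}

Restart the daemon afterwards (systemctl restart docker); any existing data under /var/lib/docker must be copied over first.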

Attach network disk as direct

Posted: 08 Jun 2021 12:03 AM PDT

Is there a way to attach a network disk directly, without creating a VHD etc.? I want to connect a drive directly from one computer to another over the network, like iSCSI but without creating a VHD.

Not acceptable:

  1. Drive Bender
  2. iSCSI (it creates a VHD drive)
  3. Mapping as a network drive (direct only, like iSCSI but without creating a VHD)
  4. CLI-only Linux
  5. FreeNAS, TrueNAS, XigmaNAS, UNRAID, XPEnology, etc.

Preferably a Windows/Windows Server program.

How to fix ws and socket.io memory leak?

Posted: 08 Jun 2021 03:59 AM PDT

I have read that there is a memory leak in both Node.js WebSocket modules, ws and socket.io. It has existed for years, and I am wondering how to fix it.

It is mentioned in the following, to name a few:

Given it is now 2021, is setting the perMessageDeflate key to false and perhaps preloading jemalloc still the best solution?
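
A hedged sketch of the jemalloc half of that recipe on Debian/Ubuntu (package libjemalloc2; the library path varies by distribution, and server.js is a placeholder), with perMessageDeflate disabled in the ws/socket.io server options as those reports suggest:

# apt install libjemalloc2 first; the path below is the usual Debian/Ubuntu location
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 node server.js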

Openstack not able to connect port 5000

Posted: 08 Jun 2021 12:25 AM PDT

I recently started working with OpenStack Train (CentOS 7). I created 2 nodes in VirtualBox on the subnet 192.168.56.0/24. I read and followed the official Train docs and encountered an issue while installing the Keystone module. When I run any OpenStack command, such as openstack network list, I get this error.

Failed to discover available identity versions when contacting http://node1:5000/v3. Attempting to parse version from URL. Unable to establish connection to http://node1:5000/v3/auth/tokens : HTTPConnectionPool(host='node1', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7492466650>: Failed to establish new connection: [Errno 113] No route to host',))

PS: node1 is the controller node; it has the static IP 192.168.56.2, and Keystone is installed on it.

The steps I have taken:

  • Did the environment config.
  • I reinstalled Keystone and checked every step in the docs.
  • Added node1 192.168.56.1 to /etc/hosts; it can be pinged, and I can connect to the Apache server on port 80.
  • Disabled the firewall and tried to connect to node1:5000/v3, but no luck.

I was on this step: https://docs.openstack.org/keystone/train/install/keystone-users-rdo.html
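
"No route to host" from a machine that answers on port 80 usually means the target port is filtered or nothing is listening on it; a hedged check to run on node1:

ss -tlnp | grep :5000                          # is httpd/keystone actually listening on 5000?
firewall-cmd --permanent --add-port=5000/tcp   # if firewalld is active, open the Keystone port
firewall-cmd --reload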

I have 50 servers. Want to update /etc/hosts file using ansible [closed]

Posted: 08 Jun 2021 12:03 AM PDT

I would like to update /etc/hosts file using ansible playbook to all my 50 servers.

<ipaddress>     <fqdn>     <hostname>  
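
A minimal sketch with the lineinfile module; the address and names below are placeholders:

---
- hosts: all
  become: true
  tasks:
    - name: Ensure the host entry exists in /etc/hosts
      ansible.builtin.lineinfile:
        path: /etc/hosts
        regexp: '^192\.0\.2\.10\s'
        line: "192.0.2.10     app01.example.com     app01"
        state: present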

Extra ip rule table breaking my connectivity with container

Posted: 08 Jun 2021 01:22 AM PDT

I have a host with one interface that has the IP 10.0.10.5/28. The host has one container with the interface cali02ad7e68ce1 and IP 10.42.1.2/26. This is the main routing table of the host:

$> ip r list table main
default via 10.0.10.1 dev eth0 proto dhcp metric 100
10.0.10.0/28 dev eth0 proto kernel scope link src 10.0.10.5 metric 100
10.42.1.2 dev cali02ad7e68ce1 scope link

This is the list of ip rules:

$> ip rule
0:      from all lookup local
30400:  from 10.0.10.5 lookup 30400
32766:  from all lookup main
32767:  from all lookup default

And this is the routing table 30400:

$> ip r list table 30400
default via 10.0.10.1 dev eth0 proto static metric 10
10.0.10.1 dev eth0 proto static scope link metric 10

When I try to ping the container ping 10.42.1.2, I receive no packets. However, if I tcpdump on the container's interface, I can see both echo request and echo reply.

$> sudo tcpdump -eni cali02ad7e68ce1
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cali02ad7e68ce1, link-type EN10MB (Ethernet), capture size 262144 bytes
15:57:24.589384 ee:ee:ee:ee:ee:ee > 4e:b1:cd:f0:62:82, ethertype IPv4 (0x0800), length 98: 10.0.10.5 > 10.42.1.2: ICMP echo request, id 34667, seq 1, length 64
15:57:24.589405 4e:b1:cd:f0:62:82 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.42.1.2 > 10.0.10.5: ICMP echo reply, id 34667, seq 1, length 64
15:57:25.637186 ee:ee:ee:ee:ee:ee > 4e:b1:cd:f0:62:82, ethertype IPv4 (0x0800), length 98: 10.0.10.5 > 10.42.1.2: ICMP echo request, id 34667, seq 2, length 64
15:57:25.637216 4e:b1:cd:f0:62:82 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.42.1.2 > 10.0.10.5: ICMP echo reply, id 34667, seq 2, length 64

As soon as I delete the rule 30400, ping works. I am confused, because I don't understand how that rule makes the echo reply never reach the ping process. AFAIK, that rule should only apply when 10.0.10.5 is the source IP. Any help or guesses will be appreciated!

UPDATE

Adding bridge info as requested in the comments:

$> ip -br link; ip -br address
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
eth0             UP             06:e4:85:e5:1b:94 <BROADCAST,MULTICAST,UP,LOWER_UP>
cali02ad7e68ce1@if3 UP             ee:ee:ee:ee:ee:ee <BROADCAST,MULTICAST,UP,LOWER_UP>
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             10.0.10.5/28
cali02ad7e68ce1@if3 UP             fe80::ecee:eeff:feee:eeee/64

$> ip -br link show type bridge
$> ip -br link show type bridge_slave
$>
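
A hedged guess at the mechanism, not confirmed by the question: Linux reverse-path filtering validates an incoming packet with a reversed lookup (source and destination swapped). For the echo reply, that reversed lookup is "from 10.0.10.5 to 10.42.1.2", which rule 30400 sends to a table that has no route to the container, so strict rp_filter discards the reply after tcpdump has already seen it. Under that assumption:

# Check whether strict reverse-path filtering (value 1) is active
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.cali02ad7e68ce1.rp_filter
# Option A: loosen rp_filter on the container interface
sysctl -w net.ipv4.conf.cali02ad7e68ce1.rp_filter=2
# Option B: give table 30400 a route back to the container
ip route add 10.42.1.2 dev cali02ad7e68ce1 table 30400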

How to mount disks based on a ansible facts ansible_devices ids?

Posted: 08 Jun 2021 01:39 AM PDT

I'm trying to mount disks when an Ansible device corresponds to a specific ids name: google-pgdata.

"sdc": {
    "holders": [],
    "host": "",
    "links": {
        "ids": [
            "google-pgdata",
            "scsi-0Google_PersistentDisk_pgdata"
        ],
        "labels": [],
        "masters": [],
        "uuids": []
    },
    "model": "PersistentDisk",
    "partitions": {},
    "removable": "0",
    "rotational": "1",
    "sas_address": null,
    "sas_device_handle": null,
    "scheduler_mode": "noop",
    "sectors": "209715200",

Connect to remote minikube cluster using kubectl

Posted: 08 Jun 2021 12:47 AM PDT

I am looking to see if I can connect to a remote Minikube cluster (an Ubuntu box) using local (Mac) kubectl. I currently use Docker and can do this very easily using docker-machine: simply eval to the machine name, and Docker will use the remote machine.

I was wondering if there was anything similar for minikube/kubectl? I have found a few articles that mention that I need to copy my remote ~/.minikube directory to my local, and change some config about. But this seems rather complicated for something a tool like docker-machine does seamlessly.

Is there a similar tool available, or if not, could someone help me with steps needed to connect to a remote cluster?

Remote Machine

Currently I use the docker driver (this is the complete output of the command, just the one line):

$ minikube config view
- driver: docker

And have a number of NodePort services:

$ kubectl get service -A
NAMESPACE     NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       apigateway        NodePort    10.100.122.255   <none>        8080:30601/TCP           19h
default       discoveryserver   NodePort    10.101.106.231   <none>        8761:30602/TCP           19h
default       elasticsearch     NodePort    10.97.197.14     <none>        9200:30604/TCP           19h
default       harness           NodePort    10.97.233.245    <none>        9090:30603/TCP           19h
default       kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP                  19h
default       mongo             NodePort    10.97.172.108    <none>        27017:32625/TCP          19h
kube-system   kube-dns          ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   19h

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/meanwhileinhell/.minikube/ca.crt
    server: https://192.168.50.2:8443   <<<<<< `minikube ip`
  name: minikube
contexts:
- context:
    cluster: minikube
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/meanwhileinhell/.minikube/profiles/minikube/client.crt
    client-key: /home/meanwhileinhell/.minikube/profiles/minikube/client.key

Local machine

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
- cluster:
    certificate-authority: /Users/mih.mac/remote/.minikube/ca.crt
    server: https://192.168.1.5:8443   <<<<<< Static IP of my remote machine
  name: minikube
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: minikube
  user:
    client-certificate: /Users/mih.mac/remote/.minikube/client.crt
    client-key: /Users/mih.mac/remote/.minikube/client.key
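
A hedged sketch, not a documented minikube feature: with the docker driver, the apiserver endpoint (192.168.50.2 here) is typically reachable only from the remote host itself, so pointing the local config at the machine's LAN IP on 8443 often cannot work. An SSH tunnel sidesteps that; the username is taken from the remote paths above and is otherwise hypothetical:

# Forward local 8443 to the apiserver as seen from the remote host
ssh -N -L 8443:192.168.50.2:8443 meanwhileinhell@192.168.1.5
# In another terminal, point the local minikube cluster entry at the tunnel;
# minikube's apiserver certificate normally includes 127.0.0.1 in its SANs,
# so the copied ca.crt should still verify (assumption)
kubectl config set-cluster minikube --server=https://127.0.0.1:8443
kubectl config use-context minikube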

letsencrypt failed authorization procedure

Posted: 08 Jun 2021 12:07 AM PDT

I'm receiving the following error when attempting to renew my SSL certificate:

Failed authorization procedure. karaokeottawa.com (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from https://karaokeottawa.com/.well-known/acme-challenge/9r6EbnCikawdhdRogJArWveNngC5bu7T9Cp5fNISWwg [45.77.185.160]: "<!doctype html>\n<!--[if lt IE 7]> <html class=\"ie6 oldie\"> <![endif]-->\n<!--[if IE 7]> <html class=\"ie7 oldie\"> <![endif]-->\n"

IMPORTANT NOTES: - The following errors were reported by the server:

Domain: karaokeottawa.com Type: unauthorized Detail: Invalid response from https://karaokeottawa.com/.well-known/acme-challenge/9r6EbnCikawdhdRogJArWveNngC5bu7T9Cp5fNISWwg [45.77.185.160]: "<!doctype html>\n<!--[if lt IE 7]> <html class=\"ie6 oldie\"> <![endif]-->\n<!--[if IE 7]> <html class=\"ie7 oldie\"> <![endif]-->\n"

To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address.
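
The response body in the error looks like an HTML error page rather than a challenge token, so a hedged first step is to see what actually serves that path and where the name resolves:

curl -i http://karaokeottawa.com/.well-known/acme-challenge/test   # expect a plain 404 from your web server, not an HTML interstitial
dig +short karaokeottawa.com A                                     # should match the server certbot runs on (45.77.185.160 per the error)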

How to restrict access to a specific file on iis windows 2012

Posted: 08 Jun 2021 04:06 AM PDT

IIS on Windows 2012 R2

A website has a number of directories, and in one of those directories is restricted-page.html, to which I want to restrict access for everyone except a particular Windows user.

The rest of the site is to be freely browsable by anybody.

Following instructions at https://weblogs.asp.net/gurusarkar/setting-authorization-rules-for-a-particular-page-or-folder-in-web-config I expected that putting the following web.config into the directory containing restricted-page.html would work.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <location path="restricted-page.html">
    <system.web>
      <authentication mode="Windows"/>
      <authorization>
        <allow users="windows-domain\account-name"/>
        <deny users="*"/>  <!-- deny others -->
      </authorization>
    </system.web>
  </location>
  <location path="*">
    <system.web>
      <authentication mode="Windows"/>
      <authorization>
        <allow users="*"/>
      </authorization>
    </system.web>
  </location>
</configuration>

However, with this in place, users can't browse into the containing directory without being asked to authenticate.

Could anyone advise?
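
One caveat worth checking, offered as an assumption: <system.web> authorization is enforced by ASP.NET and typically does not cover static .html files served by IIS's static file handler. The IIS-native equivalent is URL Authorization under <system.webServer> (the feature must be installed); a sketch scoped to the single file, so the rest of the directory stays anonymous:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <location path="restricted-page.html">
    <system.webServer>
      <security>
        <authorization>
          <!-- replace the inherited allow-all rule for this file only -->
          <remove users="*" roles="" verbs="" />
          <add accessType="Allow" users="windows-domain\account-name" />
        </authorization>
      </security>
    </system.webServer>
  </location>
</configuration>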

ansible not working become

Posted: 08 Jun 2021 04:06 AM PDT

I'm running an Ansible task.

playbook.yml

---
- hosts: servers
  remote_user: user
  become: True
  become_user: user
  become_method: sudo
  gather_facts: no

  tasks:
    - name: copy editProxy.sh
      copy:
        src: editProxy.sh
        dest: /tmp/editProxy.sh
        mode: 0755

    - name: run edit proxy settings for apt
      command: /tmp/editProxy.sh

editProxy.sh

#!/bin/bash

if grep -q "old_proxy" /etc/apt/apt.conf; then
    sed -i 's/old_proxy/new_proxy/g' /etc/apt/apt.conf;
fi

I run the playbook with ansible-playbook playbook.yml --extra-vars='ansible_become_pass=passwd'

The script is copied to the servers, and no error is returned:

changed: [10.1.1.1]

But the changes on the server do not happen. If I run the script manually on the server, the changes take place. What could be the problem?
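
Two hedged guesses, not confirmed by the post: with become_user set to the same unprivileged user as remote_user, sudo never actually escalates to root, so sed may not be allowed to rewrite /etc/apt/apt.conf; and if a host's apt.conf does not contain old_proxy, the grep -q simply fails and the script exits silently. Doing the edit with Ansible's own module reports ok/changed honestly per host; a minimal sketch:

- name: Update the apt proxy setting
  become: true
  become_user: root
  ansible.builtin.replace:
    path: /etc/apt/apt.conf
    regexp: 'old_proxy'
    replace: 'new_proxy'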

How to configure Array on DL360 GEN 8 Server Running Ubuntu

Posted: 08 Jun 2021 12:07 AM PDT

Hi all, I have a DL360 Gen8 machine and I need to install Ubuntu Server on it. However, according to HP, the Dynamic Smart Array system is not supported. I would still love to make use of an array, given that I have 4 terabytes of disk which I need to configure as RAID 1.


Is there any possible way to achieve this RAID without the HP Dynamic Smart Array system?


Below is the message from HP, which says that I need to disable the Smart Array before I can run Ubuntu:

HP Dynamic Smart Array System is certified with Dynamic Smart Array disabled. To disable Dynamic Smart Array:

  • Press F9 to boot into RBSU

  • Navigate to System Options > HP Dynamic Smart Array B320i and select disable

  • Go to System options > SATA Controller options and select Legacy SATA or AHCI

  • Reboot the machine and now you will be able to install the OS
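
With the B320i disabled and the controller in AHCI mode, the disks appear as plain SATA devices and Linux software RAID works; a hedged sketch, assuming the two disks are /dev/sda and /dev/sdb (hypothetical names):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
cat /proc/mdstat                                  # watch the initial resync
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist the array definition

The Ubuntu Server installer can also create the same RAID 1 from its partitioning screen.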

Docker 1.6.0 on RHEL 6.5 with SELinux, can't run containers without root

Posted: 08 Jun 2021 03:08 AM PDT

I'm trying to run a container on a RHEL 6.5 but I keep hitting this problem:

sudo docker run -u postgres -it registry/postgres /bin/bash
/bin/bash: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: Permission denied

When run as user 'root', the container starts fine but the problem appears again when trying to switch to another user:

$ sudo docker run -u root -it registry/database /bin/bash
[root@8a20410eaa5e /]# su postgres
su: /bin/bash: Permission denied

This is a specific container built by us, based on CentOS 6.5, that runs Postgres. The Dockerfile to build it has "USER postgres" in it, and it works fine everywhere except these servers. I can reproduce the same behaviour with a busybox container:

$ sudo docker run -u nobody -it 10.188.13.136:8080/busybox
/ $ ls
/bin/sh: ls: Permission denied

The RHEL 6.5 host has SELinux enabled. We have other hosts with SELinux enabled where this container works fine. The audit log for this host looks clean; there are no error messages that I can see when trying to run the container.

This is what we've tried so far:

  • updated the SELinux policies in RHEL ("sudo yum upgrade selinux-policy"), as they were not the latest versions
  • put SELinux into permissive mode (setenforce 0); we have not tried switching it off completely and rebooting
  • started the Docker daemon with "--selinux-enabled=true"
  • started the container with --privileged
  • started the container with --security-opt=:label:disable
  • we're running the latest RHEL 6.5 kernel: 2.6.32-504.16.2.el6.x86_64

We also ran a strace session for the 'su' command within the container but could not see much beyond these:

17    setgid(10000)                     = 0
17    setuid(10000)                     = 0
17    munmap(0x7f07a3540000, 2101304)   = 0
17    munmap(0x7f07a311c000, 2113776)   = 0
17    munmap(0x7f07a2f03000, 2196352)   = 0
17    munmap(0x7f07a2cea000, 2198192)   = 0
17    munmap(0x7f07a2ae8000, 2101272)   = 0
17    munmap(0x7f07a28e4000, 2109624)   = 0
17    munmap(0x7f07a26e0000, 2109672)   = 0
17    munmap(0x7f07a24d3000, 2148896)   = 0
17    munmap(0x7f07a22d0000, 2105488)   = 0
17    munmap(0x7f07a20cb000, 2113848)   = 0
17    munmap(0x7f07a1ec5000, 2118168)   = 0
17    munmap(0x7f07a3321000, 2221912)   = 0
17    execve("/bin/bash", ["bash"], [/* 15 vars */]) = -1 EACCES (Permission denied)
17    write(2, "su: ", 4)               = 4
17    write(2, "/bin/bash", 9)          = 9

The full strace dump is here in case it's needed: http://pastebin.com/42C2B8LP.

We're not sure what to look for next, any ideas?
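
One hedged avenue, an assumption rather than a diagnosis: EACCES on execve for non-root users only, with a clean audit log even in permissive mode, looks more like plain file permissions or mount options on the storage backend than an SELinux denial. A quick check (image names reused from the question):

sudo docker run -u root -it registry/database ls -lZ /bin/bash /lib64/libtinfo.so.5   # world-readable/executable? sane labels?
sudo docker info | grep -i -A1 storage                                                # which storage driver backs these hosts?
mount | grep -E 'docker|devicemapper'                                                 # any noexec/nosuid surprises?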

What actions in Office 365 trigger requests for new SAML tokens?

Posted: 08 Jun 2021 02:00 AM PDT

We're in the process of diagnosing an issue where our on-premises ADFS servers stop accepting requests from the ADFS proxy servers for short (5-minute) intervals.

One behavior that we're having difficulty understanding is that when ADFS stops responding, Outlook client users get prompted to re-authenticate, and get disconnected when the token request times out. One suggestion was that there is some sort of network session reset, but we have been unable to identify this happening on the network path for Outlook users.

Per the documentation and Microsoft support, users are issued login tokens with a default TTL of 8 hours. If that were true, why are the users being challenged to re-authenticate?

How should I manually add IP addresses to denyhosts?

Posted: 08 Jun 2021 02:00 AM PDT

I have a few IP addresses I want to add manually to denyhosts because they're huge sources of inbound spam. What's the best way to do this? Or should I not be messing with it?

I want to manually add these to denyhosts, but I don't see a way to do it through any program options. I see nothing in denyhosts.py --help.

It looks like it could be as simple as adding a line to /etc/hosts.deny, but since the process to delete an IP (see here on ServerFault and the DenyHosts FAQ) involves updating six files, it makes me think it's not "Can't You Just... add the IP to the file?".
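
A hedged note on that: DenyHosts enforces blocks through TCP wrappers, so a line in /etc/hosts.deny does take effect on its own; the multi-file dance in the FAQ matters mainly for removals, because DenyHosts keeps its own state about what it added. Example with a placeholder address:

echo "sshd: 192.0.2.45" >> /etc/hosts.deny   # block SSH only
echo "ALL: 192.0.2.45" >> /etc/hosts.deny    # or block every wrapped service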

Asterisk / Elastix Address Book and Call Recording

Posted: 08 Jun 2021 04:06 AM PDT

I am trying to build a small Asterisk-based PBX using Elastix. It will have 4 FXO ports (2 normal analog POTS lines, no ISDN, and 2 GSM connections using a GSM terminal) and 4 FXS ports (2 IP phones and 2 Android SIP clients).

I am confused about the following two issues and need your help:

  1. I need to record all incoming/outgoing calls along with their caller IDs. Is any special hardware needed?

  2. I have about 5000-6000 contacts which I want to show up in my IP phones' menu, so that users can dial by selecting/searching the name/company. How can this be implemented, and which is the most cost-effective IP phone to purchase for a contact list of this size?

Thanks a lot for your time

Connecting to network printer(s) using net use

Posted: 08 Jun 2021 03:57 AM PDT

I was asked to add printers for all users on a terminal server. There is a VPN connection between the terminal server and the network where the printer is installed.

I do not have much experience with network shares, but I managed to connect to the printer manually (Win+R > \\192.168.xx.xx). After entering my credentials (domain: ADAM.local) I see the shares in Explorer, including a couple of printers. Double-clicking a printer adds it to "Devices and Printers", and I am able to select it as a printer when trying to print a document.

I was hoping to be able to use "net use" to write a script that will connect a user to the printer on startup.

I tried using net use * \\192.168.xx.xx <password> /user:ADAM.local\printACC /persistent:yes to connect to the network share. This results in an error: System error 67 has occurred. The network name cannot be found.

Could anyone help me with the syntax and parameters for the net use command?
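
A hedged note on the syntax: net use wants a full \\server\share path; pointing it at the bare server (\\192.168.xx.xx) is what produces error 67. For printers, a device name or a per-user printer connection is typical; the share name below is a placeholder:

net use LPT1: \\192.168.xx.xx\PrinterShare <password> /user:ADAM.local\printACC /persistent:yes
rem Or create a per-user printer connection (more common for modern drivers):
rundll32 printui.dll,PrintUIEntry /in /n "\\192.168.xx.xx\PrinterShare"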
