Tuesday, July 19, 2022

Recent Questions - Server Fault


Linux Switch Networks/interfaces & NameServers Quickly

Posted: 19 Jul 2022 09:51 AM PDT

I was wondering if anyone knows of a way to switch back and forth between different networks quickly.

For example, by setting up both eth0 and eth1: in /etc/network/interfaces I have to add a new address, netmask, and gateway for the other network in order to connect to it.

Then I have a question:

In /etc/resolv.conf I have to add a nameserver IP. How can I have eth0 use one specific nameserver and eth1 use a different one?

I would like to be able to switch quickly between a VPN or gateway and my normal IP/network.
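For example (a sketch only, assuming Debian-style ifupdown with the resolvconf package installed; all addresses and interface names here are placeholders), per-interface DNS servers can be declared next to each interface stanza, so switching interfaces also switches nameservers:

```text
# /etc/network/interfaces (hypothetical addresses)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1    # picked up by resolvconf while eth0 is up

auto eth1
iface eth1 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    gateway 10.0.0.1
    dns-nameservers 10.0.0.1       # used instead while eth1 is up
```

Bringing an interface up or down with ifup/ifdown would then swap both the route and the resolver, instead of hand-editing /etc/resolv.conf each time.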

Move Laravel live site to local server for archive purpose. "I only have public files"

Posted: 19 Jul 2022 09:48 AM PDT

My task is to move a live Laravel site to a local server, where it will be served as an archive on the local LAN. What is the right procedure for this? I searched and found that there should be .env and composer.json files, but there aren't any.

My server is up and running (Apache, PHP, MySQL). I moved the web page to a folder that is accessed through a local domain, imported the DB, created the users, and manually edited the application/config/database.php file.

Any help would be much appreciated.
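For context, a Laravel application normally reads its database settings from a .env file in the project root; if only the public files were copied, that file may simply not have been transferred. A minimal sketch of what one looks like (every value below is a placeholder for a hypothetical local setup, not taken from the site in question):

```ini
# .env (hypothetical values)
APP_ENV=local
APP_DEBUG=true
APP_KEY=base64:REPLACE_WITH_GENERATED_KEY

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=archive_db
DB_USERNAME=archive_user
DB_PASSWORD=secret
```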

How do I apply a redirect and mask in htaccess?

Posted: 19 Jul 2022 09:32 AM PDT

How do I apply a redirect and mask in .htaccess?

I want to redirect localhost:8080/mysite/build to localhost:8080/mysite/pages/build/index.php while leaving the URL localhost:8080/mysite/build in the browser's address bar as typed by the user, like a kind of mask.

I really don't know how to do this in .htaccess.
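As a sketch of the general technique (assuming mod_rewrite is enabled and an .htaccess file in the mysite directory; paths are taken from the question): an internal rewrite, i.e. a RewriteRule without the R (redirect) flag, serves a different file while the browser keeps the typed URL:

```apache
# .htaccess in /mysite (sketch)
RewriteEngine On
# No [R] flag: Apache rewrites the request internally,
# so the address bar still shows /mysite/build
RewriteRule ^build/?$ pages/build/index.php [L]
```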

What has happened to veeam instant recovery VM Files?

Posted: 19 Jul 2022 08:48 AM PDT

Recently I sent one of my servers (server A) out for repair. Since one of the VMs on that server was very important to us, I performed an Instant Recovery from my backups onto another server (server B).

After a week, server A came back from the repair company, but I forgot to migrate the Instant Recovery VM back to its original location. As the VM grew, I think, the server stopped responding. I tried to migrate it, but the migrate option was greyed out in the right-click menu while the stop-session option was active.

I went into the vSphere client and powered off the VM in order to activate the migrate option in Veeam. The migrate option was still greyed out, and the VM became orphaned. When I stopped the Instant Recovery session, Veeam mercilessly deleted all the VM files.

What should I do now?

Veeam log on stopping session

I cannot run any query on BigQuery in region europe-west2 anymore (transient error 4233314)

Posted: 19 Jul 2022 09:07 AM PDT

I can no longer query or preview any table, nor run any job, in BigQuery (region europe-west2). When I run a query in another region I don't get the error, so it is specific to my region. When I try to run any query, I get the following error in the GCP Console:

An internal error occurred and the request could not be completed. This is usually caused by a transient issue. Retrying the job with back-off as described in the BigQuery SLA should solve the problem: https://cloud.google.com/bigquery/sla. If the error continues to occur please contact support at https://cloud.google.com/support. Error: 4233314

Does Google Cloud include Google Workspace?

Posted: 19 Jul 2022 08:47 AM PDT

Complete newbie here, please help me. It seems like a simple question, but I can't find the answer online: if I decide to use Google Cloud to host my WordPress.com site, is Google Workspace included, or do I have to pay for that separately? Thanks.

How to configure apache so that only one organizational unit from windows server has access to the page

Posted: 19 Jul 2022 08:18 AM PDT

I'm trying to configure LDAP authentication for Apache on Rocky Linux, but I want to allow only the users that belong to the group ITGeral on the Windows server to access the page. How do I configure that? I'll leave my configuration so far here:

Listen 41414 https

<VirtualHost *:41414>
        ServerAdmin     webmail@soitezes.local
        DocumentRoot    "/var/www/www.soitezes.local"
        ServerName      www.soitezes.local
        ServerAlias     www.soitezes.local

        <Directory /var/www/www.soitezes.local>
                Options Indexes FollowSymlinks MultiViews
                AllowOverride none
                Order allow,deny
                Allow from all
                AuthType basic
                AuthBasicProvider ldap
                AuthName "Digite o username e a password como faz no login do windows"
                AuthUserFile "/dev/null"
                AuthLDAPURL "ldap://10.48.70.1:389/DC=seunome,DC=local?sAMAccountName?sub?(objectClass=*)"
                AuthLDAPBindDN "plinha@seunome.local"
                AuthLDAPBindPassword "Passw0rd"
                AuthLDAPGroupAttribute on
                Require ldap-group CN=jcalhau,OU=Angola,DC=seunome,DC=local,DC=Formacao
        </Directory>

        ErrorLog logs/www.soitezes.local_error_log
        TransferLog logs/www.soitezes.local_access_log
        LogLevel warn

        SSLEngine on
        SSLCertificateFile /etc/pki/tls/certs/www.soitezes.local.crt
        SSLCertificateKeyFile /etc/pki/tls/private/www.soitezes.local.key

        <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
        </FilesMatch>

        <Directory "/var/www/cgi-bin">
                SSLOptions +StdEnvVars
        </Directory>

        BrowserMatch "MSIE [2-5]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
</VirtualHost>
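For what it's worth, the Require ldap-group line in the configuration above currently points at CN=jcalhau (which looks like a user DN) rather than at the ITGeral group. With mod_authnz_ldap, restricting to a group would presumably look something like the following sketch; the group's real DN must be taken from AD, and the OU path here is a guess:

```apache
# Sketch only: group DN below is hypothetical and must match AD
AuthLDAPGroupAttribute member
AuthLDAPGroupAttributeIsDN on
Require ldap-group CN=ITGeral,OU=Groups,DC=seunome,DC=local
```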

Nginx 404 Not Found when entering a specific url path

Posted: 19 Jul 2022 09:19 AM PDT

What I am trying to do is prepare this machine to serve my web application. It works without any problems when I serve it via npm start. I have a domain, let's say example.com, on which I installed SSL with Certbot, and I have successfully redirected HTTP to HTTPS for both www.example.com and example.com.

Now what happens is: when I go to www.example.com or example.com it works fine, and I can move to whatever URL path I want, as long as it is reached directly from a button on the homepage. But whenever I try to access an individual URL path, let's say example.com/admin/login (it is hidden from the homepage), or any other manually entered URL path, the webserver returns a 404 error. This does not happen when I do the same with npm start.

I am running an EC2 machine with the following components installed:

  • Operating System: Ubuntu 20.04 LTS
  • Webserver: Nginx 1.18
  • Node: 10.24.1
  • NPM: 6.14.12
  • SSL Provider: Let'sEncrypt (Certbot 0.40.0)

I have the following ports open from the EC2 Management Console:

  • 80/custom anywhere (http)
  • 443/custom anywhere (https)
  • 22/custom my IP (ssh)
  • 3000/Custom my IP (npm start default port)

I have also allocated an Elastic IP to my machine; let's say it's 1.1.1.1.

My Nginx config file is located in /etc/nginx/sites-enabled and is called client-config.

I have set up my Nginx client-config to look like this:

server {
    server_name 1.1.1.1 example.com www.example.com; # 1.1.1.1 is an example IP
    root /home/ubuntu/app-deploy/build;
    index index.html; # React generates this file in the build directory by default

    location / {
        try_files $uri $uri/ =404;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name 1.1.1.1 example.com www.example.com; # 1.1.1.1 is an example IP
    return 404; # managed by Certbot
}

Running nginx -t returns:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Any help would be appreciated as I am new to this.
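For reference, the symptom described (deep links return 404 while in-app navigation works) is the classic single-page-application pattern: the app's router resolves paths client-side, so the server must fall back to index.html for paths that do not exist on disk. With nginx that is commonly expressed like this (a sketch, adapted to the root shown in the question):

```nginx
location / {
    # serve the file or directory if it exists,
    # otherwise hand the path to the SPA's client-side router
    try_files $uri $uri/ /index.html;
}
```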

I want to assign public ip to my openvpn access server clients

Posted: 19 Jul 2022 07:43 AM PDT

I want to assign a public IP to my OpenVPN Access Server clients, not a private IP, so that I can use this public IP to access the local hosts on each machine. I already run my own cloud OpenVPN Access Server on DigitalOcean.

How to have a single pipeline run and single commit in github?

Posted: 19 Jul 2022 07:30 AM PDT

In Jenkins, we can schedule SCM polling every 1 hour, 2 hours, etc.

In the meantime we may add any number of commits; let's say the polling interval is 1 hour.

After 1 hour, Jenkins checks for the latest commits: if there are any new ones, it launches the pipeline once; if there are no commits, it won't run the pipeline.

Is there any similar approach in github actions?

Our build process takes at least 1 hour.

If 2 more commits are added by the time it completes, GitHub keeps 2 pipeline runs in the queue.

In GitLab merge requests, when I create an MR for my dev branch, it merges all the new commits as a single commit that includes all those changes.

But in GitHub, when a pull request is created, it merges all the commits (say 10) and creates a new merge commit on top (10+1).

Is there any way to get an approach similar to GitLab's?

I saw the squash option, but it is not the same as what I saw in GitLab.
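For comparison, GitHub Actions does provide a workflow-level concurrency setting that collapses queued runs: within one concurrency group, any previously pending run is superseded, so only the newest queued run remains behind the one in progress. A sketch (the workflow name, trigger, and build script are placeholders):

```yaml
name: build
on:
  push:
    branches: [main]
concurrency:
  group: build-${{ github.ref }}   # one group per branch
  cancel-in-progress: false        # the running build finishes; older queued runs are superseded
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./build.sh            # hypothetical hour-long build
```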

Debian SSH connection stops working after client tries to login

Posted: 19 Jul 2022 07:16 AM PDT

I have a SmartOS host with a public IP and a Debian VM. For accessing the VM directly, an ipnat rule forwards port 2222 to port 22 on the VM. Connecting via SSH works fine for me.

But when a client tries to log in, he gets timed out, and after he has tried, I also get timeouts when trying to connect.

Meanwhile, I can still connect to the other VMs and can also connect from the SmartOS host itself; only the ipfilter route no longer works.

I tried changing the port in the ipnat rule file, restarted the ipfilter service, turned off iptables on the VM, searched through tcpdump on the VM, and restarted the VM.

Only a reboot of the SmartOS host itself helps.

The problem is that the client needs to connect to that VM via SFTP, so I can't just tell him to "log in through the SmartOS host itself", and I don't understand how this can happen at all.

The Rule for that VM-Routing: rdr e1000g0 from any to <public-ip> port = 2222 -> 10.99.23.11 port 22 tcp

Windows Server 2016 auto failover

Posted: 19 Jul 2022 07:11 AM PDT

I have two servers running Windows Server 2016. I would like for the second one to be an exact mirror of the first one and in the event of the first one failing, for the second one to take over.

How can I create and update the existing SPF record to allow more than 10 entries?

Posted: 19 Jul 2022 07:09 AM PDT

How can I include more entries in my SPF record if my domain's existing SPF TXT record already contains 10 of them?

I wanted to add these two:

include:mailgun.org
include:sendgrid.net

This is my existing SPF record:

v=spf1 ip4:38.113.1.0/24 ip4:38.113.20.0/24 ip4:12.45.243.128/26 ip4:65.254.224.0/19 include:_spf.google.com include:_spf.qualtrics.com include:servers.mcsv.net include:spf.mandrillapp.com include:spf.protection.outlook.com include:spf.emailsignatures365.com -all
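For illustration only, the merged record would simply append the two new includes before the -all. Note, though, that SPF (RFC 7208) limits an evaluation to 10 DNS-querying mechanisms, and every include counts against that limit (as do lookups nested inside the included records), so a record like this may exceed the limit and need flattening regardless:

```text
v=spf1 ip4:38.113.1.0/24 ip4:38.113.20.0/24 ip4:12.45.243.128/26 ip4:65.254.224.0/19 include:_spf.google.com include:_spf.qualtrics.com include:servers.mcsv.net include:spf.mandrillapp.com include:spf.protection.outlook.com include:spf.emailsignatures365.com include:mailgun.org include:sendgrid.net -all
```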

Will this break the existing mail flow?

Any help would be greatly appreciated.

Thanks

Prioritize DNS records by server location

Posted: 19 Jul 2022 07:46 AM PDT

Recently I was assigned a task where I am supposed to optimize the login time for a Red Hat SSO service installed on RHEL 7.9. This service uses multiple Active Directories to validate users' credentials, and these ADs are spread around Europe. I would like the ADs that I already know are near my server to be queried first, but I also want to be able to query the other ones in case the closer ones are down. Could you help me with a solution? The most obvious one would be to add the preferred ones to the /etc/hosts file, but this would make it impossible to reach the other ones...
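One pattern that fits the "prefer nearby, fall back to others" requirement is a local forwarding resolver instead of /etc/hosts; sketched here with dnsmasq (the AD domain and addresses are hypothetical). By default dnsmasq favours upstream servers that are known to be up and retries the others when one stops responding, which gives failover without pinning a single address:

```text
# /etc/dnsmasq.d/ad-failover.conf (hypothetical domain and IPs)
server=/ad.example.local/10.1.1.10   # nearby AD
server=/ad.example.local/10.2.2.10   # remote AD, used when the first is unresponsive
```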

Thank you very much!

No outgoing traffic allowed by Red Hat 8

Posted: 19 Jul 2022 09:06 AM PDT

I have recently been given access to a VPS with Red Hat 8 installed. I need to enable outgoing traffic in order to install certain things, but currently this doesn't work. Here are some commands that I have tried:

# ping google.com
--- google.com ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2029ms

# traceroute --tcp google.com
traceroute to google.com (172.217.167.110), 30 hops max, 60 byte packets
 1  * * *
 2  * * *
 3  * * *
 4  * * *
 5  * * *

# iptables -L
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

I have omitted the INPUT chain because it has potentially identifiable rules.

If I add a new rule that accepts everything, it doesn't help:

iptables -I OUTPUT -p all -j ACCEPT  
# iptables -L
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
# nft list ruleset
table ip6 filter {
        chain INPUT {
                type filter hook input priority filter; policy accept;
                ip6 saddr fe80::/10 ip6 daddr fe80::/10 counter packets 0 bytes 0 accept
                meta l4proto icmp counter packets 0 bytes 0 accept
                meta l4proto ipv6-icmp counter packets 4 bytes 388 accept
                iifname "lo" counter packets 64 bytes 3178 accept
                meta l4proto tcp ip6 saddr 2001:388:608c:8a6:250:56ff:feba:6e26 counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr 2001:388:608c:8a6:250:56ff:feba:1c7b counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr 2001:388:608c:8a6:250:56ff:feba:7d6f counter packets 0 bytes 0 accept
                iifname != "lo" ip6 daddr ::1 counter packets 0 bytes 0 reject
                ct state related,established counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr 2001:388:608c:4c80::/60 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr 2001:388:608c::/48 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr fdfd:eb1a:eb10::/44 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr fe80::/10 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr 2001:388:608c:8d0:250:56ff:feb5:3aee tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr 2001:388:608c:8d0:16fe:b5ff:fe91:3685 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr 2001:388:608c:8d0:250:56ff:feb5:4296 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip6 saddr 2001:388:608c:8d0:250:56ff:feb5:22f7 tcp dport 22 counter packets 0 bytes 0 accept
                counter packets 0 bytes 0 drop
        }

        chain FORWARD {
                type filter hook forward priority filter; policy accept;
        }

        chain OUTPUT {
                type filter hook output priority filter; policy accept;
        }
}
table ip raw {
        chain PREROUTING {
                type filter hook prerouting priority raw; policy accept;
        }

        chain OUTPUT {
                type filter hook output priority raw; policy accept;
        }
}
table ip mangle {
        chain PREROUTING {
                type filter hook prerouting priority mangle; policy accept;
        }

        chain INPUT {
                type filter hook input priority mangle; policy accept;
        }

        chain FORWARD {
                type filter hook forward priority mangle; policy accept;
        }

        chain OUTPUT {
                type route hook output priority mangle; policy accept;
        }

        chain POSTROUTING {
                type filter hook postrouting priority mangle; policy accept;
        }
}
table ip nat {
        chain PREROUTING {
                type nat hook prerouting priority dstnat; policy accept;
        }

        chain INPUT {
                type nat hook input priority 100; policy accept;
        }

        chain POSTROUTING {
                type nat hook postrouting priority srcnat; policy accept;
        }

        chain OUTPUT {
                type nat hook output priority -100; policy accept;
        }
}
table ip filter {
        chain INPUT {
                type filter hook input priority filter; policy accept;
                meta l4proto icmp counter packets 0 bytes 0 accept
                iifname "lo" counter packets 2 bytes 100 accept
                iifname != "lo" ip daddr 127.0.0.0/8 counter packets 0 bytes 0 reject
                meta l4proto tcp ip saddr 130.194.12.225 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.12.229 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 10.21.7.22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.12.242 counter packets 0 bytes 0 accept
                ct state related,established counter packets 131542 bytes 31134584 accept
                ip saddr 172.16.5.192/26 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.20.70 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.20.71 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.20.72 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.20.73 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 10.0.0.0/8 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.0.0/16 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 169.254.0.0/16 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 172.16.0.0/20 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 49.127.0.0/17 tcp dport 22 counter packets 1 bytes 60 accept
                meta l4proto tcp ip saddr 130.194.19.9 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.19.14 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.19.15 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.19.82 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.19.83 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.19.7 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.19.28 tcp dport 22 counter packets 0 bytes 0 accept
                meta l4proto tcp ip saddr 130.194.19.45 tcp dport 22 counter packets 0 bytes 0 accept
                counter packets 0 bytes 0 drop
        }

        chain FORWARD {
                type filter hook forward priority filter; policy accept;
        }

        chain OUTPUT {
                type filter hook output priority filter; policy accept;
        }
}

Are there any other commands I might try to diagnose this issue? Does the information I have attached provide any insight into why all outgoing traffic is being dropped?
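A few additional checks that may narrow this down (a sketch of commands to try, not a diagnosis; substitute your own gateway address, and note that RHEL 8 usually manages filtering through firewalld rather than the legacy iptables view):

```text
ip route show                 # is there a default route at all?
ip -s link show               # TX errors/drops on the interface?
ping -c3 <gateway-ip>         # can the first hop be reached?
firewall-cmd --state          # is firewalld managing the ruleset?
firewall-cmd --list-all       # active zone, services, and rich rules if so
```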

MSSQL Brute Force Attack, Error Code 4625 Audit Failure, Error Code 10016 DISTRIBUTEDCOM

Posted: 19 Jul 2022 08:19 AM PDT

We have a Windows 2012 server from GoDaddy, accessible through Remote Desktop.

Recently we started getting Event ID 4625 (Audit Failure) and Event ID 10016 (DISTRIBUTEDCOM) errors, almost every second. MS SQL Server seems to be the most affected: it is now using almost 50% of RAM and CPU, which is leading to slow responses from the websites and databases.

Log for 4625:

Log Name:      Security
Source:        Microsoft-Windows-Security-Auditing
Date:          7/14/2022 5:30:23 AM
Event ID:      4625
Task Category: Logon
Level:         Information
Keywords:      Audit Failure
User:          N/A
Computer:      s97-74-4-227.secureserver.net
Description:
An account failed to log on.

Subject:
  Security ID:      NULL SID
  Account Name:     -
  Account Domain:   -
  Logon ID:         0x0

Logon Type:         3

Account For Which Logon Failed:
  Security ID:      NULL SID
  Account Name:     production
  Account Domain:

Failure Information:
  Failure Reason:   Unknown user name or bad password.
  Status:           0xC000006D
  Sub Status:       0xC0000064

Process Information:
  Caller Process ID:    0x0
  Caller Process Name:  -

Network Information:
  Workstation Name:         WIN-T889GJN22EC
  Source Network Address:   -
  Source Port:              -

Detailed Authentication Information:
  Logon Process:            NtLmSsp
  Authentication Package:   NTLM
  Transited Services:       -
  Package Name (NTLM only): -
  Key Length:               0

Log for 10016

Log Name:      System
Source:        Microsoft-Windows-DistributedCOM
Date:          7/14/2022 4:09:06 AM
Event ID:      10016
Task Category: None
Level:         Error
Keywords:      Classic
User:          NT SERVICE\SQL Server Distributed Replay Client
Computer:      s97-74-4-227.secureserver.net
Description:
The application-specific permission settings do not grant Local Activation permission for the COM Server application with CLSID {6DF8CB71-153B-4C66-8FC4-E59301B8011B} and APPID {961AD749-64E9-4BD5-BCC8-ECE8BA0E241F} to the user NT SERVICE\SQL Server Distributed Replay Client SID (S-1-5-80-3249811479-2167633679-2115734285-1138413726-166979568) from address LocalHost (Using LRPC) running in the application container Unavailable SID (Unavailable). This security permission can be modified using the Component Services administrative tool.

I read on a forum that we should start blocking the attacking IPs using a Windows Firewall inbound rule. While doing that, we accidentally blocked "Any IP Address" and lost access to the server itself! GoDaddy restored the access, but now the firewall services are disabled.
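As an aside on that firewall step: a single attacking address (rather than "Any IP Address") can be blocked from an elevated prompt; a sketch with a placeholder address follows. The rule name is arbitrary, and 203.0.113.5 is a documentation address standing in for a real attacker IP.

```text
netsh advfirewall firewall add rule name="Block attacker" dir=in action=block remoteip=203.0.113.5
```

Scoping the rule to remoteip avoids the lockout that a blanket inbound block causes.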

Please kindly help. We are so worried.

Image1 Image2

Configuration for routing two subnets in Vagrant for Docker overlay network

Posted: 19 Jul 2022 09:25 AM PDT

For the labs in Docker Deep Dive chapter 12, I set up the VMs with the following Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.box_version = "20220215.1.0"

  config.vm.define 'node1' do |node|
    node.vm.provider "virtualbox" do |v|
      v.name = 'node1'
      v.cpus = 2
      v.memory = 2048
    end

    node.vm.hostname = 'node1'
    node.vm.network :private_network, ip: '192.168.1.2', netmask: '255.255.255.0'
    node.vm.network :forwarded_port, guest: 22, host: 2222, id: 'ssh'
  end

  config.vm.define 'node2' do |node|
    node.vm.provider "virtualbox" do |v|
      v.name = 'node2'
      v.cpus = 2
      v.memory = 2048
    end

    node.vm.hostname = 'node2'
    node.vm.network :private_network, ip: '172.31.1.2', netmask: '255.255.255.0'
    node.vm.network :forwarded_port, guest: 22, host: 2223, id: 'ssh'
  end
end

Node1 is in the subnet 192.168.1.0/24 and node2 is in 172.31.1.0/24. They can communicate through the Vagrant default gateway (_gateway):

# In node1
vagrant@node1:~$ traceroute 172.31.1.2
traceroute to 172.31.1.2 (172.31.1.2), 30 hops max, 60 byte packets
 1  _gateway (10.0.2.2)  0.158 ms  0.072 ms  0.184 ms
 2  172.31.1.2 (172.31.1.2)  0.867 ms  0.847 ms  0.830 ms

# In node2
vagrant@node2:~$ traceroute 192.168.1.2
traceroute to 192.168.1.2 (192.168.1.2), 30 hops max, 60 byte packets
 1  _gateway (10.0.2.2)  0.209 ms  0.150 ms  0.130 ms
 2  192.168.1.2 (192.168.1.2)  1.026 ms  1.005 ms  0.987 ms

So the swarm init on node1 and the join on node2 work:

# In node1
vagrant@node1:~$ docker swarm init --advertise-addr 192.168.1.2
(..eliding the join token command)

# In node2
vagrant@node2:~$ docker swarm join --token SWMTKN-1-4sr2wdfp8lokknutsl68nyodw4wdva0j5r32douzxhn7eqmh8i-2vrfp54qgmo7ya1sr9kvsa561 192.168.1.2:2377 --advertise-addr 172.31.1.2
This node joined a swarm as a worker.

# In node1
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
vdiaxw7xzvjidhk0lrjacxjri *   node1      Ready     Active         Leader           20.10.17
sdxuxpsji18w32saz5k9wqsa7     node2      Ready     Active                          20.10.17

But pinging between containers on the same overlay network does not work:

# In node1
vagrant@node1:~$ docker network create -d overlay test-net
btkw44xkaucftn2pjqgdpc6k0
vagrant@node1:~$ docker service create --name test --network test-net --replicas 2 ubuntu sleep infinity
3x0zzv5oguzlkgvtn5ajz0tnz
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
vagrant@node1:~$ docker service ps test
ID             NAME      IMAGE           NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
a3sjyg317nau   test.1    ubuntu:latest   node1     Running         Running 43 seconds ago
fdc5ozw6cqmx   test.2    ubuntu:latest   node2     Running         Running 43 seconds ago

vagrant@node1:~$ docker ps
CONTAINER ID   IMAGE           COMMAND            CREATED         STATUS         PORTS     NAMES
65f0a9719985   ubuntu:latest   "sleep infinity"   5 minutes ago   Up 5 minutes             test.1.a3sjyg317nauk83rxt1glr9d3
vagrant@node1:~$ docker exec -it 65f0a9719985 bash

root@65f0a9719985:/# apt update && apt install iputils-ping traceroute -y
(..eliding)
root@65f0a9719985:/# ping 10.0.1.4 # cannot ping
PING 10.0.1.4 (10.0.1.4) 56(84) bytes of data.

# In node2
vagrant@node2:~$ docker network inspect test-net | jq -r '.[].Containers | with_entries(.value = .value.IPv4Address)'
{
  "e81ad230be6fc136950626c38034cd41531c6ccef9047222213d9be18f7ca83c": "10.0.1.4/24", # <- dest to test
  "lb-test-net": "10.0.1.6/24"
}

How should I configure the nodes' networking (I guess that is where I missed something) so that containers on the overlay network can connect to each other?


FYI, the inspect output for the overlay network on each node follows. The outputs are almost identical except for .Containers:

# In node1
vagrant@node1:~$ docker network inspect test-net
[
    {
        "Name": "test-net",
        "Id": "btkw44xkaucftn2pjqgdpc6k0",
        "Created": "2022-07-12T05:58:39.94206374Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "65f0a9719985c62d09f71cd422ba105d432f6260a5353c3584aecb038d560369": {
                "Name": "test.1.a3sjyg317nauk83rxt1glr9d3",
                "EndpointID": "c60ed56fa8f00cce93d4d39a2d2c89425c267e5a9ec83c9121398d88065509c1",
                "MacAddress": "02:42:0a:00:01:03",
                "IPv4Address": "10.0.1.3/24",
                "IPv6Address": ""
            },
            "lb-test-net": {
                "Name": "test-net-endpoint",
                "EndpointID": "5696394c8e5134617c54f2f4df741cfc96c16c1dfce7566b2e3d0f6e5a9020b2",
                "MacAddress": "02:42:0a:00:01:05",
                "IPv4Address": "10.0.1.5/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "6d52d8ef1bf0",
                "IP": "192.168.1.2"
            },
            {
                "Name": "6fd0a4423f64",
                "IP": "172.31.1.2"
            }
        ]
    }
]

# In node2
vagrant@node2:~$ docker network inspect test-net
[
    {
        "Name": "test-net",
        "Id": "btkw44xkaucftn2pjqgdpc6k0",
        "Created": "2022-07-12T05:58:40.147459194Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "e81ad230be6fc136950626c38034cd41531c6ccef9047222213d9be18f7ca83c": {
                "Name": "test.2.fdc5ozw6cqmxta03ijvkr1v1h",
                "EndpointID": "aab0cc3b802221d67625d57f43233b8d57a3ccc8826c9f6f7d784b3fea3d72d5",
                "MacAddress": "02:42:0a:00:01:04",
                "IPv4Address": "10.0.1.4/24",
                "IPv6Address": ""
            },
            "lb-test-net": {
                "Name": "test-net-endpoint",
                "EndpointID": "477a32193cc38bbc01fa54d53f454442bb2603cdf940d43c55e6d8e8c963d6f4",
                "MacAddress": "02:42:0a:00:01:06",
                "IPv4Address": "10.0.1.6/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "6fd0a4423f64",
                "IP": "172.31.1.2"
            },
            {
                "Name": "6d52d8ef1bf0",
                "IP": "192.168.1.2"
            }
        ]
    }
]

And all the ports for the overlay network are open:

# In node1
vagrant@node1:~$ netstat -tuln | grep -E '(2377|7946|4789)'
tcp6       0      0 :::2377                 :::*                    LISTEN
tcp6       0      0 :::7946                 :::*                    LISTEN
udp        0      0 0.0.0.0:4789            0.0.0.0:*
udp6       0      0 :::7946                 :::*
vagrant@node1:~$ nc -zv 172.31.1.2 7946
Connection to 172.31.1.2 7946 port [tcp/*] succeeded!
vagrant@node1:~$ nc -zv -u 172.31.1.2 7946
Connection to 172.31.1.2 7946 port [udp/*] succeeded!
vagrant@node1:~$ nc -zv -u 172.31.1.2 4789
Connection to 172.31.1.2 4789 port [udp/*] succeeded!

# In node2
vagrant@node2:~$ netstat -tuln | grep -E '(2377|7946|4789)'
tcp6       0      0 :::7946                 :::*                    LISTEN
udp        0      0 0.0.0.0:4789            0.0.0.0:*
udp6       0      0 :::7946                 :::*
vagrant@node2:~$ nc -zv 192.168.1.2 7946
Connection to 192.168.1.2 7946 port [tcp/*] succeeded!
vagrant@node2:~$ nc -zv -u 192.168.1.2 7946
Connection to 192.168.1.2 7946 port [udp/*] succeeded!
vagrant@node2:~$ nc -zv -u 192.168.1.2 4789
Connection to 192.168.1.2 4789 port [udp/*] succeeded!

Kubernetes Pods Getting in Pending State Forever

Posted: 19 Jul 2022 07:40 AM PDT

We have a CronJob that runs every 'x' minutes.

For each schedule, a Pod is created, runs, and does its job.

After 2-3 days, however, the Pod gets stuck in the Pending state: it never reaches the Running state, so the job no longer executes.

The Pod also mounts a few NFS paths when it is created.

We investigated and found that the node on which this Pod is scheduled is running out of inotify watches (the limit is 8192).

I also tried to increase the Inotify Limit using

echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

But the same issue persists.
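Before (or instead of) raising the limit further, it can help to see which processes are actually consuming inotify resources on the node. A rough diagnostic sketch (this counts open inotify instances per PID, not individual watches):

```shell
# Rough diagnostic: count open inotify file descriptors per PID to see
# which processes are consuming the inotify budget on the node:
find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null \
  | cut -d/ -f3 | sort | uniq -c | sort -rn | head
# Current per-user watch limit (8192 is a common default):
cat /proc/sys/fs/inotify/max_user_watches
```

Note that fs.inotify.max_user_instances (often 128 by default) can also be exhausted and produces similar symptoms, so it may be worth checking as well.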

I captured kubelet logs on the node on which the CronJob's Pod is scheduled.

I found the log below repeated many times. My understanding is that kubelet is unable to unmount the NFS path when the Pod that has completed its job is cleaned up.

Note: the log below is a single line; I have broken it into multiple lines for readability.

Worker Node: RHEL 7.6 (also tried 7.9, with the same issue)

Kernel Version: 3.10

Kubelet Version: 1.23.6

Struggling to fix this. Any recommendations will be helpful.

kubelet[1784]: E0630 03:42:39.927719    1784 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/nfs/8da4fd7a-dbff-429a-a470-3c09e19a1670-infra-pv-xxxxx podName:8da4fd7a-dbff-429a-a470-3c09e19a1670 nodeName:}" failed.
No retries permitted until 2022-06-30 03:44:41.927675124 -0400 EDT m=+37538.561944740 (durationBeforeRetry 2m2s).
Error: error cleaning subPath mounts for volume "infra" (UniqueName: "kubernetes.io/nfs/8da4fd7a-dbff-429a-a470-3c09e19a1670-infra-pv-xxxx")
pod "8da4fd7a-dbff-429a-a470-3c09e19a1670" (UID: "8da4fd7a-dbff-429a-a470-3c09e19a1670") :
error processing /var/lib/kubelet/pods/8da4fd7a-dbff-429a-a470-3c09e19a1670/volume-subpaths/infra-pv-xxxx/cronjob:
error cleaning subpath mount /var/lib/kubelet/pods/8da4fd7a-dbff-429a-a470-3c09e19a1670/volume-subpaths/infra-pv-xxxxx/cronjob/6:

Failed to unmount path /var/lib/kubelet/pods/8da4fd7a-dbff-429a-a470-3c09e19a1670/volume-subpaths/infra-pv-xxxxx/cronjob/6

Error loading WSUS Console after IIS Reconfigurations

Posted: 19 Jul 2022 07:08 AM PDT

I encountered an error loading the WSUS Console after I deployed some organization-mandated reconfigurations to the IIS server that supports WSUS. As I found the solution myself and did not see this anywhere else online, I figured documenting it here may help others.

My environment was Windows Server 2019 running in a virtual machine and hosting only what the default WSUS installation provides.

--

After deploying IIS reconfigurations, the WSUS Console fails with an error that reads like this:

The WSUS administration console was unable to connect to the WSUS Server via the remote API. Verify that the Update Services service, IIS and SQL are running on the server. If the problem persists, try restarting IIS, SQL, and the Update Services Service.

The WSUS administration console has encountered an unexpected error. This may be a transient error; try restarting the administration console. If this error persists, try removing the persisted preferences for the console by deleting the wsus file under %appdata%\Microsoft\MMC.

System.IO.IOException -- The handshake failed due to an unexpected packet format.
Source: System
Stack Trace:
   at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
   at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.ConnectStream.WriteHeaders(Boolean async)
** this exception was nested inside of the following exception **
System.Net.WebException -- The underlying connection was closed: An unexpected error occurred on a send.
Source: Microsoft.UpdateServices.Administration
Stack Trace:
   at Microsoft.UpdateServices.Administration.AdminProxy.CreateUpdateServer(Object[] args)
   at Microsoft.UpdateServices.UI.SnapIn.Scope.ServerSummaryScopeNode.GetUpdateServer(PersistedServerSettings settings)
   at Microsoft.UpdateServices.UI.SnapIn.Scope.ServerSummaryScopeNode.ConnectToServer()
   at Microsoft.UpdateServices.UI.SnapIn.Scope.ServerSummaryScopeNode.get_ServerTools()

This is addressed in multiple places online; however, none of the solutions I found worked. The error appears to have something to do with SSL configuration, but in my case that was a red herring. After following the advice online, including deleting the wsus file under %appdata%\Microsoft\MMC as the error suggests, I was still unable to get the console to connect.

Use Smartcard Reader on Azure Remote Desktop

Posted: 19 Jul 2022 07:31 AM PDT

What I want to achieve:

  1. Plug a Smartcard Reader on my Laptop
  2. Connect to my private Azure VPN
  3. Use Remote Desktop to access a server with a username and password, with the smart card and USB device options checked.
  4. My Smartcard Reader appears on my device so I can sign a document in a specific application

What actually happens: the Smartcard Reader never appears on the remote machine.

What I did try (no specific order):

  • On Remote Desktop window, checking all devices options
  • Adding a new Device through Devices and Printers' window
  • Checking Windows' Device Manager
  • Plugging the Card Reader before and after authentication

Is there something I'm missing here? Maybe some option on Azure Firewall, or something else I must install on my server?

Thank you very much!

Why am I missing /var/run/sshd after every boot?

Posted: 19 Jul 2022 09:29 AM PDT

I'm running an Ubuntu 16.04 container under Proxmox 5.2-11. After applying the latest round of patches [1], I'm unable to log in at the console or over ssh.

I mounted the container root FS on the hypervisor and added pts/0 to /etc/security/access.conf (we run pam_access), which allowed root login at the console. We have root : lxc/tty0 lxc/tty1 lxc/tty2 in access.conf, which I thought was sufficient, so why pts/0 is needed now is puzzling.

I noticed ssh was not running so tried starting it by hand (/usr/sbin/sshd -DDD -f /etc/ssh/sshd_config) and received this error:

Missing privilege separation directory: /var/run/sshd  

I created the directory by hand, started ssh, and was finally able to log in, but the problem returns after every reboot: the directory is not being recreated. The only useful bits are in journalctl [2]; the only interesting part is something about "operation not permitted", with no further info.

I'm not too familiar with 16.04, so I'm wondering how I can find out more about the problem. I have no /var/log/syslog or /var/log/messages, only kern.log, so I'm kind of lost.

[1]

systemd-sysv 229-4ubuntu21.9
libpam-systemd 229-4ubuntu21.9
libsystemd0 229-4ubuntu21.9
systemd 229-4ubuntu21.9
udev 229-4ubuntu21.9
libudev1 229-4ubuntu21.9
iproute2 4.3.0-1ubuntu3.16.04.4
libsasl2-modules-db 2.1.26.dfsg1-14ubuntu0.1
libsasl2-2 2.1.26.dfsg1-14ubuntu0.1
ldap-utils 2.4.42dfsg-2ubuntu3.4
libldap-2.4-2 2.4.42dfsg-2ubuntu3.4
libsasl2-modules 2.1.26.dfsg1-14ubuntu0.1
libgs9-common 9.25dfsg1-0ubuntu0.16.04.3
ghostscript 9.25dfsg1-0ubuntu0.16.04.3
libgs9 9.25dfsg1-0ubuntu0.16.04.3

[2]

Nov 27 10:13:48 host16 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 27 10:13:48 host16 sshd[474]: Missing privilege separation directory: /var/run/sshd
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Control process exited, code=exited status=255
Nov 27 10:13:48 host16 systemd[1]: Failed to start OpenBSD Secure Shell server.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Unit entered failed state.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Failed with result 'exit-code'.
Nov 27 10:13:48 host16 mysqld_safe[495]: Starting mysqld daemon with databases from /var/lib/mysql/mysql
Nov 27 10:13:48 host16 mysqld[500]: 181127 10:13:48 [Note] /usr/sbin/mysqld (mysqld 10.0.36-MariaDB-0ubuntu0.16.04.1) starting as process 499 ...
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Service hold-off time over, scheduling restart.
Nov 27 10:13:48 host16 systemd[1]: Stopped OpenBSD Secure Shell server.
Nov 27 10:13:48 host16 systemd[1]: Failed to reset devices.list on /system.slice/ssh.service: Operation not permitted
Nov 27 10:13:48 host16 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 27 10:13:48 host16 sshd[502]: Missing privilege separation directory: /var/run/sshd
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Control process exited, code=exited status=255
Nov 27 10:13:48 host16 systemd[1]: Failed to start OpenBSD Secure Shell server.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Unit entered failed state.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Failed with result 'exit-code'.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Service hold-off time over, scheduling restart.
Nov 27 10:13:48 host16 systemd[1]: Stopped OpenBSD Secure Shell server.
Nov 27 10:13:48 host16 systemd[1]: Failed to reset devices.list on /system.slice/ssh.service: Operation not permitted
Nov 27 10:13:48 host16 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 27 10:13:48 host16 sshd[503]: Missing privilege separation directory: /var/run/sshd
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Control process exited, code=exited status=255
Nov 27 10:13:48 host16 systemd[1]: Failed to start OpenBSD Secure Shell server.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Unit entered failed state.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Failed with result 'exit-code'.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Service hold-off time over, scheduling restart.
Nov 27 10:13:48 host16 systemd[1]: Stopped OpenBSD Secure Shell server.
Nov 27 10:13:48 host16 systemd[1]: Failed to reset devices.list on /system.slice/ssh.service: Operation not permitted
Nov 27 10:13:48 host16 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 27 10:13:48 host16 sshd[504]: Missing privilege separation directory: /var/run/sshd
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Control process exited, code=exited status=255
Nov 27 10:13:48 host16 systemd[1]: Failed to start OpenBSD Secure Shell server.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Unit entered failed state.
Nov 27 10:13:48 host16 systemd[1]: ssh.service: Failed with result 'exit-code'.
Nov 27 10:13:49 host16 systemd[1]: ssh.service: Service hold-off time over, scheduling restart.
Nov 27 10:13:49 host16 systemd[1]: Stopped OpenBSD Secure Shell server.
Nov 27 10:13:49 host16 systemd[1]: ssh.service: Start request repeated too quickly.
Nov 27 10:13:49 host16 systemd[1]: Failed to start OpenBSD Secure Shell server.
Nov 27 10:13:49 host16 systemd[1]: ssh.service: Unit entered failed state.
Nov 27 10:13:49 host16 systemd[1]: ssh.service: Failed with result 'start-limit-hit'.
Nov 27 10:13:49 host16 systemd[1]: Started /etc/rc.local Compatibility.
Nov 27 10:13:49 host16 systemd[1]: Failed to reset devices.list on /system.slice/plymouth-quit.service: Operation not permitted
Nov 27 10:13:49 host16 systemd[1]: Starting Terminate Plymouth Boot Screen...
Nov 27 10:13:49 host16 systemd[1]: Failed to reset devices.list on /system.slice/plymouth-quit-wait.service: Operation not permitted
Nov 27 10:13:49 host16 systemd[1]: Starting Hold until boot process finishes up...
Nov 27 10:13:49 host16 systemd[1]: Failed to reset devices.list on /system.slice/rc-local.service: Operation not permitted
Nov 27 10:13:49 host16 systemd[1]: Started Hold until boot process finishes up.
Nov 27 10:13:49 host16 systemd[1]: Started Container Getty on /dev/pts/1.
Nov 27 10:13:49 host16 systemd[1]: Started Container Getty on /dev/pts/0.
Nov 27 10:13:49 host16 systemd[1]: Failed to reset devices.list on /system.slice/console-getty.service: Operation not permitted
Nov 27 10:13:49 host16 systemd[1]: Started Console Getty.
Nov 27 10:13:49 host16 systemd[1]: Reached target Login Prompts.
Nov 27 10:13:49 host16 systemd[1]: Started Terminate Plymouth Boot Screen.
Nov 27 10:13:52 host16 nslcd[338]: accepting connections
Nov 27 10:13:52 host16 nslcd[275]:    ...done.
Nov 27 10:13:52 host16 systemd[1]: Started LSB: LDAP connection daemon.
Nov 27 10:13:52 host16 systemd[1]: Failed to reset devices.list on /system.slice/cron.service: Operation not permitted
Nov 27 10:13:52 host16 systemd[1]: Started Regular background program processing daemon.
Nov 27 10:13:52 host16 systemd[1]: Failed to reset devices.list on /system.slice/atd.service: Operation not permitted

Added systemd-tmpfiles --create output

Really bizarre... I checked /tmp and those files don't exist. [screenshot of the systemd-tmpfiles --create output]
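Since /var/run is a tmpfs that is recreated on every boot, one possible workaround (an assumption on my part, not a confirmed fix for the "Operation not permitted" errors above) is a systemd-tmpfiles drop-in that recreates sshd's privilege-separation directory:

```
# /etc/tmpfiles.d/sshd.conf -- hypothetical drop-in:
# recreate sshd's privilege-separation directory at boot
d /var/run/sshd 0755 root root
```

After adding it, running systemd-tmpfiles --create (or rebooting) should recreate the directory before ssh.service starts.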

squid block domain transparent

Posted: 19 Jul 2022 08:01 AM PDT

How do I block a domain with Squid in transparent mode on Debian? I have

http_port 3128 transparent  

but I still have to manually configure the proxy in the browser.

NB: I'm using Debian Wheezy with squid 2.7
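With http_port ... transparent, Squid only intercepts traffic that is actually redirected to it at the network layer; without a redirect rule, browsers still need a manual proxy setting. A minimal sketch (the LAN interface name and ports are assumptions, adjust to your setup):

```
# squid.conf (Squid 2.7 syntax, as in the question):
http_port 3128 transparent

# On the gateway, redirect outbound HTTP to Squid
# (hypothetical LAN-side interface eth0):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```

Once the redirect is in place, domain-blocking ACLs (e.g. acl blocked dstdomain with http_access deny) apply to intercepted traffic as well.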

Logging response headers in Apache reverse proxy without sending them to the client

Posted: 19 Jul 2022 09:05 AM PDT

We use several Apache servers as reverse proxies in front of numerous backend servers. The backend servers send an HTTP response header ("Cast") which contains an internal name of the backend server.

In the reverse proxy I would like to log the content of the backend server's response header and prevent the header from being sent to the client.

Logging the header is simple: insert %{Cast}o in our custom LogFormat configuration. Preventing the header from being sent to the client is also easy, using Header unset Cast.

The only problem is that once the header is unset, it can no longer be logged.

Is there a way to store the backend's response header in a variable, unset the header and log the variable?

Notes

  • The Apache servers being used as reverse proxies are Apache 2.2 on RHEL 6 and 2.4 on RHEL 7
  • Reverse proxy rules use either ProxyPass or RewriteRule ... [P]
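On the 2.4 proxies, one approach worth trying is mod_headers' "note" action, which copies a response header into an internal note before the header is unset; notes survive for logging via %{...}n. Note this is an assumption about version availability: the note action was added in Apache 2.4.7, so stock RHEL 7 httpd (2.4.6) and the 2.2 servers may not support it. The header and note names below follow the question; the LogFormat fields are illustrative:

```
# Copy the backend's Cast header into a note, then strip the header:
Header note Cast cast_value
Header unset Cast

# Log the note instead of the (now removed) response header:
LogFormat "%h %l %u %t \"%r\" %>s %b %{cast_value}n" cast_log
```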

Why does my server running nginx/php-fpm keep losing session capability without generating any errors?

Posted: 19 Jul 2022 10:01 AM PDT

I am managing a server that has a couple dozen websites on it and they have all been working fine until last week when it was noticed that one site had seemingly lost the ability to maintain session data. Then another. (I am guessing it is affecting all sites on this server but just has not been reported yet.) I changed absolutely nothing in either site's configs recently. I have added no software to the server recently. I have not changed the general nginx or php-fpm configs. There are no errors in the nginx or php-fpm error logs that correspond to this failure. Restarting php-fpm appears to clear up the problem at least temporarily. Inevitably, the problem recurs. How is it possible that php-fpm can fail like this without producing an error message somewhere? I have been googling extensively and have not found anyone else with this problem.

The server is running RHEL 6 with nginx and php-fpm (remi repo). I can't remember if this server is running APC but I don't think it is. All patches are up to date.

I am guessing I just have hit some sort of threshold where the current php-fpm configs are insufficient, though I don't understand why I am getting no errors when that limit is reached. Here are what I suspect are the relevant php-fpm settings...

pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on

Is there an error log somewhere I'm missing where this would be reported? As I mentioned, there is nothing in /var/log/php-fpm/www-error.log, nor in the general nginx error log or the site-specific nginx error logs.

P.S.: I do get other kinds of error messages in all of the logs I mentioned, so the lack of error messages is not a permissions issue.

Here are df outputs (edited to remove identifying physical paths)...

# df -h
Filesystem            Size  Used Avail Use% Mounted on
xxx                   8.4G  3.8G  4.2G  48% /
xxx                   7.8G     0  7.8G   0% /dev/shm
xxx                   477M   79M  373M  18% /boot
xxx                   976M  713M  213M  78% /home
xxx                   976M   30M  896M   4% /tmp
xxx                   9.8G  4.6G  4.7G  50% /var

# df -i
Filesystem            Inodes   IUsed  IFree    IUse% Mounted on
xxx                   547584   87083  460501   16%   /
xxx                   2041821  1      2041820  1%    /dev/shm
xxx                   128016   50     127966   1%    /boot
xxx                   65536    19285  46251    30%   /home
xxx                   65536    173    65363    1%    /tmp
xxx                   655360   19441  635919   3%    /var

And here is the php-fpm status page while the site is not allowing sessions to be saved...

pool:                 www
process manager:      dynamic
start time:           06/Aug/2015:10:53:06 -0400
start since:          332263
accepted conn:        2899
listen queue:         0
max listen queue:     0
listen queue len:     128
idle processes:       9
active processes:     1
total processes:      10
max active processes: 9
max children reached: 0
slow requests:        0

Launchd script's output not being logged to system.log

Posted: 19 Jul 2022 08:01 AM PDT

Sample script:

#!/bin/bash
echo "Hello?"
... (other things)

When run via launchd, it definitely runs (the other things are being done) but nothing shows up in /var/log/system.log. I'm running OS 10.9 Mavericks.

I tried replacing that line with

syslog -s "HELLO?"  

but that also shows nothing. I should also add that I'm running the launchd script as a login script but specifying that it run as a daemon user. The user does not have root access. However, I've also tried running syslog as root, and still, nothing shows up in the logs.

This happens on multiple systems.
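One way to sidestep syslog entirely is to have launchd capture the script's stdout and stderr to files, via the StandardOutPath and StandardErrorPath keys in the job's plist. A fragment as a sketch (the log file paths are hypothetical, and the daemon user must be able to write to them):

```
<!-- Inside the job's .plist dict: -->
<key>StandardOutPath</key>
<string>/var/log/myjob.out.log</string>
<key>StandardErrorPath</key>
<string>/var/log/myjob.err.log</string>
```

After reloading the job with launchctl, the echo output should land in the .out.log file even when nothing reaches system.log.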

CouchDB administrator password reset

Posted: 19 Jul 2022 09:05 AM PDT

I have an install of CouchDB, and somewhere along the line a malformed request via curl left my admin account with a password I don't know. Short of setting up another CouchDB server, replicating to it, and replicating back after a reinstall, is there anything I can do? I have edited local.ini; I deleted it; I replaced it with the one from the source folder. I restart not only CouchDB but the entire server after every change, because nothing seems to work. Has anyone else run into this?
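For what it's worth, CouchDB (1.x) reads admin credentials from the [admins] section of local.ini, and a plaintext password written there is replaced with a hash the next time the server starts. A sketch, assuming local.ini is otherwise intact and the account name is admin:

```
[admins]
; Hypothetical replacement password: CouchDB hashes this
; plaintext value on the next server start.
admin = newsecret
```

If edits to local.ini appear to have no effect, it is worth confirming that the running instance is actually reading that file (CouchDB can load several .ini files in sequence, and a later one can override earlier ones).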

How do i stop Squid proxy from corrupting jar-files?

Posted: 19 Jul 2022 10:01 AM PDT

Our internal corporate NTLM proxy (also Squid, I think) randomly returns 407 errors for some reason, and it's pointless to even try to get someone to fix that.

On my Windows computer I have an installation of the Cntlm proxy on port 3128, so that I can use non-NTLM-aware software. However, I still randomly get 407 errors from the corporate proxy.

To work around this, I set up a Squid Cache (Version 2.7.STABLE8) proxy on localhost forwarding to Cntlm, thinking I could have it retry on error.

I use the following configuration:

cache_dir ufs c:/ws/tmp/squidcache 500 16 256
http_port 3127
cache_peer 127.0.0.1 parent 3128 0 no-query default
acl all src 127.0.0.1
never_direct allow all
http_access allow all
retry_on_error on
maximum_single_addr_tries 10
maximum_object_size 100 MB

It mostly works, but the problem is that jar files end up slightly corrupted. I haven't figured out exactly how they are corrupted, but they are generally a couple of bytes longer than they should be, and even bytes at the beginning of the files are corrupted. And it's different each time.

I found http://javatechniques.com/blog/squid-corrupts-jar-files/ which suggests it might be a problem with MIME type configuration and Squid treating jar files as ASCII, but it does not tell you how to fix it in Squid.

I tried adding

\.jar$      application/octet-stream        anthony-compressed.gif  -   image   +download
# the default
.           application/octet-stream        anthony-unknown.gif     -   image   +download

to Squid's mime.conf, and cleared the cache, but that didn't help. I didn't really expect it to, since I think those entries are only used when proxying FTP.

How do I check a PTR record?

Posted: 19 Jul 2022 08:49 AM PDT

I need to check a PTR record to make sure that a script I have is sending emails which will actually be received by my users and not be incorrectly marked as spam.

I understand that the ISP which owns the IP range has to set up the PTR record, but how do I check if it is already set up?
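One quick way to check from any machine is a reverse lookup with dig, whose -x flag builds the in-addr.arpa name for you. A sketch (the IP below is a documentation example; substitute your sending address):

```shell
ip=203.0.113.7
# The PTR record lives under the reversed-octet in-addr.arpa name:
rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')
echo "$rev"   # 7.113.0.203.in-addr.arpa
# dig -x does the same reversal automatically and queries for PTR
# (skipped here if dig or network access is unavailable):
if command -v dig >/dev/null 2>&1; then dig -x "$ip" +short || true; fi
```

If the dig query prints nothing, no PTR record is currently published for that address.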
