Saturday, October 30, 2021

Recent Questions - Server Fault


cert-manager k8s not generating tls.crt

Posted: 30 Oct 2021 10:46 PM PDT

I've installed cert-manager exactly as described in this link https://medium.com/@jorge.gongora2610/how-to-get-a-free-ssl-certificate-for-kubernetes-with-cert-manager-26339b95e92e

When I deployed the ingress.yaml of my node-hello server in my namespace, all I see in the secret is a tls.key without a tls.crt.

What am I doing wrong?

Please help.

Thanks!
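For anyone debugging the same symptom: cert-manager records its progress in intermediate resources, so a missing tls.crt usually means the issuance pipeline stalled somewhere along the way. A sketch of the usual inspection sequence (resource names and namespaces are placeholders; adjust to wherever the tutorial installed cert-manager):

kubectl describe certificate <cert-name> -n <namespace>
kubectl get certificaterequest,order,challenge -n <namespace>
kubectl logs -n cert-manager deploy/cert-manager

The describe output's Events section, and the state of the Order/Challenge resources, normally say which step (DNS/HTTP challenge, issuer misconfiguration, etc.) is stuck.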

Google Cloud Load Balancer with App Engine - 404

Posted: 30 Oct 2021 10:36 PM PDT

I'm trying to set up a load balancer using a serverless backend service (App engine). I followed the tutorial here

  • The external IP address is reserved

  • the SSL certificate was created (clicking on the SSL cert. name shows the domain status with green ticks, and certificate chain)

So, the frontend seems to be functional. The problem I have seems to come from the backend.

I selected a serverless NEG as the backend type and the HTTP/2 protocol, and I enabled Cloud CDN with the recommended "cache static content" cache mode.

I added a new backend. The selected region is 'Central US' just like with my AppEngine. As for the NEG Type, I selected App Engine, and the default service name.

I think I have the most basic backend configuration we can have here, but something is not working. This: https://LOAD_BALANCER_IP_ADDRESS loads a 404, along with Content Security Policy errors (the page's settings blocked the loading of a resource at inline ("default-src")).

The troubleshooting guide says a 404 is due to the serverless resource that doesn't exist. However, if I reset my custom DNS settings so they don't point to the LB, it does work. My App engine is there and it's operational. The App Engine logs are there to confirm it.

It seems to me the problem comes from the backend instance of the LB.

Now, in the load balancing menu, I go to the 'Backends' section at the top and select my backend. Here I have the list of 'General properties' of my backend. Except, under 'Backends', it says the following: "Backends contain instance groups of VMs or network endpoint groups. This backend service has no backends yet", followed by an edit link.

From there, I can click the edit link, which redirects me to the 'Backend service edit' menu. I DO have a backend selected in there. I did create a serverless NEG using App Engine, as explained above.

We have the option to see a monitoring chart, when we select the LB, then the monitoring section. In my case, it shows traffic is balanced between Europe/America/Asia, the backend service subsection shows the name of my backend service. However, the bottom subsection named 'Backend Instance' shows : NO_BACKEND_SELECTED

I'm assuming this is where the issue is. Has anyone been able to build the same configuration with App Engine? What does NO_BACKEND_SELECTED mean? There is no explanation in Google's docs.
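A sketch of how the backend attachment can be double-checked from the CLI (the gcloud commands are standard; the resource names are placeholders): the backends field of the backend service should list the serverless NEG, and the NEG should exist in the same region as the App Engine app.

gcloud compute backend-services describe my-backend-service --global
gcloud compute network-endpoint-groups list --regions=us-central1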

configuration management tool that can add text to a file unless already present

Posted: 30 Oct 2021 10:28 PM PDT

I would like to know if any of the widely used tools like Puppet / Chef / Ansible etc. can manage state that consists of the presence (or absence) of some content in a particular file or files, regardless of any other contents of the file. I am asking not just "in theory", i.e. whether a clever recipe / extension for the tool could be written to do this, but rather: is it reasonably easy or natural to do so, or is there perhaps a recipe like this that comes with the distribution?

An example would be adding the customary line

${IPADDR}   ${FQDN} ${HOST}  

to /etc/hosts, assuming all the ingredients are known, if and only if a line of this form isn't already present. But note that I'm not interested in solving this particular special case, but rather the general case of detecting a chunk of content in a file and adding it if necessary.

Also, this is similar to but not the same as applying patches, because the chunk could be anywhere in the file, not tied to any surrounding context.
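To make the question concrete, this is roughly what the asked-for recipe looks like in Ansible, whose stock lineinfile and blockinfile modules do exactly this (the module names are real; paths and values are illustrative):

# lineinfile appends `line` only if no existing line matches `regexp`;
# blockinfile does the same for a multi-line chunk, delimited by marker
# comments that the module manages itself.
- name: ensure host entry is present
  ansible.builtin.lineinfile:
    path: /etc/hosts
    regexp: '{{ fqdn }}'
    line: '{{ ipaddr }}   {{ fqdn }} {{ host }}'

- name: ensure a multi-line chunk is present
  ansible.builtin.blockinfile:
    path: /etc/example.conf
    block: |
      setting_one = on
      setting_two = off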

Allowing a single user or domain to relay through Postfix

Posted: 30 Oct 2021 05:40 PM PDT

I'm running Postfix on a RHEL7 server. I've started to use a new iPhone to send email, and I'm seeing this in mail.log:

Oct 30 20:15:56 kyushu2 postfix/smtpd[31145]: warning: hostname ue.tmodns.net does not resolve to address 172.58.200.63
Oct 30 20:15:56 kyushu2 postfix/smtpd[31145]: connect from unknown[172.58.200.63]
Oct 30 20:15:56 kyushu2 postfix/smtpd[31145]: NOQUEUE: reject: RCPT from unknown[172.58.200.63]: 454 4.7.1 <xxx@kxxx.com>: Relay access denied; from=<tim@timboyer.org> to=<xxx@xxx.com> proto=ESMTP helo=<smtpclient.apple>

My assumption is that Postfix sees my phone as an unauthenticated client trying to use timboyer.org as an open relay. I don't particularly want to allow all iPhone users to use my mail server as a relay. Is there a way to allow just @timboyer.org senders to relay?
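For context, the usual approach is not to whitelist a sender domain (a From address is trivially forged) but to require SMTP authentication and let authenticated clients relay. A sketch of the relevant main.cf policy (the parameter names are standard Postfix; the exact values are illustrative, not a drop-in fix):

# clients that authenticate via SASL may relay; everyone else may only
# deliver to local destinations
smtpd_sasl_auth_enable = yes
smtpd_relay_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination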

Thanks,

Tim

GCP site-to-site VPN traffic through Palo Alto

Posted: 30 Oct 2021 05:16 PM PDT

I'm looking for some directions. Has anyone implemented the use case described in this lab (Palo Alto Networks: VM-Series Advanced Deployment) with a site-to-site VPN to on-prem?

Question: in which VPC did you terminate the VPN so that both inbound and outbound traffic pass through the firewall?

I terminated it in a VPC other than the firewall's, in a peered GCP hub-and-spoke topology; now both inbound and outbound traffic bypass the firewall.

Another approach that I took, which failed:

Create an internal load balancer in the untrusted VPC, terminate the VPN traffic on the untrusted side, and pipe the inbound traffic through this ILB to the backend service (PAN) instances. The problem with this approach is that the internal load balancer fails its backend health checks, because I'm unable to add a route for the GCP health-check source IPs via the untrusted NIC, since the route has to be unique.

Has anyone implemented something similar? Can you share some thoughts and ideas?

How important is it to have authentication on a dockerized database

Posted: 30 Oct 2021 03:26 PM PDT

The docker-compose example on the MongoDB Docker Hub page has a root password provided to the database and the app, but as far as I know, with Docker's networking, only the other containers defined in the compose file have access to the database container.

So how important is it to have a password on the database if the container isn't exposed externally?
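For reference, a minimal sketch of the pattern in question (the mongo image's MONGO_INITDB_* variables are documented; the service names and app image are hypothetical). The database port is reachable only on the compose network unless a ports: mapping explicitly publishes it to the host:

services:
  db:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
  app:
    image: my-app        # hypothetical application image
    environment:
      MONGO_URL: mongodb://root:example@db:27017/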

Kubernetes pods can ping external IPs but not any domain

Posted: 30 Oct 2021 03:21 PM PDT

I have a Kubernetes cluster using the Antrea CNI.

The problem is that I can't curl any domain names.

I can do nslookup inside the pod and get the IP of any domain, but I can't directly curl the domain.

For example, I can't curl https://google.com but I can curl https://1.1.1.1

Am I missing something, or is it normal? What do I need to do in order to fix this?

Here is the output of ip route show table all from the pod's container:

default via 10.42.4.1 dev eth0
10.42.4.0/24 dev eth0 scope link src 10.42.4.26
broadcast 10.42.4.0 dev eth0 table local scope link src 10.42.4.26
local 10.42.4.26 dev eth0 table local scope host src 10.42.4.26
broadcast 10.42.4.255 dev eth0 table local scope link src 10.42.4.26
broadcast 127.0.0.0 dev lo table local scope link src 127.0.0.1
local 127.0.0.0/8 dev lo table local scope host src 127.0.0.1
local 127.0.0.1 dev lo table local scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local scope link src 127.0.0.1
fe80::/64 dev eth0 metric 256
local ::1 dev lo table local metric 0
local fe80::e08c:e8ff:fef3:4877 dev eth0 table local metric 0
multicast ff00::/8 dev eth0 table local metric 256

My cluster's cidr is 10.42.0.0/16
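A debugging sketch that separates the two failure modes (the IP is a placeholder; substitute an address returned by nslookup): curl's --resolve option pins the name to an address without consulting DNS, so if this succeeds while a plain curl to the domain fails, the problem is in how curl resolves names (e.g. /etc/resolv.conf options in the pod) rather than in connectivity.

curl -v --resolve google.com:443:142.250.80.46 https://google.com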

How to Configure NGINX to run as user "test-ssh"

Posted: 30 Oct 2021 03:18 PM PDT

I already created the user "test-ssh" and added it to the group "clp". Is it possible to use this user to run NGINX?

I created the group with sudo groupadd clp, then created the user and added it to the clp group with sudo useradd -g clp test-ssh.
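If the goal is just to run the worker processes as that account, nginx has a top-level user directive for this (the directive is standard; whether the account is suitable is a separate question). Note the master process must still start as root to bind ports below 1024:

# in nginx.conf, at the top level:
user test-ssh clp;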

ssh with WAN IP timeout

Posted: 30 Oct 2021 05:29 PM PDT

I'm having trouble setting up SSH clone access for Gitea. I map port 2222 to 22 for the Docker container, and port forwarding is set up on my router. I can ssh git@localhost -p 2222, but ssh git@<public_ip> -p 2222 fails with Connection timed out.

I checked that the port forwarding works by launching an HTTP server with python3 -m http.server 2222 and opening http://<public_ip>:2222, which works.

I am running the Docker image within openmediavault, which runs as a VM in Proxmox. I haven't touched the firewall settings on either of them. Any ideas?

ngx_http_proxy_connect_module with user and password

Posted: 30 Oct 2021 03:37 PM PDT

I am using Nginx with ngx_http_proxy_connect_module, and I want to know if it is possible to use it with a user and password, something like this:

curl -vvv "ifconfig.me" -x user:password@localhost:8000  

Here is my nginx.conf:

worker_processes auto;
daemon off;

events { }

http {
    server_names_hash_bucket_size 128;

    server {
        listen 8000;

        resolver 1.1.1.1;

        proxy_connect;
        proxy_connect_allow all;
        proxy_connect_connect_timeout 10s;
        proxy_connect_read_timeout 10s;
        proxy_connect_send_timeout 10s;

        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;

        location / {
            proxy_pass http://$http_host;
            proxy_set_header Host $http_host;
        }
    }
}

Testing, no luck:

curl -vvv "ifconfig.me" -x user:password@localhost:8000
*   Trying 127.0.0.1:8000...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8000 (#0)
* Proxy auth using Basic with user 'user'
> GET http://ifconfig.me/ HTTP/1.1
> Host: ifconfig.me
> Proxy-Authorization: Basic dXNlcjpwYXNzd29yZA==
> User-Agent: curl/7.68.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.21.3
< Date: Sat, 30 Oct 2021 18:07:44 GMT
< Content-Type: text/html
< Content-Length: 179
< Connection: keep-alive
* Authentication problem. Ignoring this.
< WWW-Authenticate: Basic realm="Restricted Content"
<
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.21.3</center>
</body>
</html>
* Connection #0 to host localhost left intact

Installing Kubernetes on Ubuntu 18.04 LTS (with Docker) - fails on init

Posted: 30 Oct 2021 03:02 PM PDT

I am attempting to install Kubernetes on VMs running Ubuntu 18.04 LTS, and running into a problem when trying to initialise the system: the kubeadm init command results in failure (full log below).

VM: 2 CPUs, 512 MB RAM, 100 GB disk, running under VMware ESXi 6.

OS: Ubuntu 18.04 LTS server install, fully updated via apt update and apt upgrade before beginning the Docker and Kubernetes installs.

Docker installed as per instructions here, install completes with no errors: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

Kubernetes installed as per instructions here, except for the Docker section (as following those instructions produces a PreFlight error re systemd/cgroupfs): https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/

All installation appears to proceed smoothly with no errors reported, however attempting to start Kubernetes then fails, as shown in the log below.

I am entirely new to both Docker and Kubernetes, though I get the main concepts and have experimented with the online tutorials on kubernetes.io; but until I can get a working system installed I'm unable to progress further. At the point at which kubeadm attempts to start the cluster, everything hangs for four minutes and then exits with the timeout shown below.

root@k8s-master-dev:~# sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-dev kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.24.0.100]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-dev localhost] and IPs [10.24.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-dev localhost] and IPs [10.24.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

I've had a look at both the log journal data and the docker logs but other than lots of timeouts, can't see anything that explains the actual error. Can anyone advise where I should be looking, and what's most likely to be the cause of the problem?

Things already tried: Removing all IPTables rules and setting defaults to "accept". Running with Docker install as per the vitux.com instructions (gives a PreFlight warning but no errors, but same timeout on attempting to init Kubernetes).

Update: Following from @Crou's comment, here is what happens now if I try just 'kubeadm init' as root:

root@k8s-master-dev:~# uptime
 16:34:49 up  7:23,  3 users,  load average: 10.55, 16.77, 19.31
root@k8s-master-dev:~# kubeadm init
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10251]: Port 10251 is in use
        [ERROR Port-10252]: Port 10252 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Re the very high load shown by uptime: it starts as soon as the init is first attempted, and the load remains very high unless a kubeadm reset is done to clear everything down.

dcdiag DNS test fails, but DNS seems to be working properly

Posted: 30 Oct 2021 10:03 PM PDT

Active Directory setup:

Single forest, 3 domains, with 1 domain controller each. All running server 2008 R2, with the same domain/forest functional level.

DNS clients are configured as follows:

DC1 -> DC2 (prim), DC1 (sec)

DC2 -> DC1 (prim), DC2 (sec)

DC3 -> DC1 (prim), DC3 (sec)

All zones are replicated throughout the entire forest, and each DNS server is set up with 8.8.8.8/8.8.4.4 as forwarders.

Problem:

Everything appears to be working as it should. AD is replicating properly, DNS is responsive and not causing any issues, BUT when I run dcdiag /test:dns, the enterprise DNS test fails on DC2 and DC3 with the following error:

TEST: Forwarders/Root hints (Forw) Error: All forwarders in the forwarder list are invalid.

Error: Both root hints and forwarders are not configured or

broken. Please make sure at least one of them works.

Symptoms:

Event viewer is constantly showing these 2 event ID's for DNS client:

ID 1017 - The DNS server's response to a query for name INTERNAL RECORD indicates that no records of the type queried are available, but could indicate that other records for the same name are present.

ID 1019 - There are currently no IPv6 DNS servers configured for any interface on this host. Please configure DNS server settings, or renew your dynamic IP settings. (strange, as IPv6 is disabled on the network card)

nslookup is working as expected, and finding any and all records appearing in ID 1017, no matter which DNS server I select to use.

While running dcdiag, the following events appear:

Event ID 10009: DCOM was unable to communicate with the computer 8.8.4.4 using any of the configured protocols.

DCOM was unable to communicate with the computer 8.8.8.8 using any of the configured protocols.

Event ID 1014: Name resolution for the name 1.0.0.127.in-addr.arpa timed out after none of the configured DNS servers responded.

I've run wireshark while dcdiag is running its test, and the internal DNS servers do resolve anything thrown at them, but then the server continues querying Google DNS and root hints.

What the hell is going on? What am I missing here?
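A sketch of quick checks from an affected DC (the dcdiag switches are standard; the log file name is arbitrary): first confirm the forwarders answer plain DNS queries at all from the DC, then rerun only the DNS test verbosely across the enterprise and read the full log.

nslookup google.com 8.8.8.8
dcdiag /test:dns /v /e /f:dcdiag-dns.log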

Edit: The actual enterprise DNS test error messages are:

Summary of test results for DNS servers used by the above domain controllers:

    DNS server: 128.63.2.53 (h.root-servers.net.)
        1 test failure on this DNS server
        Name resolution is not functional. _ldap._tcp.domain1.local. failed on the DNS server 128.63.2.53

    DNS server: 128.8.10.90 (d.root-servers.net.)
        1 test failure on this DNS server
        PTR record query for the 1.0.0.127.in-addr.arpa. failed on the DNS server 128.8.10.90
        Name resolution is not functional. _ldap._tcp.domain1.local. failed on the DNS server 128.8.10.90

    DNS server: 192.112.36.4 (g.root-servers.net.)
        1 test failure on this DNS server
        Name resolution is not functional. _ldap._tcp.domain1.local. failed on the DNS server 192.112.36.4

etc., etc.

Traefik + k8s + Let's Encrypt wildcard SSL + Cloudflare issue

Posted: 30 Oct 2021 09:03 PM PDT

I'm trying to set up a reverse proxy with wildcard SSL using Traefik, with a DNS challenge against a Cloudflare zone.

I have this config in k8s:

kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-https
  namespace: kube-system
data:
  traefik.toml: |
    # traefik.toml
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
      [entryPoints.http.redirect]
      entryPoint = "https"
      [entryPoints.https]
      address = ":443"
      [entryPoints.https.tls]
    [acme]
    email = "notmyrealemail@example.com"
    storage = "/etc/traefik/acme.json"
    entryPoint = "https"
    caServer = "https://acme-v02.api.letsencrypt.org/directory"
    [[acme.domains]]
    main = "*.notmyrealsite.com"
    sans = ["notmyrealsite.com"]
    [acme.dnsChallenge]
    provider = "cloudflare"

I'm passing the right CLOUDFLARE_API_KEY and CLOUDFLARE_EMAIL env vars in the upstream container, but I'm seeing this error in the console:

time="2018-06-13T09:47:39Z" level=error msg="Unable to obtain ACME certificate for domains \"*.notmyrealsite.com,notmyrealsite.com\" : cannot obtain certificates: acme: Error -> One or more domains had a problem:\n[notmyrealsite.com] acme: Error 403 - urn:ietf:params:acme:error:unauthorized - Incorrect TXT record \"HREckyrZXY7uCVLaUkoYzadxkHbwfFavNWS_v14yMzk\" found at _acme-challenge.notmyrealsite.com\n"

I'm not sure whether this means the CF login is successful and has been updated (but just with the wrong TXT record), or whether that's what it's expecting to see - and nothing is there.

Looking at the DNS entries in CF reveals no TXT records at all.

(I'm only on the free CF plan, so I don't get any raw logs to see what attempts were made against the DNS)

What could be causing the TXT mismatch?
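One way to check the challenge record by hand (dig is standard; replace the nameserver with one of the names your Cloudflare dashboard lists for the zone). Querying the authoritative servers directly bypasses any resolver caching, so this shows whether the TXT record ever actually lands in the zone:

dig +short TXT _acme-challenge.notmyrealsite.com @YOUR-ZONE.ns.cloudflare.com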

DNS Suffix Search list does not work when Group Policy applies the "DNS Suffix Search List"

Posted: 30 Oct 2021 04:04 PM PDT

I have a DNS suffix search list applied through Group Policy in an AD domain with a Windows 2012 server. When the DNS suffix search list is applied via Group Policy, the affected computers cannot ping a single-label hostname and have the suffix appended. As soon as the Group Policy is blocked (via block inheritance) and the same DNS suffix search list is entered manually on the network adapter under DNS -> "Append these DNS suffixes (in order)", it works - and that is the very place the GPO puts those suffixes.

In Linux it works great, and it works in Windows, but only when configured manually. Please help - I know this Group Policy setting is meant to accomplish exactly this.
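A sketch of checks from an affected client (the cmdlet and registry path are standard on Windows 8 / Server 2012 and later): compare the resolver's effective search list against what the GPO actually wrote.

Get-DnsClientGlobalSetting                # effective SuffixSearchList
Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient' |
    Select-Object SearchList              # what the GPO wrote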

optimizing my.cnf for my server - database using all RAM

Posted: 30 Oct 2021 07:01 PM PDT

I have a VPS with 12 GB RAM. Currently it hosts one WordPress website. The website gets about 10k unique visitors a day with about 30k views, according to Jetpack.

I am getting a lot of "error establishing a database connection" errors.

Here is my my.cnf file (I know I may have done something wrong - I copied the file from the internet and added it to my server):

[mysqld]
performance-schema=0

[client]
port=3306
socket="/var/lib/mysql/mysql.sock"

[mysqld]
performance-schema=0
innodb_additional_mem_pool_size=16M
innodb_buffer_pool_size=10G
innodb_file_per_table=1
innodb_log_buffer_size=4M
innodb_flush_log_at_trx_commit=2
log-bin=mysql-bin
myisam_sort_buffer_size=64M
expire_logs_days=7
query_cache_size=128M
thread_cache_size=12
max_allowed_packet=15M
skip-federated
table_definition_cache=2048
local-infile=0
table_open_cache=8192
max_connections=60
read_buffer_size=2M
slow_query_log=1
slow_query_log_file="/var/log/slow_queries.log"
thread_concurrency=16
sort_buffer_size=2M
port=3306
join_buffer_size=16M
key_buffer_size=600M
query_cache_limit=10M
socket="/var/lib/mysql/mysql.sock"
skip-external-locking
query-cache-type=1
long_query_time=5
default-storage-engine=InnoDB
tmp_table_size=384M
max_heap_table_size=384M

[myisamchk]
read_buffer=2M
key_buffer=256M
sort_buffer_size=256M

and this is the output of mysqltuner.pl:

[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.6.35-log
[OK] Operating on 64-bit architecture

-------- Log file Recommendations ----------------------------------------------
[--] Log file: /var/lib/mysql/host1.cloudserverpanel.com.err(1M)
[OK] Log file /var/lib/mysql/host1.cloudserverpanel.com.err exists
[OK] Log file /var/lib/mysql/host1.cloudserverpanel.com.err is readable.
[OK] Log file /var/lib/mysql/host1.cloudserverpanel.com.err is not empty
[OK] Log file /var/lib/mysql/host1.cloudserverpanel.com.err is smaller than 32 Mb
[!!] /var/lib/mysql/host1.cloudserverpanel.com.err contains 3340 warning(s).
[!!] /var/lib/mysql/host1.cloudserverpanel.com.err contains 22 error(s).
[--] 6 start(s) detected in /var/lib/mysql/host1.cloudserverpanel.com.err
[--] 1) 2017-05-21 20:34:30 16325 [Note] /usr/sbin/mysqld: ready for connections.
[--] 2) 2017-05-21 20:19:27 11180 [Note] /usr/sbin/mysqld: ready for connections.
[--] 3) 2017-05-21 20:03:04 7569 [Note] /usr/sbin/mysqld: ready for connections.
[--] 4) 2017-05-21 19:49:21 5230 [Note] /usr/sbin/mysqld: ready for connections.
[--] 5) 2017-05-20 06:10:06 713 [Note] /usr/sbin/mysqld: ready for connections.
[--] 6) 2017-05-19 19:37:13 5772 [Note] /usr/sbin/mysqld: ready for connections.
[--] 15 shutdown(s) detected in /var/lib/mysql/host1.cloudserverpanel.com.err
[--] 1) 2017-05-21 20:29:32 14760 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 2) 2017-05-21 20:28:23 13925 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 3) 2017-05-21 20:28:19 11180 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 4) 2017-05-21 20:14:38 9938 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 5) 2017-05-21 20:14:28 9693 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 6) 2017-05-21 20:10:17 8761 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 7) 2017-05-21 20:10:14 7569 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 8) 2017-05-21 20:03:03 5230 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 9) 2017-05-21 19:48:03 4757 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 10) 2017-05-21 19:47:54 4531 [Note] /usr/sbin/mysqld: Shutdown complete

-------- Storage Engine Statistics ----------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MEMORY +MRG_MYISAM +MyISAM +PERFORMANCE_SCHEMA
[--] Data in MyISAM tables: 229K (Tables: 6)
[--] Data in InnoDB tables: 138M (Tables: 94)
[OK] Total fragmented tables: 0

-------- Security Recommendations ------------------------------------------------
[OK] There are no anonymous accounts for any database users
[OK] All database users have passwords assigned
[!!] There is no basic password file list!

-------- CVE Security Recommendations --------------------------------------------
[--] Skipped due to --cvefile option undefined

-------- Performance Metrics -----------------------------------------------------
[--] Up for: 20h 22m 22s (3M q [41.069 qps], 61K conn, TX: 13G, RX: 515M)
[--] Reads / Writes: 93% / 7%
[--] Binary logging is enabled (GTID MODE: OFF)
[--] Physical Memory     : 11.6G
[--] Max MySQL memory    : 12.3G
[--] Other process memory: 1.4G
[--] Total buffers: 11.1G global + 20.5M per thread (60 max threads)
[--] P_S Max memory usage: 0B
[--] Galera GCache Max memory usage: 0B
[!!] Maximum reached memory usage: 12.3G (106.49% of installed RAM)
[!!] Maximum possible memory usage: 12.3G (106.31% of installed RAM)
[!!] Overall possible memory usage with other process exceeded memory
[OK] Slow queries: 0% (0/3M)
[!!] Highest connection usage: 100%  (61/60)
[!!] Aborted connections: 15.70%  (9695/61744)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[!!] Query cache may be disabled by default due to mutex contention.
[OK] Query cache efficiency: 88.9% (2M cached / 2M selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 10K sorts)
[OK] No joins without indexes
[!!] Temporary tables created on disk: 86% (8K on disk / 10K total)
[OK] Thread cache hit rate: 98% (951 created / 61K connections)
[OK] Table cache hit rate: 96% (186 open / 193 opened)
[OK] Open file limit used: 0% (61/65K)
[OK] Table locks acquired immediately: 99% (275K immediate / 275K locks)
[OK] Binlog cache memory access: 99.95% (17590 Memory / 17599 Total)

-------- Performance schema ------------------------------------------------------
[--] Performance schema is disabled.
[--] Memory used by P_S: 0B
[--] Sys schema isn't installed.

-------- ThreadPool Metrics ------------------------------------------------------
[--] ThreadPool stat is disabled.

-------- MyISAM Metrics ----------------------------------------------------------
[!!] Key buffer used: 18.1% (113M used / 629M cache)
[OK] Key buffer size / total MyISAM indexes: 600.0M/211.0K
[OK] Read Key buffer hit rate: 99.4% (28K cached / 163 reads)
[!!] Write Key buffer hit rate: 53.2% (8K cached / 4K writes)

-------- InnoDB Metrics ----------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 10.0G/138.5M
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (0.9375 %): 48.0M * 2/10.0G should be equal 25%
[!!] InnoDB buffer pool instances: 8
[--] InnoDB Buffer Pool Chunk Size not used or defined in your version
[OK] InnoDB Read buffer efficiency: 99.97% (14114607 hits/ 14119114 total)
[!!] InnoDB Write Log efficiency: 61.48% (39497 hits/ 64247 total)
[OK] InnoDB log waits: 0.00% (0 waits / 24750 writes)

-------- AriaDB Metrics ----------------------------------------------------------
[--] AriaDB is disabled.

-------- TokuDB Metrics ----------------------------------------------------------
[--] TokuDB is disabled.

-------- XtraDB Metrics ----------------------------------------------------------
[--] XtraDB is disabled.

-------- RocksDB Metrics ---------------------------------------------------------
[--] RocksDB is disabled.

-------- Spider Metrics ----------------------------------------------------------
[--] Spider is disabled.

-------- Connect Metrics ---------------------------------------------------------
[--] Connect is disabled.

-------- Galera Metrics ----------------------------------------------------------
[--] Galera is disabled.

-------- Replication Metrics -----------------------------------------------------
[--] Galera Synchronous replication: NO
[--] No replication slave(s) for this server.
[--] This is a standalone server.

-------- Recommendations ---------------------------------------------------------
General recommendations:
    Control warning line(s) into /var/lib/mysql/host1.cloudserverpanel.com.err file
    Control error line(s) into /var/lib/mysql/host1.cloudserverpanel.com.err file
    MySQL started within last 24 hours - recommendations may be inaccurate
    Reduce your overall MySQL memory footprint for system stability
    Dedicate this server to your database for highest performance.
    Reduce or eliminate persistent connections to reduce connection usage
    Reduce or eliminate unclosed connections and network issues
    Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
    Temporary table size is already large - reduce result set size
    Reduce your SELECT DISTINCT queries without LIMIT clauses
    Performance should be activated for better diagnostics
    Consider installing Sys schema from https://github.com/mysql/mysql-sys
Variables to adjust:
  *** MySQL's maximum memory usage is dangerously high ***
  *** Add RAM before increasing MySQL buffer variables ***
    max_connections (> 60)
    wait_timeout (< 28800)
    interactive_timeout (< 28800)
    query_cache_size (=0)
    query_cache_type (=0)
    performance_schema = ON enable PFS
    innodb_log_file_size * innodb_log_files_in_group should be equal to 1/4 of buffer pool size (=5G) if possible.
    innodb_buffer_pool_instances(=10)

Ansible can't git clone from enterprise git server

Posted: 30 Oct 2021 03:02 PM PDT

Hi, I have an enterprise Git server where I created a private test-repo and added an SSH key via the deploy-key form. I defined a git role in my common roles, which has the YAML definition below.

---
- name: github enterprise private key
  copy: >
    src=id_rsa_ghe
    dest=/etc/id_rsa_ghe
    owner=root
    group=root
    mode=0600

- name: clone test-repo project
  git:
    repo: git@git.example-private.com:code/test-repo.git
    dest: /etc/test-repo
    accept_hostkey: true
    key_file: /etc/id_rsa_ghe

In roles/common/git I defined a files folder where I put my private key for the git clone; however, I am still getting the error below:

fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["/usr/bin/git", "fetch", "--tags", "origin"], "failed": true, "msg": "Failed to download remote objects and refs: ERROR: Repository not found.\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n"}

Below are my system details. I am running this playbook locally on one of my servers.

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.5 LTS
Release:        14.04
Codename:       trusty

$ ansible --version
ansible 2.2.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

Below is the verbose output of the actual "Repository not found" error:

Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/source_control/git.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "`echo ~/.ansible/tmp/ansible-tmp-1487398723.48-100968102221507`" && echo ansible-tmp-1487398723.48-100968102221507="`echo ~/.ansible/tmp/ansible-tmp-1487398723.48-100968102221507`" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp2Bijvu TO /home/ubuntu/.ansible/tmp/ansible-tmp-1487398723.48-100968102221507/git.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1487398723.48-100968102221507/ /home/ubuntu/.ansible/tmp/ansible-tmp-1487398723.48-100968102221507/git.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1487398723.48-100968102221507/git.py; rm -rf "/home/ubuntu/.ansible/tmp/ansible-tmp-1487398723.48-100968102221507/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "cmd": ["/usr/bin/git", "fetch", "--tags", "origin"],
    "failed": true,
    "invocation": {
        "module_args": {
            "accept_hostkey": true,
            "bare": false,
            "clone": true,
            "depth": null,
            "dest": "/etc/dotfiles",
            "executable": null,
            "force": false,
            "key_file": "/etc/id_rsa_ghe",
            "recursive": true,
            "reference": null,
            "refspec": null,
            "remote": "origin",
            "repo": "git@git.example-private.com:code/test-repo.git",
            "ssh_opts": null,
            "track_submodules": false,
            "umask": null,
            "update": true,
            "verify_commit": false,
            "version": "HEAD"
        },
        "module_name": "git"
    },
    "msg": "Failed to download remote objects and refs: ERROR: Repository not found.\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n"
}
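A sketch of how the deploy key can be tested outside Ansible (the host and repo are the ones from the play; GIT_SSH_COMMAND needs git >= 2.3, newer than trusty's default). If the ls-remote also reports "Repository not found", the key authenticates but is not attached to that particular repository as a deploy key:

ssh -i /etc/id_rsa_ghe -T git@git.example-private.com
GIT_SSH_COMMAND="ssh -i /etc/id_rsa_ghe" git ls-remote git@git.example-private.com:code/test-repo.git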

Proper way to override Mysql my.cnf on CentOS/RHEL?

Posted: 30 Oct 2021 06:03 PM PDT

Context: I'm porting an opensource server software (and writing associated documentation) from Debian/Ubuntu to CentOS/RHEL.

For the software to run correctly, I need to add a dozen specific parameters to the MySQL configuration (for example, increasing max_allowed_packet).

From a Debian point of view, I know I can override MySQL's my.cnf by adding a file to /etc/mysql.d, say /etc/mysql.d/my-software.cnf.

My question is: how do I do the same correctly on CentOS/RHEL?

Other infos:

  • I know where mysqld looks for its configuration files thanks to https://dev.mysql.com/doc/refman/5.7/en/option-files.html. But, for CentOS, I don't understand:
    • how NOT to directly edit /etc/my.cnf (which may not be package-update-proof)
    • where to add my specific MySQL parameters
  • Reading the CentOS MySQL init script (/etc/init.d/mysql), I've seen that /etc/sysconfig/mysqld is sourced, but I don't see how to add configuration parameters there.
  • I've searched for combinations of override / my.cnf / centos on ServerFault, StackOverflow and also DBA.StackExchange, but found nothing relevant.
  • I make all the tests within a "centos:6" Docker container
  • the software is Asqatasun https://github.com/Asqatasun/Asqatasun
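A sketch of the drop-in pattern on the RHEL side (this relies on /etc/my.cnf containing an !includedir line, which recent stock CentOS/RHEL MySQL and MariaDB packages ship; verify with the grep first, since older packages such as CentOS 6's may lack it):

grep includedir /etc/my.cnf
cat > /etc/my.cnf.d/my-software.cnf <<'EOF'
[mysqld]
max_allowed_packet = 64M
EOF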

How can a Windows user change the initial password from the command line in a remote domain?

Posted: 30 Oct 2021 04:04 PM PDT

We have Windows 7 desktops and a Windows Server 2012R2 server. I have a user who needs to map a network drive which is on a server in a different AD domain from ours (over the WAN). I have created an account for him in AD over there, and I set it to "User must change password at next logon". How can he map the network drive?

Mapping is easy to do, ostensibly... But when he attempts to do so, Windows gives an error that he must change his password, yet it does not provide a prompt to do so.

I have no desktops in the remote domain that he can log into. Is there a way to set the password remotely? I have checked https://serverfault.com/questions/570476/how-can-a-standard-windows-user-change-their-password-from-the-command-line but I don't think the techniques given work across two separate domains. Furthermore, I'm not a PowerShell user :-( (I can answer your Bash questions, though! :-))
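For what it's worth, a PowerShell sketch (requires the ActiveDirectory module; the server and user names are placeholders). With -OldPassword it performs a user-initiated change rather than an admin reset, and -Server points the cmdlet at a DC in the remote domain:

Set-ADAccountPassword -Identity someuser -Server dc01.remote.example.com `
    -OldPassword (Read-Host 'Old password' -AsSecureString) `
    -NewPassword (Read-Host 'New password' -AsSecureString)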

Thanks.

If you can't change the RDS endpoint of an AWS Beanstalk instance, how do you do a blue/green deployment?

Posted: 30 Oct 2021 09:03 PM PDT

From what I can tell, one can't change the Amazon RDS endpoint of an existing Elastic Beanstalk (EB) instance?

If that is the case, then you can't have your code deployed to a stage server with a stage DB, tested, and then promoted to use the prod DB?

So how do you deploy to stage without having to test against the prod DB?

Given prod and stage, I thought the strategy would be something like this:

  • Snapshot prod RDS
  • Create stage with new code and point it at the snapshot
  • QA stage
  • Point stage to prod RDS
  • Change load balancer to send traffic to stage

Office 2013 Slow to Open/Save with Folder Redirection

Posted: 30 Oct 2021 10:03 PM PDT

We recently deployed folder redirection for a few individuals in the office. We are using a DFS Namespace share on a Server 2012r2 VM. We are redirecting Desktop and My Documents only. Clients are running 8.1 and 7.

When using Word/Excel 2013, there is a popup that says "trying to connect to: \\DFSNAME\userfolder" and it stays there for 1-5 minutes before the browse window opens. This also occurs when trying to attach a file to an email in Outlook. There are no delays if the file is double-clicked on the desktop.

We've tried the following solutions (which seemed to describe our problem perfectly, aside from the version):

The only thing that is different about this deployment of Folder Redirection is permissions. Instead of following the standard checkbox of exclusive access, we used this ancient guide from Microsoft: http://support.microsoft.com/kb/288991/. Could our permissions be causing these weird issues?

Using Webdriver with Chrome — missing Shared Libraries

Posted: 30 Oct 2021 11:00 PM PDT

I am trying to run webdriver, but I keep getting the following error:

[ec2-user@ip-172-30-0-41 ~]$ sudo ./chromedriver
./chromedriver: error while loading shared libraries: libgconf-2.so.4: cannot open shared object file: No such file or directory

Is there a way to yum-install these missing dependencies? Or what seems to be the issue here? This is using the Amazon Linux AMI 2014.09.1 (HVM) distribution.
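A sketch of the usual fix (on RHEL-family systems, libgconf-2.so.4 is provided by the GConf2 package; the yum provides query confirms which package owns the library before installing):

yum provides '*/libgconf-2.so.4'
sudo yum install GConf2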

Windows 8.1 keeps prompting for Network Share Credentials after every log on or restart

Posted: 30 Oct 2021 07:01 PM PDT

I have a network drive shared in a workgroup with 3 clients. Two clients with Windows 7 have persistent connections to the share. No issues with those two.

My Windows 8.1 client keeps prompting for credentials at every restart / log on. I've spent hours looking around for a solution:

  • I have stored credentials in Credential Manager, and tried every possible combination (WORKGROUP\user, COMPUTERNAME\user, user, and so on).
  • I have changed NT and NTLM negotiation in the policy manager.
  • I've compared the settings under GPO network security with a working Win 7 computer; everything is pretty much the same.
  • I've captured the SMB negotiation with Wireshark. Honestly, I see the messages flowing around and the share sending AUTH DENIED, which means something about how the 8.1 client formats the request makes the share reject it. I still don't really know why.

Any ideas would be appreciated.

Trouble with port 80 nating (XenServer to WebServer VM)

Posted: 30 Oct 2021 08:07 PM PDT

I have a rented server running XenServer 6.2. I only have one public IP, so I did some NAT to redirect ports 22 and 80 to my WebServer VM. I have a problem with the port 80 redirection.

When I use this redirection, I can reach the WebServer's Apache, but the server itself loses web access.

I get this kind of error:

W: Failed to fetch http://http.debian.net/debian/dists/wheezy/main/source/Sources  404  Not Found [IP: 46.4.205.44 80]  

but I can ping anywhere.

XenserverIP:80 redirected to 10.0.0.2:80 (WebServer).

This is the port 80 redirection part of my XenServer iptables:

-A PREROUTING -i xenbr1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80
-A INPUT -i xenbr1 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
COMMIT

What is wrong in my configuration? Is there a problem with XenServer?

Thanks for your help!

Edit: here is my full iptables content:

*nat
:PREROUTING ACCEPT [51:4060]
:POSTROUTING ACCEPT [9:588]
:OUTPUT ACCEPT [9:588]
-A PREROUTING -p tcp -m tcp --dport 1234 -j DNAT --to-destination 10.0.0.2:22
-A PREROUTING -i xenbr1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80
-A POSTROUTING -s 10.0.0.0/255.255.255.0 -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [5434:4284996]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5014:6004729]
-A INPUT -i xenbr1 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
COMMIT

Update:

I have a second server with 10.0.0.3 as its IP, and it has the same problem as 10.0.0.2.

I think I found the beginning of an explanation:

Apache listens on 10.0.0.2:80. Since I have the NAT forwarding rule on my XenServer, all traffic crossing the bridge with destination port 80 - including the VMs' own outgoing web requests - is routed to 10.0.0.2:80.

That is why I have the same problem on my second VM: when I run apt-get update, it makes requests to websites on port 80, so those requests are routed to Apache instead.

Can anybody help me solve this issue? (It's problematic that I can't access websites from my internal LAN while my Apache server is running ^^)
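A sketch of the kind of fix this analysis suggests (PUBLIC_IP is a placeholder for the XenServer's public address): match only traffic actually addressed to the host, so the VMs' outgoing requests to other web servers on port 80 no longer hit the DNAT rule.

-A PREROUTING -d PUBLIC_IP -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80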

Apache2 virtual host redirection issue on Chrome

Posted: 30 Oct 2021 06:03 PM PDT

I am having an extremely bizarre issue that seems to be present only in Chrome; IE and Firefox are fine. I have two websites being served by one IP address, and two identical files in sites-available named site1.com and site2.com. I ran the a2ensite command to create the links in sites-enabled.

All redirections are working perfectly, except for site1.com using Chrome.

On Chrome, if I type www.site1.com it takes me to the right folder, /var/www/site1.com; if I type http://site1.com it takes me to the wrong folder, /var/www.

Now this is where it gets bizarre: when I type www.site2.com it takes me to /var/www/site2.com, and when I type http://site2.com it correctly takes me to /var/www/site2.com as well.

What I don't get is that the virtual host files are identical, bar the actual ServerName, ServerAlias, and log locations.

Site1

<VirtualHost *:80>
        ServerAdmin webmaster@site1.com
        ServerName site1.com
        ServerAlias www.site1.com
        DocumentRoot /var/www/site1.com

        #<Directory />
        #        Options FollowSymLinks
        #        AllowOverride None
        #</Directory>
        #<Directory /var/www/site1.com>
        #        Options Indexes FollowSymLinks MultiViews
        #        AllowOverride None
        #        Order allow,deny
        #        allow from all
        #</Directory>

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        ErrorLog /var/www-logs/site1.com/error.log
        CustomLog /var/www-logs/site1.com/access.log combined
</VirtualHost>

Site2

<VirtualHost *:80>
        ServerAdmin webmaster@site2.com
        ServerName site2.com
        ServerAlias www.site2.com
        DocumentRoot /var/www/site2.com

        #<Directory />
        #        Options FollowSymLinks
        #        AllowOverride None
        #</Directory>
        #<Directory /var/www/site2.com>
        #        Options Indexes FollowSymLinks MultiViews
        #        AllowOverride None
        #        Order allow,deny
        #        allow from all
        #</Directory>

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        ErrorLog /var/www-logs/site2.com/error.log
        CustomLog /var/www-logs/site2.com/access.log combined
</VirtualHost>
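A browser-independent test sketch (curl is standard; SERVER_IP is a placeholder): if both requests land in the right DocumentRoot, the vhosts are fine and the difference is client-side - Chrome caches 301 redirects aggressively, so clearing its cache is worth trying.

curl -sI http://site1.com/
curl -sI -H 'Host: site1.com' http://SERVER_IP/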

Setting variable depending on NAS-IP-Address in Freeradius

Posted: 30 Oct 2021 05:06 PM PDT

The setup

We currently have a FreeRADIUS server used to authenticate our Wifi users against our Active Directory server. The link between FreeRADIUS and Active Directory is done by Winbind.

In order to obtain authorization, the user needs to belong to a group in Active Directory. This is enforced by adding an argument to the ntlm_auth command.

What we are trying to achieve

We are now adding 802.1X to our cabled networks and would like to re-use the existing Radius server to authenticate against the same Active Directory.

Everything will be the same, except that authorization will need to be based on membership in a different AD group than the one used for the Wifi networks.

What we have already tried

I have read a lot of the FreeRADIUS documentation and have seen that it is possible to use conditionals and variables. My plan therefore was to put a variable in the ntlm_auth command that would contain the group SID (as suggested on the FreeRADIUS mailing lists). The group SID would depend on the IP of the network device, which should be contained in NAS-IP-Address.

This should just be a case of writing a simple conditional statement and setting a variable. Nonetheless, I have not been able to do this: FreeRADIUS fails to start every time I try to add a conditional to the configuration files.

So my questions are:

  • How do I set a variable as a function of the NAS-IP-Address?

  • In which files can such syntax be used?
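A minimal unlang sketch of the kind of conditional involved (FreeRADIUS 3 syntax; the IP and group SIDs are placeholders), typically placed in the authorize section of sites-enabled/default. Tmp-String-0 is a scratch attribute that ntlm_auth can then reference as %{control:Tmp-String-0} in its --require-membership-of argument:

authorize {
    if (&NAS-IP-Address == 192.0.2.10) {
        update control {
            &Tmp-String-0 := "S-1-5-21-111111111-222222222-333333333-1105"
        }
    }
    else {
        update control {
            &Tmp-String-0 := "S-1-5-21-111111111-222222222-333333333-1106"
        }
    }
    # ... rest of the authorize section ...
}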

getpwnam("www") failed in /etc/nginx/nginx.conf

Posted: 30 Oct 2021 03:13 PM PDT

I copied the nginx.conf sample onto my Ubuntu 12.04 box (I don't know where to put the other conf files; I'm an nginx noob). When I try to start nginx I get the following error:

abe-lens-laptop@abe:/etc$ sudo service nginx start
Starting nginx: nginx: [emerg] getpwnam("www") failed in /etc/nginx/nginx.conf:1
nginx: configuration file /etc/nginx/nginx.conf test failed

What does this error mean? How can I fix it? I found this post, but my user is already set to www www (as you can see in the linked file): How do I change the NGINX user?
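The error itself is specific: getpwnam("www") fails because no user named "www" exists on the box. A sketch of the two obvious fixes (the adduser flags are standard Debian/Ubuntu; pick one):

# either create the user the sample config expects...
sudo adduser --system --no-create-home --group www
# ...or edit the first line of /etc/nginx/nginx.conf to use Ubuntu's
# stock web user:
# user www-data;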

nginx php5-fpm path_info urls and root location

Posted: 30 Oct 2021 05:06 PM PDT

Hello to all nginx & php gurus

I'm installing dotclear (blogging software written in PHP) on my Debian box, and I'm having a hard time configuring nginx, php5-fpm and PHP so that:

  1. I can use PATH_INFO URL rewriting, since I'm following Tim Berners-Lee's advice that URLs shouldn't expose what particular technology you use right now: http://www.w3.org/Provider/Style/URI.html
  2. static files are not parsed by PHP, since it's terribly insecure to let example.org/uploads/image.jpg/index.php be sent to PHP
  3. I have a root location that just works: example.com should be rewritten to something like example.com/index.php?start

It seems that so far I can only get two out of the three; that's why I'm asking for help here.

So here is my current /etc/nginx/nginx.conf

server {
    server_name articles.eloge-de-la-folie.fr;
    root /srv/data1/articles.eloge-de-la-folie.fr;

    index index.php?start;

    location / {
        try_files $uri $uri/ @pathinfo;
        #try_files $uri $uri/ /index.php$uri?$args;
    }

    # Pretty URLs in dotclear
    # activate PATH_INFO urls in /admin/blog_pref.php
    location @pathinfo {
        rewrite ^ /index.php$uri?$args last;
    }

    location = / {
        rewrite ^ /index.php?start last;
    }

    location ~ ^(.+\.php)(/.*)?$ {
        include fastcgi_params_pathinfo;
    }
}

I put everything fastcgi related in a separate /etc/fastcgi_params_pathinfo config file

fastcgi_param   QUERY_STRING        $query_string;
fastcgi_param   REQUEST_METHOD      $request_method;
fastcgi_param   CONTENT_TYPE        $content_type;
fastcgi_param   CONTENT_LENGTH      $content_length;

#fastcgi_param  SCRIPT_FILENAME     $request_filename;
fastcgi_param   SCRIPT_NAME         $fastcgi_script_name;
fastcgi_param   PATH_INFO           $fastcgi_path_info;
fastcgi_param   REQUEST_URI         $request_uri;
fastcgi_param   DOCUMENT_URI        $document_uri;
fastcgi_param   DOCUMENT_ROOT       $document_root;
fastcgi_param   SERVER_PROTOCOL     $server_protocol;

fastcgi_param   GATEWAY_INTERFACE   CGI/1.1;
fastcgi_param   SERVER_SOFTWARE     nginx/$nginx_version;

fastcgi_param   REMOTE_ADDR         $remote_addr;
fastcgi_param   REMOTE_PORT         $remote_port;
fastcgi_param   SERVER_ADDR         $server_addr;
fastcgi_param   SERVER_PORT         $server_port;
fastcgi_param   SERVER_NAME         $server_name;

fastcgi_param   HTTPS               $https;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param   REDIRECT_STATUS     200;

# this is what I changed
fastcgi_pass    127.0.0.1:9000;
fastcgi_index   index.php;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

Also in /etc/php5/fpm/pool.d/www.conf, I made sure to uncomment this

security.limit_extensions = .php

What happens currently?

  • example.com/index.php and example.com/post/test are passed to the PHP interpreter and work
  • example.com/css/style.css is not passed to PHP and works
  • but when I go to example.com, index.php is just downloaded, not interpreted

My location = / { configuration here } is apparently never matched :(

Thanks in advance,

Jean-Michel

IIS bandwidth Monitoring

Posted: 30 Oct 2021 08:07 PM PDT

I have a public MS CRM 2011 install, and one of my remote users reported using about 10 GB of data from their Outlook client.

Is it possible to see, in real time, the connected users in IIS and how much data they are consuming? (It's a dedicated server; no other users on it.)

I don't have access to the external firewall, so all monitoring would have to be done on the local IIS server. I think Perfmon can do this, but I wanted to see if there were any other ways.
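Not real time, but a per-user sketch using Microsoft's free Log Parser 2.2 against the IIS W3C logs (the log path is a placeholder, and sc-bytes/cs-bytes logging must be enabled in IIS first):

logparser -i:W3C "SELECT cs-username, SUM(sc-bytes) AS BytesSent, SUM(cs-bytes) AS BytesReceived FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log GROUP BY cs-username"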

Why might `ls --color=always` be slow for a small directory?

Posted: 30 Oct 2021 06:51 PM PDT

For a certain directory DIR on my system, ls --color=always takes about 8 seconds, although the directory contains fewer than 10 files and subdirectories. Without the color argument it takes no time at all.

Why would ls take so long with the color argument, and how can I find out what exactly is taking so long? It is probably some subdirectory in DIR that is mounted, but how can I find out which one is the troublemaker?
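One way to find the troublemaker (strace is standard on Linux; DIR is the directory in question): --color makes ls stat()/lstat() every entry to pick a color, so a hung network mount shows up as a single syscall that stalls, and the -tt timestamps show exactly where the 8 seconds go.

strace -tt ls --color=always DIR 2>&1 | less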

Difference between SQLEXPRESS and MSSQLSERVER services

Posted: 30 Oct 2021 11:00 PM PDT

I have two SQL Server services in SQL Server Configuration Manager: SQLEXPRESS and MSSQLSERVER. I have no idea what the differences are. I think SQLEXPRESS is the free version, but I don't know how I got it, and I can't remove it either because it doesn't show up in Add/Remove Programs.

But here's where it gets weird: I installed SQL Server Enterprise, and during installation I specified a local user (SQLServices) to be used for all SQL Server services. This worked for SQL Server Analysis Services (MSSQLSERVER) and SQL Server Integration Services 10.0 (MSSQLSERVER); they are running under this user. But SQL Server (MSSQLSERVER) does NOT run, and gives an error that it can't connect / times out, etc., while SQL Server (SQLEXPRESS) runs, but under NT AUTHORITY\NETWORK SERVICE. I stopped the latter and tried to run SQL Server (MSSQLSERVER), but it keeps timing out on me. What's going on?
