Friday, March 19, 2021

Recent Questions - Server Fault

Client PC (Win7/Win8) cannot ping to VMware server (CentOS 8), but server can ping client

Posted: 19 Mar 2021 10:35 PM PDT

First of all, I'm not a networking technician, so my networking lingo is very limited.

I'm trying to help a coworker troubleshoot a problem. So far I have relied on search engines to find English-language solutions, and I haven't found any for this one.

We're trying to set up (I quote him) a "web server httpd" using CentOS 8 running in VMware (the host machine runs Windows 10, not a server edition). Currently, the CentOS guest is running this command:

php artisan serve --host 192.168.xx.xx --port 8081

After that, we tried to open that IP and port in Firefox inside the CentOS VM, and it opens the intended page. The same is true when I open it on the host OS.

The problem is that we can't open the same IP on the other PCs (running Win7 and Win10) that are connected to the same Ethernet network. Pinging the IP from cmd always gives a request timeout, let alone opening the address in a web browser.

What we're trying to achieve is for the other/client PCs to be able to open the VM's IP address and thus display the web server. What we've tried so far is listed below; none of it has worked:

  • Turning off the server's PC firewall
  • Stopping the server's VMWare firewalld
  • Changing the VMware network setting from NAT to Bridged
  • Turning off client PC firewall.
  • Changing the IP address of the CentOS VMware to static.

What do we need to do to achieve this?
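For reference, two common culprits in this kind of setup are the app binding to a single address and firewalld on the guest. A sketch of checks, assuming bridged networking and default firewalld:

```shell
# On the CentOS 8 guest. Bind the app to all interfaces, not one address:
php artisan serve --host 0.0.0.0 --port 8081

# Open the port in firewalld (as root), then verify the listener:
firewall-cmd --permanent --add-port=8081/tcp
firewall-cmd --reload
ss -tlnp | grep 8081    # should show a listener on 0.0.0.0:8081
```

Note that ping timing out points at layer 3 (in VMware, a bridged adapter attached to the wrong host NIC is a frequent cause), not at the web server or port 8081 itself.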

ubuntu network setup in rescue mode

Posted: 19 Mar 2021 10:17 PM PDT

Sorry for the pictures in the post; it is impossible to copy text from the KVM console.

I'm trying to set up the network from the rescue-mode shell (no firewall or other software):

ip 192.168.250.110/24  gw 192.168.250.2  

clear routes and interface settings

add ip and up interface

ip addr add 192.168.250.110/24 dev eth0
ip link set dev eth0 up

interfaces after UP

check routes and ping external ip

got error "Network is unreachable"; add default route

ip route add default via 192.168.250.2  

check routes and ping

got error "No response". But pinging another VM on this host, 192.168.250.101, works fine.


Where am I going wrong? Thanks.
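Since a peer VM answers but the gateway does not, checking whether the gateway ever replies to ARP narrows this down. A diagnostic sketch using the addresses from the question:

```shell
# Does the gateway resolve at layer 2? FAILED/INCOMPLETE means no ARP reply.
ip neigh show 192.168.250.2

# Watch ARP traffic while pinging the gateway from another terminal:
tcpdump -n -i eth0 arp
```

If ARP for 192.168.250.2 never resolves, the problem is outside this host: wrong gateway address, VLAN tagging on the uplink, or an upstream MAC filter.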

Delegating domain in .ru to he.net nameservers

Posted: 19 Mar 2021 09:46 PM PDT

I'm trying to delegate my domain progopedia.ru to use he.net name servers.

I made the changes via the registrar's web site, and whois has listed the he.net nameservers for about 10 days now.

Still, he.net shows me this error when I check the delegation:

ERROR: Delegation was not found. Please delegate to ns1, ns2, ns3, ns4 and ns5.he.net then retry. We found ns2.hc.ru, ns1.hc.ru during our search.

Am I missing something?

(I suspect the issue might be specific to .ru domains, because I've done the same for several .com domains without any issues.)

How to use scp when cygwin is used as default shell

Posted: 19 Mar 2021 08:40 PM PDT

I have installed openssh-server from Optional Features on a Windows machine. I can ssh and scp (I need upload only) from my Linux box. However, I don't like the cmd shell, so I installed Cygwin and set it as the default shell for openssh-server (configure shell for openssh). Now ssh works but scp doesn't (again from my Linux box): scp hangs until I break it with Ctrl+C, and no file is transferred. When I run it with the -v option, it hangs at debug1: Sending command: scp -v -r -t /tmp/

This behavior is 100% reproducible; switching the openssh-server shell between cmd and Cygwin.bat produces the described results every time.

I tried to use Cygwin64/bin/bash.exe instead of Cygwin64/Cygwin.bat, it didn't solve scp problem. I remember reading somewhere: the shell shouldn't echo anything or it won't work with scp, I checked ~/.bashrc and ~/.profile and didn't see anything printing out.

I tried set TERM=linux in Cygwin.bat but it didn't help.

I tried to find a way to distinguish scp from ssh in Cygwin.bat but couldn't find a way (my goal was to not start Cygwin64/bin/bash.exe for scp).

Thanks for reading.
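One thing worth ruling out, as the question already hints, is startup output: the scp protocol breaks if the remote shell prints anything at all. A guard at the top of ~/.bashrc (a common idiom, not specific to Cygwin) keeps non-interactive sessions silent:

```shell
# Put this at the very top of ~/.bashrc: bail out for non-interactive
# shells (which is what scp uses) before anything can write to stdout.
case $- in
    *i*) ;;         # interactive shell: continue with the rest of the file
    *) return ;;    # non-interactive (scp/sftp/ssh command): stop here
esac
```

This only helps if something later in the startup files produces output; if they are already silent, the shell wrapper itself is the next suspect.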

Brother L2740DW printer hangs on Arch Linux

Posted: 19 Mar 2021 06:03 PM PDT

Using Arch Linux, updated today, I'm trying to print to a USB-connected Brother MFC-L2740DW.

When I try to print something the printer lights up, shows "Receiving Data" for about 10 seconds, and then goes back to the home screen. Nothing is printed.

Arch is current as of today. I installed https://aur.archlinux.org/brother-mfc-l2740dw.git.

The printer does work when connected to a Ubuntu box.

# systemctl restart cups.service
# seq 1 10 | lp

Printer lights up, shows Receiving Data, then goes back to home screen.

/var/log/cups/error_log:
(nothing)

/var/log/cups/access_log:
localhost - - [19/Mar/2021:18:46:24 -0600] "POST /printers/brother HTTP/1.1" 200 325 Create-Job successful-ok
localhost - - [19/Mar/2021:18:46:24 -0600] "POST /printers/brother HTTP/1.1" 200 255 Send-Document successful-ok

/var/log/cups/page_log:
brother root 9 [19/Mar/2021:18:46:25 -0600] total 0 - localhost (stdin) - -

# lpq
brother is ready
no entries

# lpstat -t
scheduler is running
system default destination: brother
device for brother: usb://Brother/MFC-L2740DW%20series?serial=U63889B7N673848
brother accepting requests since Fri 19 Mar 2021 06:46:25 PM MDT
printer brother is idle.  enabled since Fri 19 Mar 2021 06:46:25 PM MDT

Why do Calico rules stop `netfilter-persistent` from starting at boot time?

Posted: 19 Mar 2021 06:00 PM PDT

I run a Kubernetes cluster on Ubuntu 18.04.5 nodes.

When I reboot my nodes, netfilter-persistent fails to load because of some bogus Calico rules. As a result, no iptables rules are loaded at boot time, which is very bad.

I can determine where it's failing thanks to other answers like "Why can't load iptables rule with netfilter-persistent?":

# /usr/share/netfilter-persistent/plugins.d/15-ip4tables start; echo $?
2
# /sbin/iptables-restore < /etc/iptables/rules.v4
iptables-restore v1.6.1: Set cali40s:ciOABCDEFG0pXH doesn't exist.

Error occurred at line: 55
Try `iptables-restore -h' or 'iptables-restore --help' for more information.

# iptables-save
# Generated by iptables-save v1.6.1 on Fri Mar 19 17:03:24 2021
*mangle
:PREROUTING ACCEPT [701:106208]
:INPUT ACCEPT [700:106148]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2826:423410]
:POSTROUTING ACCEPT [2826:423410]
COMMIT
# Completed on Fri Mar 19 17:03:24 2021

My question is, why is Calico leaving bogus rules behind? I know that I can remove these rules manually, but Calico creates hundreds or thousands of rules, so managing them by hand is tedious. How can I prevent netfilter-persistent from failing at boot time because of these rules?
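One workaround sketch (not Calico's own cleanup): filter Calico- and kube-proxy-managed rules out of the ruleset before persisting it, so that netfilter-persistent only restores rules whose ipsets and chains exist at boot. Calico recreates its own rules when it starts anyway.

```shell
# Drop any iptables-save line referencing a cali* chain/ipset or a
# KUBE-* chain; everything else passes through unchanged.
strip_k8s_rules() {
    grep -Ev 'cali|KUBE'
}
```

Typical use would be something like `iptables-save | strip_k8s_rules > /etc/iptables/rules.v4` before a reboot, so only your own static rules get persisted.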

How do you enable SSL in Jetty 9+ so that contexts can be accessed by https://localhost:8443/HelloWorld?

Posted: 19 Mar 2021 04:31 PM PDT

I am successfully able to follow the Jetty 9.4 install/run instructions here, and I can access http://localhost:8080/HelloWorld on my CentOS 7 machine.

How do I configure Jetty 9.4 so that SSL is enabled (i.e. the end goal is to be able to access the same HelloWorld application via https://localhost:8443/HelloWorld)?

Is there, by any chance, a "pre-configured" Jetty that I can download that already does this with all the keystores/truststores/self-signed certificates all bundled up?

Is there perhaps a youtube tutorial that shows how a HelloWorld app can be converted to using SSL?
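There is no official pre-configured bundle as far as I know, but Jetty 9.4's module system gets close to one. A sketch, assuming the jetty.base directory created by the linked instructions and Jetty's bundled demo keystore:

```shell
# From the jetty.base directory, enable the SSL/HTTPS modules; this
# appends their configuration to start.ini:
cd $JETTY_BASE
java -jar $JETTY_HOME/start.jar --add-to-start=ssl,https

# start.ini then gains properties like these, which you can edit:
#   jetty.ssl.port=8443
#   jetty.sslContext.keyStorePath=etc/keystore
#   jetty.sslContext.keyStorePassword=...
```

The default keystore is a self-signed demo certificate, so browsers will warn; replacing it with your own via keytool is a separate step.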

apache Defining worker for balancer error

Posted: 19 Mar 2021 04:26 PM PDT

I am trying to run

<VirtualHost *:443>
    LogLevel debug
    <IfModule mod_proxy.c>
        ProxyPreserveHost On
        <Proxy balancer://test0102>
            BalancerMember https://server1.domain.com route=1
            BalancerMember https://server1.domain.com route=2
            ProxySet lbmethod=bybusyness stickysession=ROUTEID timeout=1200
        </Proxy>
        ProxyPass / balancer://test0102/ maxattempts=1000 failonstatus=503
        ProxyPassReverse / "balancer://test0102"
        ProxyTimeout 1200
        ProxyStatus On
    </IfModule>

    ServerName test.domain.com:443
    SSLEngine on
    SSLProxyEngine on
    ProxyPreserveHost on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
....
</VirtualHost>

The Apache configuration works on one RHEL 7.6 server, but it fails on another RHEL 7.6 server. When I check the syntax, I get:

[proxy:debug] [pid ####] mod_proxy.c(2090): AH01147: Defining worker 'https://url' for balancer 'balancer://test0102'
[proxy:debug] [pid ####] mod_proxy.c(2095): AH01148: Defined worker 'https://url' for balancer 'balancer://test0102'

I am using Apache 2.4.6.

I tried comparing the enabled modules, and the configurations look similar. Any ideas on where to look further? Thanks.

Ipv6 on vlan, on linux bridge

Posted: 19 Mar 2021 03:59 PM PDT

I'm configuring IPv6 for the first time on a dedicated server and cannot make it work.

System: Ubuntu 20.04

Networking: netplan

I was told that on the switch port my server is connected to, 802.1Q tagging is enabled for IPv6, with VLAN ID 1679.

My configuration is below:

lukasz.zaroda@hypervisor-1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master lxdbr0 state UP group default qlen 1000
    link/ether ac:1f:6b:95:da:b0 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ac:1f:6b:95:da:b1 brd ff:ff:ff:ff:ff:ff
4: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:95:da:b0 brd ff:ff:ff:ff:ff:ff
    inet <redacted>/25 brd <redacted> scope global lxdbr0
       valid_lft forever preferred_lft forever
5: lxdbr0.1679@lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:95:da:b0 brd ff:ff:ff:ff:ff:ff
    inet6 <redacted>::2/64 scope global
       valid_lft forever preferred_lft forever

lukasz.zaroda@hypervisor-1:~$ cat /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    enp1s0f0:
      dhcp4: false
      dhcp6: false
  bridges:
    lxdbr0:
      interfaces: [enp1s0f0]
      dhcp4: false
      dhcp6: false
      addresses:
        - <redacted>/25
      gateway4: <redacted>
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
  vlans:
    lxdbr0.1679:
      id: 1679
      link: lxdbr0
      dhcp4: false
      dhcp6: false
      gateway6: <redacted>::1
      addresses:
        - <redacted>::2/64
      nameservers:
        addresses:
          - 2001:4860:4860::8888
          - 2001:4860:4860::8844

lukasz.zaroda@hypervisor-1:~$ ip -6 r
::1 dev lo proto kernel metric 256 pref medium
<redacted>::/64 dev lxdbr0.1679 proto kernel metric 256 pref medium
default via <redacted>::1 dev lxdbr0.1679 proto static metric 1024 pref medium

lukasz.zaroda@hypervisor-1:~$ ip -6 neigh
<redacted>::1 dev lxdbr0.1679  FAILED

What am I missing? Or is my approach wrong? I haven't dealt with VLANs before.

unable to connect to VM using SSH and unable to connect deployed MSSQL container in VM

Posted: 19 Mar 2021 04:12 PM PDT

Can anyone help me with the issue below?

I have added inbound rules with high priority, but I am still unable to communicate with the MSSQL (1433) container deployed on the Linux VM, and I am unable to SSH in.

I am getting the error below:

Network connectivity blocked by security group rule: DefaultRule_DenyAllInBound

Nginx with letsencrypt - duplicate value "TLSv1.2"

Posted: 19 Mar 2021 04:17 PM PDT

An SSL test capped my result at B because TLS 1.0 and 1.1 are enabled. I know I should add a line like this to my config: ssl_protocols TLSv1.2 TLSv1.3;

This is my minimized config:

server {
    root /var/www/mezinamiridici.cz/html;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot

    ssl_protocols TLSv1.2 TLSv1.3;

    ssl_certificate /etc/letsencrypt/live/mezinamiridici.cz/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mezinamiridici.cz/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

But there is an error:

2021/03/19 20:19:44 [warn] 32195#32195: duplicate value "TLSv1.2" in /etc/letsencrypt/options-ssl-nginx.conf:10  

which probably comes from this Let's Encrypt config located at /etc/letsencrypt/options-ssl-nginx.conf:

# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.

ssl_session_cache shared:le_nginx_SSL:1m;
ssl_session_timeout 1440m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

I tried moving my line above and below that include, without luck. Is there a way for both configurations to coexist?
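Since nginx honors only one ssl_protocols list per context and warns about the conflicting values, the usual fix is to change the protocol list in the included file itself instead of adding a second directive next to the include. A sketch (as the file's header warns, Certbot then stops auto-updating it):

```nginx
# In /etc/letsencrypt/options-ssl-nginx.conf, change the existing line to:
ssl_protocols TLSv1.2 TLSv1.3;
```

The duplicate ssl_protocols line in the server block can then be removed, letting the include supply the setting.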

Split Horizon DNS Causes Entire Network DNS Failure

Posted: 19 Mar 2021 06:35 PM PDT

The Problem

I have set up a Raspberry Pi 4 on my local network. This device is both a web server and (now, by necessity) a DNS server. It needs to be reachable from both inside and outside the network. I have a public domain name being updated via a DynDNS service that receives my router's public IP. Port forwarding then routes all incoming web traffic on port 80 to the Pi's private IP. When I visit my domain in a browser (while connected to a network other than the one the Pi is on), it works: the domain resolves to my router's public IP, port forwarding sends the port 80 request to the Pi, and it serves the page. Voila!

The issue is when I try to reach this same page from inside the network. When I use the same domain name, it times out and will not load (because it resolves to the public IP of my router, not the private IP of the web server).

Using split-horizon DNS, my plan is to use this web server as a DNS server as well, to force all devices on my internal network to resolve this domain name to its private IP rather than its public IP. Once I leave my network, the domain will be resolved by other name servers, so it will resolve to the public IP (and will be fine).

What I Have Done

  • I have set up the Pi with no desktop, CLI only
  • I connected the Pi to a LAN port on my router
  • I have set up SSH and opened port 22 in ufw
  • Downloaded, installed, and set up dnsmasq
  • Opened port 53 in my server firewall (ufw)
  • Went into the router management page of the box given to me by my ISP and pointed both the primary and secondary DNS servers from their defaults to the private IP of my Pi server
  • Copied both the primary and secondary DNS server IP addresses from the ISP router management page (previous step) and put them in my dnsmasq config file (/etc/dnsmasq.conf) as server=ip.ip.ip.ip entries. I am not going to give these IPs, so this serves as a mock for how I entered both the primary and secondary, each on its own line, one right after the other
  • I used the following dnsmasq config settings: domain-needed, bogus-priv, cache-size=750, log-queries, log-faciliy=/var/log/dnsmasq.log, server=isp.ip.primary.0, server=isp.ip.secondary.0, server=8.8.8.8, server=8.8.4.4, dhcp-mac, dhcp-reply-delay
  • There are no errors in my dnsmasq.log file
  • There are no errors in sudo service dnsmasq status
  • I cannot seem to get into my ISP router's logs. I can see them, but there are so many that the entire web page crashes when I try to scroll through them all

Outcome

No device on my network can now resolve any domains at all, until I change back to my ISP's default primary/secondary.

What am I missing or doing wrong?
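For the split-horizon part itself, dnsmasq needs very little; a minimal fragment with placeholder names and addresses (the total outage described smells more like the Pi not answering on port 53 for the LAN interface at all, which listen-address helps rule out):

```
# /etc/dnsmasq.conf additions (domain and IPs are placeholders)
address=/mydomain.example.com/192.168.1.50   # answer with the Pi's private IP
listen-address=127.0.0.1,192.168.1.50        # serve LAN clients, not just localhost
```

Everything not matched by an address= line falls through to the configured server= upstreams as before.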

When dig returns a single A record, but the IP changes between calls, what is this a sign of?

Posted: 19 Mar 2021 05:16 PM PDT

In my internal work network, whenever I launch dig against a particular hostname, I get result similar to this:

;; ANSWER SECTION:  some.internal.host.com.     10  IN  A   10.210.54.121  

If I keep spamming the same dig some.internal.host.com command, the response always has a single A record, but the IP address is changing between calls.

I assume it is some form of load-balancing, but the full list of IP addresses in the pool is hidden from inquisitive persons.

What could be the technique that is used here to achieve the described result?
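Round-robin DNS (in a load balancer or the authoritative server itself) fits the symptoms: each answer carries a single A record and a short TTL so clients re-query often. A toy model of that behavior, with an invented pool, just to illustrate the cycling:

```shell
# The "server" knows a pool of addresses but answers each query with a
# single A record, advancing through the pool on every call.
pool="10.210.54.121 10.210.54.122 10.210.54.123"   # hypothetical hidden pool
i=0
resolve() {
    set -- $pool                  # split the pool into positional parameters
    idx=$(( (i % $#) + 1 ))       # pick the next entry, wrapping around
    i=$(( i + 1 ))
    eval "answer=\${$idx}"        # "answer" is the single A record returned
}
```

The 10-second TTL in the question is what makes the rotation visible to dig; the full pool stays hidden because only one record is ever handed out per query.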

How to set custom settings in resolv.conf when using systemd-resolved

Posted: 19 Mar 2021 07:45 PM PDT

I would like to use option values in /etc/resolv.conf. From reviewing the relevant man page (https://linux.die.net/man/5/resolv.conf) I noticed that I can configure various options. For example one can set the timeout on dns requests using options timeout:10. Another example is to use attempts option to control the number of retries for DNS resolution. So far so good.

The problem is that systemd-resolved overwrites the file. From what I can tell, one needs to edit the resolved.conf file to set the options that resolved will use (relevant man page: https://jlk.fjfi.cvut.cz/arch/manpages/man/resolved.conf.5). The problem here is that the options I want to set are not exposed in resolved.conf.

Is there a way to configure these parameters via systemd-resolved? Or should I just edit /etc/resolv.conf and be done with it?
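Since systemd-resolved does not expose timeout/attempts, one common workaround is to stop it managing /etc/resolv.conf and keep a static file instead. A sketch, assuming the usual setup where /etc/resolv.conf is a symlink to resolved's stub file (nameserver below is an example):

```shell
# Replace the stub-resolv.conf symlink with a static file you control:
rm /etc/resolv.conf
cat > /etc/resolv.conf <<'EOF'
nameserver 8.8.8.8
options timeout:10 attempts:3
EOF
```

Applications using the glibc resolver then honor these options directly; the trade-off is losing resolved's per-link DNS handling for those lookups.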

How to set name in command line when running `aws ec2 run-instances`

Posted: 19 Mar 2021 04:03 PM PDT

How do I give an instance a Name on the command line when running aws ec2 run-instances?

I can't find it in the official docs.

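The Name shown in the console is an ordinary tag rather than a dedicated flag, so it can be attached at launch via --tag-specifications, an option the AWS CLI supports on run-instances (AMI and instance type below are placeholders):

```shell
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-server}]'
```

Adding a second specification with ResourceType=volume tags the attached volumes the same way.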

How to install libsrtp 1.5 on Centos 7?

Posted: 19 Mar 2021 10:02 PM PDT

yum install libsrtp  

gives me version 1.4.4-10.

How can I install version >= 1.5 instead?
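The CentOS 7 repositories top out at 1.4.x, so a newer libsrtp generally has to be built from source; a sketch (the release tag and install prefix are assumptions to adapt):

```shell
# Build libsrtp 1.5.x from the upstream source:
yum install -y gcc make wget
wget https://github.com/cisco/libsrtp/archive/v1.5.4.tar.gz
tar xzf v1.5.4.tar.gz
cd libsrtp-1.5.4
./configure --prefix=/usr/local
make && make install
```

Whatever consumes libsrtp (e.g. a PBX build) then needs to be pointed at the /usr/local copy rather than the packaged 1.4.4.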

Reset session is not working and RDS is currently busy

Posted: 19 Mar 2021 04:00 PM PDT

I ran reset session /server:example from CMD after checking the sessions with query session, but I still can't access the remote server.

ERROR :

"The task you are trying to do can't be completed because Remote Desktop Service is currently busy. Please try again in a few minutes. Other users should still be able to log on."

Mikrotik: can't access to port from outside

Posted: 19 Mar 2021 06:05 PM PDT

I exposed web-service on my local machine to the external IP via Mikrotik and can access it via MY_EXTERNAL_IP:5000.

But my nginx server can't access MY_EXTERNAL_IP:5000. Logs:

14:09:53 firewall,info dstnat: in:bridge out:(none), src-mac 60:03:08:8c:a7:30, proto TCP (SYN), 192.168.1.19:50135->MY_EXTERNAL_IP:5000, len 64
14:09:55 firewall,info input: in:ether1 out:(none), src-mac 04:62:73:a2:55:49, proto TCP (SYN), 188.196.62.73:47850->MY_EXTERNAL_IP:5000, len 60

Nginx error log:

[error] 2048#0: *434 connect() failed (111: Connection refused) while connecting to upstream, client: MY_EXTERNAL_IP, server: MY_DOMAIN.com, request: "GET / HTTP/1.1", upstream: "http://MY_EXTERNAL_IP:5000/", host: "MY_DOMAIN.com"  

Why can't the nginx server access MY_EXTERNAL_IP:5000 (causing the 502 error) when I can from a browser?

I suppose I need an additional firewall filter rule. Which one?
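The first log line (in:bridge, source 192.168.1.19) shows a LAN host reaching the external IP from inside, which on RouterOS needs hairpin NAT: a src-nat rule in addition to the existing dst-nat, so replies flow back through the router. A sketch, where 192.168.1.X stands in for the machine actually hosting the service:

```
/ip firewall nat add chain=srcnat src-address=192.168.1.0/24 dst-address=192.168.1.X protocol=tcp dst-port=5000 action=masquerade
```

Without it, the service host answers the LAN client directly with its private address, and the client drops the reply as not matching the connection it opened to the external IP.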

IIS rewrite language based rule and default language

Posted: 19 Mar 2021 08:02 PM PDT

I want to create an IIS rewrite rule that is based on the users browser language, but only for a specific set of languages.

Our website is available in English (en), French (fr), and Dutch (nl). I can create this rewrite rule:

<rule name="Redirect short url to long url: NEW SYNTAX 2017-11-01" stopProcessing="true">
    <match url="^([_0-9a-z-]+)$" />
    <conditions>
        <add input="{HTTP_HOST}" pattern="mydomain\.be$" />
        <add input="{HTTP_ACCEPT_LANGUAGE}" pattern="^(en|fr|nl)?" />
    </conditions>
    <action type="Redirect" url="https://www.myotherdomain.be/{C:1}/projects/{R:1}?type=shorturl" appendQueryString="false" redirectType="Found" />
</rule>

This works fine when I configure my browser in one of the three specified languages (en/nl/fr). E.g. the URL http://mydomain.be/test redirects to https://www.myotherdomain.be/nl/projects/test?type=shorturl (when my browser is configured in Dutch).

But when I configure my browser to e.g. "ru", the same URL redirects to https://www.myotherdomain.be//projects/test?type=shorturl

For any other language I want the long URL to default to /en/ instead of //. Is there any way to do this using IIS rewrite rules?

Thanks in advance for any guidance!
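One way to get the /en/ default, using the question's own syntax, is to drop the trailing "?" from the language pattern (so unmatched languages fall through instead of matching empty) and add a catch-all second rule after it. A sketch, untested against a live IIS:

```xml
<rule name="Redirect short url: default to English" stopProcessing="true">
    <match url="^([_0-9a-z-]+)$" />
    <conditions>
        <add input="{HTTP_HOST}" pattern="mydomain\.be$" />
    </conditions>
    <action type="Redirect" url="https://www.myotherdomain.be/en/projects/{R:1}?type=shorturl" appendQueryString="false" redirectType="Found" />
</rule>
```

Because the first rule uses stopProcessing="true", en/fr/nl browsers never reach this fallback.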

Setting cache/expire time to every element in nginx?

Posted: 19 Mar 2021 10:02 PM PDT

When configuring web servers (nginx), is it uncommon to set an expire time and cache headers for every element in every directory that gets requested by the client browser?

Some examples of expire times I just found on the nginx site and Server Fault:

location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
    expires 30d;
    add_header Pragma public;
    add_header Cache-Control "public";
}

location ~* \.(?:css|gif|jpe?g|png)$ {
    expires max;
}

How would I write the location line if I wanted to experiment with setting an expiration on every element?

location ~* \.(?:*)$ {
    expires 2d;
    add_header Pragma public;
    add_header Cache-Control "public";
}
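A bare `*` is not valid in that regex position; to experiment with an expiry on everything, matching any extension (or simply every URI) is typically written as below. Note that blanket public caching of HTML and dynamic responses is usually unwanted, so this is for experimentation only:

```nginx
# Any URI ending in a file extension:
location ~* \.\w+$ {
    expires 2d;
    add_header Pragma public;
    add_header Cache-Control "public";
}

# Or genuinely every request:
location / {
    expires 2d;
}
```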

Rewriting facility/severity in rsyslog v7 before shipping off to a remote collector

Posted: 19 Mar 2021 04:00 PM PDT

I have a machine "A" with a local rsyslogd, and a remote collector machine "B" elsewhere listening with its own syslog daemon and log processing engine. It all works great...except that there is one process on A that logs at local0.notice, which is something that B's engine can't handle.

What I want to do is rewrite local0.notice to local5.info before the event is shipped off to B. Unfortunately I can't change B, and I can't change the way the process does its logging on A. Nor can I upgrade rsyslogd on A from v7.6 to v8 (which appears to have some very useful-looking features, like mmexternal, which might have helped).

I think I must be missing something obvious, I can't be the first person to need this type of feature. Basically it comes down to finding some way of passing through rsyslog twice with a filter in between: once as the process logs, through the filter to change the prio, and then again to forward it on.

What I've tried:

  • Configuring rsyslog to log local0.notice to a file, then reading that file with an imfile directive that tags it and sets the new facility/severity, followed by an if statement that looks for the tag and calls an omfwd action. I thought perhaps I could persuade rsyslog to write a file at the right priority and then have rsyslog come back around and naturally pick it up. Sadly, no dice.
  • Loading an omprog module that calls logger -p local5.info if syslogfacility-text == 'local0', stopping processing there, and then having another config element check for syslogfacility-text == 'local5' and, if so, call an omfwd action. Strangely this works but doesn't squash the original messages; now I just get two sets of logs forwarded to B, one local0 and one local5.

Are there any solutions out there?
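One v7-compatible approach, sketched below, is to forge the priority in the forwarding template itself: the leading token of a syslog frame is just <PRI>, and 174 = 21*8 + 6 encodes local5.info. B's address, port, and protocol are placeholders:

```
# rsyslog v7 RainerScript sketch: rewrite the PRI on the wire, then stop.
template(name="AsLocal5Info" type="string"
         string="<174>%timestamp% %hostname% %syslogtag%%msg%\n")

if $syslogfacility-text == 'local0' and $syslogseverity-text == 'notice' then {
    action(type="omfwd" target="B.example.com" port="514" protocol="udp"
           template="AsLocal5Info")
    stop
}
```

The stop keeps the original local0.notice event from also matching any later forwarding rules, which avoids the duplicate-stream problem seen with the omprog attempt.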

How to redirect full url to subdomain?

Posted: 19 Mar 2021 05:07 PM PDT

I am trying to redirect a site page to a subdomain (same URL) with .htaccess, but it's stuck in a redirect loop or redirected to a 404 page. The following is the code in my .htaccess:

RewriteRule ^uk.example.com/2-day-course.html http://www.example.com/2-day-course.html [L,R=301,NC]  

Some of the URLs need to be redirected from the subdomain to the main domain on the same page; that has the same issue as above.

Is there a way to do this with the .htaccess file?
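A RewriteRule pattern never sees the hostname, only the path, so the host has to be matched in a RewriteCond instead. A sketch using the domain names from the question:

```
RewriteEngine On
RewriteCond %{HTTP_HOST} ^uk\.example\.com$ [NC]
RewriteRule ^2-day-course\.html$ http://www.example.com/2-day-course.html [L,R=301]
```

The reverse direction works the same way with the conditions swapped; keeping each rule host-conditional is also what breaks the redirect loop.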

Reset subscription or fix web app

Posted: 19 Mar 2021 06:05 PM PDT

I'm trying to set up a web app, but I keep on getting errors.

If I try in the portal I keep on seeing that the status is "deleted" and the deployment failed because application insights is not supported in my region.

I do not need application insights.

In Visual Studio I get the following error

--------------------------- Microsoft Visual Studio ---------------------------
The following errors occurred during the deployment:

Error during deployment for resource 'AppInsightsComponents MySite' in resource group 'MegaSale': MissingRegistrationForLocation: The subscription is not registered for the resource type 'components' in the location 'Central US'. Please re-register for this provider in order to have access to this location..

Error during deployment for resource 'MySite' in resource group 'MegaSale': NoRegisteredProviderFound: No registered resource provider found for location 'West Europe' and API version '2.0' for type 'servers'. The supported api-versions are '2014-01-01, 2014-04-01, 2014-04-01-preview'. The supported locations are 'centralus, eastus, westus, southcentralus, eastus2, northcentralus, eastasia, southeastasia, japanwest, japaneast, northeurope, westeurope, brazilsouth, australiaeast, australiasoutheast, centralindia, westindia, southindia, canadacentral, canadaeast, westus2, westcentralus, uksouth, ukwest'..

and this occurs no matter which region I choose.

I would like to use Western Europe, but can accept a different region if it would just work.

I don't mind scrapping my whole subscription and starting anew, though I'd rather not if possible.

The resource group I certainly don't mind trashing totally.

Does upstart have a built in way to respawn a process that is not stopped using initctl stop in specific intervals?

Posted: 19 Mar 2021 09:05 PM PDT

We thought the following directives would try to respawn the process 10 times in 60 seconds (i.e. once every 6 seconds):

respawn
respawn limit 10 60

However, these directives restart the process as soon as it crashes, so it might actually respawn the process 10 times in 1 second.

Is there a way to configure our service so that when it crashes, upstart tries to respawn it 10 times, once every 6 seconds?
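Upstart has no built-in respawn interval; a widely used workaround is to delay in a post-stop stanza, which runs before each respawn attempt. A sketch:

```
# Not a built-in rate setting: the sleep in post-stop spaces the
# respawns ~6 s apart, so "limit 10 60" covers roughly one minute.
respawn
respawn limit 10 60
post-stop script
    sleep 6
end script
```

The side effect is that a deliberate `initctl stop` also waits those 6 seconds, which is usually acceptable.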

Apache 2.2 reverse proxy connection to tomcat hangs then returns 502 Proxy Error

Posted: 19 Mar 2021 09:05 PM PDT

I am running:

Apache/2.2.15 (Unix) on CentOS release 6.6 (Final)  

I am running a reverse proxy that forwards requests on to a tomcat server running on localhost:8080. My config looks like this:

<VirtualHost *:80>
....
  <Proxy *>
    Order deny,allow
    Allow from all
    AuthType Basic
    AuthName "Username/Password"
    AuthUserFile /etc/httpd/conf.d/users.auth
    Require valid-user
  </Proxy>

  Setenv proxy-nokeepalive 1
  Setenv proxy-sendcl 1
  Setenv proxy-initial-not-pooled 1
  SetEnv force-proxy-request-1.0 1

  ProxyPass /analytics http://localhost:8080/analytics disablereuse=on ttl=1 smax=0
  ProxyPassReverse /analytics http://localhost:8080/analytics

You can see I've chucked a whole load of stuff in there out of desperation - I've been trying to get this to work for several days.

Most of the queries work fine, but particularly after the server has sat unused for more than about 15 minutes, requests hang when I return, seemingly only on POST requests to pages served from Tomcat. The hang usually lasts 1 minute before the 502 error. I have also seen it on GET pages, but not recently, so some of the configuration changes I made may have changed that.

I've enabled debugging in the log and get:

[Sat Jan 30 21:28:49 2016] [error] [client 192.168.213.24] (70007)The timeout specified has expired: proxy: error reading status line from remote server localhost, referer: http://analytics.mysite.com/analytics/msqlQuery
[Sat Jan 30 21:28:49 2016] [debug] mod_proxy_http.c(1414): [client 192.168.213.24] proxy: read timeout, referer: http://analytics.mysite.com/analytics/msqlQuery
[Sat Jan 30 21:28:49 2016] [debug] mod_proxy_http.c(1465): [client 192.168.213.24] proxy: NOT Closing connection to client although reading from backend server localhost failed., referer: http://analytics.mysite.com/analytics/msqlQuery
[Sat Jan 30 21:28:49 2016] [error] [client 192.168.213.24] proxy: Error reading from remote server returned by /analytics/msqlQuery, referer: http://analytics.mysite.com/analytics/msqlQuery
[Sat Jan 30 21:28:49 2016] [debug] proxy_util.c(2112): proxy: HTTP: has released connection for (localhost)

When I get the hanging behaviour, I don't see anything show up in the Tomcat logs at all. I suspect that the call never gets that far.

It's as if Apache thinks the connection is alive when somehow it's not (even though I think I've configured it to get a fresh connection every time).

Many thanks for any assistance!

Debug slow lan (ssh, nfs) file transfer

Posted: 19 Mar 2021 07:01 PM PDT

I've got two linux boxes attached to a gigabit switch. They both have gigabit NICs, cables are cat7.

Testing the network with iperf shows a fast connection, but transferring files with rsync, scp, or an NFS share is slow.

I'm testing with one 1GB file.

iperf result:

Client connecting to odroid, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.26 port 58788 connected with 192.168.1.32 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   979 MBytes   821 Mbits/sec

The transfer speed with rsync, scp, or NFS is in all cases about 13 MB/s.

scp:

scp bigfile odroid:/mnt/usb1/
bigfile                          57%  590MB  12.2MB/s   00:35 ETA^CKilled by signal 2.

rsync:

rsync --progress bigfile /mnt/usb1/
bigfile
     44,695,552   4%   12.15MB/s    0:01:11  ^C

nfs:

binaryplease➜~(master✗)» time cp bigfile /mnt/nfs/usb1/
cp -i bigfile /mnt/nfs/usb1/  0.01s user 0.94s system 1% cpu 1:11.06 total

1024 MB / 71 sec = 14.42 MB/s

Since the iperf test shows a fast network connection, I assumed a problem with the storage devices being slow, but that doesn't seem to be the case either:

Client, SSD, internal:

binaryplease➜~(master✗)» sudo hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   20344 MB in  2.00 seconds = 10181.50 MB/sec
 Timing buffered disk reads: 1498 MB in  3.00 seconds = 498.98 MB/sec

binaryplease➜~(master✗)» dd if=/dev/zero of=test oflag=direct bs=8M count=64
64+0 records in
64+0 records out
536870912 bytes (537 MB) copied, 2.03861 s, 263 MB/s

binaryplease➜~(master✗)» dd if=test of=/dev/null iflag=direct bs=8M
64+0 records in
64+0 records out
536870912 bytes (537 MB) copied, 1.11392 s, 482 MB/s

Server, USB 3.0 Drive, external:

➜  ~ git:(master) ✗ sudo hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   1980 MB in  2.00 seconds = 991.66 MB/sec
 Timing buffered disk reads: 266 MB in  3.01 seconds =  88.27 MB/sec

➜  usb1  dd if=/dev/zero of=test oflag=direct bs=8M count=64
64+0 records in
64+0 records out
536870912 bytes (537 MB) copied, 6.53386 s, 82.2 MB/s

➜  usb1  dd if=test of=/dev/null iflag=direct bs=8M
64+0 records in
64+0 records out
536870912 bytes (537 MB) copied, 7.13567 s, 75.2 MB/s

OS on client (Linux arch):

Linux binaryplease-laptop 4.3.3-2-ARCH #1 SMP PREEMPT Wed Dec 23 20:09:18 CET 2015 x86_64 GNU/Linux  

OS on server (Ubuntu server for odroid):

Linux odroid 3.10.92 #1 SMP PREEMPT Tue Nov 17 00:15:24 BRST 2015 armv7l armv7l armv7l GNU/Linux  

On both systems, neither the CPU nor the RAM is maxed out.

If I interpret the results correctly, the write speed of the server's drive (82.2 MB/s) should easily be matched by the network. Why is the file transfer so slow?

I hope the information provided is sufficient and someone can help me find the bottleneck.

Thanks.

Authentication and logging of users for a Wireless ISP?

Posted: 19 Mar 2021 05:19 PM PDT

I have to upgrade a Wireless ISP's (WISP) network. Their current setup consists of a router (Mikrotik RouterBoard 1100AHx2), Ubiquiti Rockets (with sector antennas) for clients, and Ubiquiti NanoStations for client CPEs.

Their security consists of WPA2-PSK for the CPEs, and clients dial PPPoE to get access. PPPoE makes it trivial to control users: disconnect them, walled-garden them when they don't pay, and so on.

But PPPoE is always problematic in other respects (MTU issues, tunnels dropping at random, etc.), so I want to keep things as plain as possible: no tunneling of any sort, just bare Ethernet.

Authentication can be solved easily with 802.1X (EAP), which all the devices support just fine. Then it's just a matter of assigning IP addresses with DHCP (and even DHCPv6).

But my problem is that 802.1X authentication is based on username and password, while DHCP only sees the client's MAC address. So I need a way to hand out an IP from a specific pool to each type of user. FreeRADIUS can act as a DHCP server and do this, but it doesn't seem possible to use the 802.1X credentials for DHCP; at least, I haven't found a way to do it.
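One direction I have been looking at (a sketch only, untested; the mac_pool table, its query, and the pool name are my own invention) is letting FreeRADIUS handle both sides and passing the pool choice through SQL, keyed on the MAC that arrives as Calling-Station-Id during 802.1X:

```
# Sketch, untested. 1) During 802.1X post-auth, record which pool the
# authenticated MAC belongs to (Calling-Station-Id carries the client MAC):
post-auth {
    update control {
        Pool-Name := "paid"    # e.g. chosen per user group
    }
    sql    # store (User-Name, Calling-Station-Id, pool) into mac_pool
}

# 2) In the DHCP virtual server (sites-available/dhcp), look the MAC up
#    again before the lease is allocated:
dhcp DHCP-Discover {
    update control {
        Pool-Name := "%{sql:SELECT pool FROM mac_pool WHERE mac = '%{DHCP-Client-Hardware-Address}'}"
    }
    dhcp_sqlippool
}
```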

What options do I have to accomplish this? New hardware is not an option; the solution has to be as FOSS as possible and run on Linux or FreeBSD.

Apache: virtual host and mod_fastcgi -- how does it work?

Posted: 19 Mar 2021 08:02 PM PDT

I read this article to set up a virtual host with mod_fastcgi, but I don't quite understand the following configuration:

FastCgiExternalServer /var/www/php5.external -host 127.0.0.1:9000
AddHandler php5-fcgi .php
Action php5-fcgi /usr/lib/cgi-bin/php5.external
Alias /usr/lib/cgi-bin/ /var/www/

Can someone explain this?

Edit: What confuses me is why an Alias is used here. Why not use /var/www/php5.external directly in the Action directive?
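For reference, here is the same config annotated with how I currently trace a request through it (my own reading, which may be exactly what I'm getting wrong):

```
FastCgiExternalServer /var/www/php5.external -host 127.0.0.1:9000
    # requests that resolve to this filesystem path are handed to the
    # FastCGI process listening on 127.0.0.1:9000

AddHandler php5-fcgi .php
    # every *.php request is tagged with the php5-fcgi handler

Action php5-fcgi /usr/lib/cgi-bin/php5.external
    # the handler triggers an internal redirect to this URL...

Alias /usr/lib/cgi-bin/ /var/www/
    # ...which this Alias maps to /var/www/php5.external, the exact
    # path the FastCgiExternalServer line above is watching for
```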

OpenVPN connection breaks after a few seconds

Posted: 19 Mar 2021 05:07 PM PDT

For the last few hours I have been trying to connect to a VPN network through OpenVPN. My connection settings are:

IPv4 tab:

Method: Automatic VPN
Routes: Use this connection only for resources on its network.

Then on the VPN tab:

fw01.vcloud.kirkeweb.dk
Type: Password with Certificates (TLS)

Under ADVANCED:

In General tab:
  Use custom gateway port is checked
  Use a TCP connection is checked
In TLS Authentication:
  Use TLS Authentication is checked.

The log is:

Aug  7 14:41:42 aegir NetworkManager[1188]: <info> Starting VPN service 'openvpn'...
Aug  7 14:41:42 aegir NetworkManager[1188]: <info> VPN service 'openvpn' started (org.freedesktop.NetworkManager.openvpn), PID 27429
Aug  7 14:41:42 aegir NetworkManager[1188]: <info> VPN service 'openvpn' appeared; activating connections
Aug  7 14:41:42 aegir NetworkManager[1188]: <info> VPN plugin state changed: starting (3)
Aug  7 14:41:42 aegir NetworkManager[1188]: <info> VPN connection 'fw01-TCP-8081-gc-config-v2' (Connect) reply received.
Aug  7 14:41:42 aegir nm-openvpn[27435]: OpenVPN 2.2.1 x86_64-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Mar 13 2014
Aug  7 14:41:43 aegir nm-openvpn[27435]: Control Channel Authentication: using '/home/lebowski/Desktop/KWvpn/fw01-TCP-8081-gc/fw01-TCP-8081-gc-tls.key' as a OpenVPN static key file
Aug  7 14:41:50 aegir NetworkManager[1188]: <info> VPN connection 'fw01-TCP-8081-gc-config-v2' (IP Config Get) reply received.
Aug  7 14:41:50 aegir NetworkManager[1188]: <info> VPN Gateway: 46.29.101.168
Aug  7 14:41:51 aegir NetworkManager[1188]: <info> VPN connection 'fw01-TCP-8081-gc-config-v2' (IP Config Get) complete.
Aug  7 14:41:51 aegir NetworkManager[1188]: <info> VPN plugin state changed: started (4)
Aug  7 14:42:10 aegir NetworkManager[1188]: <warn> VPN plugin failed: 2
Aug  7 14:42:10 aegir NetworkManager[1188]: <warn> VPN plugin failed: 1
Aug  7 14:42:10 aegir NetworkManager[1188]: <info> VPN plugin state changed: stopped (6)
Aug  7 14:42:10 aegir NetworkManager[1188]: <info> VPN plugin state change reason: 0
Aug  7 14:42:10 aegir NetworkManager[1188]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.
Aug  7 14:42:17 aegir NetworkManager[1188]: <info> VPN service 'openvpn' disappeared

The SSL certificate doesn't stay installed

Posted: 19 Mar 2021 07:01 PM PDT

The situation is the following: the platform is Windows Server 2008 R2. Installing the certificate in IIS Manager with a *.cer file succeeds, but if I refresh the manager (F5), the certificate vanishes from the list; accordingly, when adding an https binding in the Bindings window, the certificate is absent from the menu. However, if I open the certificates via the MMC console, it can be seen in the Personal store. Is there any way to make the web server "see" this certificate, or to keep it from disappearing from the list? Any advice on solving this problem is appreciated, thanks in advance!

P.S. The certificate was acquired from Thawte. All they provided me is account access where the certificate can simply be copy-pasted in two formats: PKCS#7 and X.509. Here is the manual I used. P.P.S. If I do Complete Certificate Request with the *.p7b file, I get an error:

Cannot find the certificate request that is associated with this certificate file. A certificate request must be completed on the computer where the request was created.
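One lead I found while writing this up (untested on my side; the serial number below is a placeholder): a certificate that shows up in the Personal store but vanishes from IIS Manager after a refresh is typically missing its private-key link, and certutil can supposedly repair that:

```
REM Run in an elevated command prompt on the server.
REM <serial> is the certificate's serial number from the MMC details tab.
certutil -repairstore my "<serial>"
```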
