Monday, January 24, 2022

Recent Questions - Server Fault

Blocking Symbols in Body with AWS WAF

Posted: 24 Jan 2022 12:29 AM PST

Hello, I want to block strings in the request body that contain the "&" character, for example, but it doesn't work.

I tried using the HTML decode text transformation and it also doesn't work. If I try with a word instead, for example "phone", and include that word in the body, it works perfectly. Why is it not working with HTML symbols?

WAF configuration
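For reference, a WAFv2 rule of the kind described (a byte match on the body with an HTML entity decode text transformation) might look roughly like the JSON below; the rule name and metric name are made up:

{
  "Name": "block-ampersand-in-body",
  "Priority": 0,
  "Action": { "Block": {} },
  "Statement": {
    "ByteMatchStatement": {
      "SearchString": "&",
      "FieldToMatch": { "Body": {} },
      "PositionalConstraint": "CONTAINS",
      "TextTransformations": [ { "Priority": 0, "Type": "HTML_ENTITY_DECODE" } ]
    }
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "block-ampersand-in-body"
  }
}

Note that WAF only inspects the first portion of the request body (typically the first 8 KB for regional web ACLs), so a match deeper in the body would not be seen.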

Which font package in Linux supports small hyphen minus font [migrated]

Posted: 23 Jan 2022 11:19 PM PST

I want to know which font package in Linux supports the "small hyphen-minus" character. I tried searching all around but was unable to find a suitable font package for it.

Worker roles missing on new RKE cluster on Ubuntu

Posted: 23 Jan 2022 10:43 PM PST

I've installed my first RKE cluster on Ubuntu 20.04.3. I followed the quickstart guide and configured 1 controller and 2 workers.

root@tk8sc1:~# /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
NAME     STATUS   ROLES                       AGE     VERSION
tk8sc1   Ready    control-plane,etcd,master   2d13h   v1.22.5+rke2r1
tk8sw1   Ready    <none>                      15h     v1.22.5+rke2r1
tk9sw2   Ready    <none>                      15h     v1.22.5+rke2r1

As you can see, the worker roles have not been applied. I read the troubleshooting page, which says to check whether the kubelet and kube-proxy containers are running; they are not, and neither are the images it mentions present, but the page doesn't say what to do next. I'm not sure what I've missed or what I should do next; I'd appreciate any help with my next steps.

root@tk8sw1:~# docker ps -a -f=name='kubelet|kube-proxy'
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
root@tk8sw1:~#
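As an aside, RKE2 does not run the kubelet or kube-proxy as Docker containers, so an empty docker ps is not by itself a failure. If the agents are otherwise healthy, the empty ROLES column is usually just a missing node label; a minimal sketch of adding one (node name taken from the output above):

# ROLES is derived from node-role.kubernetes.io/* labels; agent nodes get none by default
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml \
  label node tk8sw1 node-role.kubernetes.io/worker=true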

Nginx http to https redirect downloads an empty file instead

Posted: 23 Jan 2022 10:07 PM PST

I'm trying to redirect my Laravel + Vue.js + Nuxt.js project from HTTP to HTTPS, but when I enter http://example.com or http://www.example.com an empty file is downloaded instead.

What I have done so far:

1- Commented out default_type application/octet-stream in nginx.conf and added default_type text/html instead.

2- Defined types { } default_type "text/plain"; in the location / {} block of example.com.conf.

3- Tried an nginx redirect with the code below:

server {
    listen xx.xx.xx.xx:80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

4- Tried to redirect it with a .php file, using the following example.com.conf:

server {
    listen 37.152.191.249:80;
    server_name www.example.com example.com;

    access_log /usr/local/apache/domlogs/example.com.bytes bytes;
    access_log /usr/local/apache/domlogs/example.com.log combined;
    error_log /usr/local/apache/domlogs/example.com.error.log error;

    root /home/example/public_html/;
    index index.php;

    location / {
        types { } default_type "text/plain";
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_split_path_info         ^(.+\.php)(/.+)$;
        fastcgi_pass                    127.0.0.1:9000;
        fastcgi_index                   index.php;
        include                         fastcgi_params;
        fastcgi_param SCRIPT_FILENAME   $document_root$fastcgi_script_name;
        fastcgi_intercept_errors        off;
        fastcgi_buffer_size             16k;
        fastcgi_buffers                 4 16k;
        fastcgi_connect_timeout         300;
        fastcgi_send_timeout            300;
        fastcgi_read_timeout            300;
    }

    location ~* "/\.(htaccess|htpasswd)$" { deny all; return 404; }

    disable_symlinks if_not_owner from=/home/example/public_html;
}

The index.php in public_html:

<?php
$location = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
header('HTTP/1.1 301 Moved Permanently');
header('Location: ' . $location);
exit;

None of the above worked and the problem still persists.

Current configurations:

nginx -t output:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

example.com.ssl.conf :

server {
    listen xx.xx.xx.xx:443 http2 ssl;
    server_name example.com;
    ssl_certificate      /etc/pki/tls/certs/example.com.bundle;
    ssl_certificate_key  /etc/pki/tls/private/example.com.key;
    ssl_protocols TLSv1.2;
    ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EE3CDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA!RC4:EECDH:!RC4:!aNULL:!eN$
    ssl_prefer_server_ciphers   on;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 60m;
    return 301 https://www.example.com$request_uri;
}

server {
    listen xx.xx.xx.xx:443 http2 ssl;
    server_name www.example.com;

    access_log /usr/local/apache/domlogs/example.com.bytes bytes;
    access_log /usr/local/apache/domlogs/example.com.log combined;
    error_log /usr/local/apache/domlogs/example.com.error.log error;

    ssl_certificate      /etc/pki/tls/certs/example.com.bundle;
    ssl_certificate_key  /etc/pki/tls/private/example.com.key;
    ssl_protocols TLSv1.2;
    ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA!RC4:EECDH:!RC4:!aNULL:!eN$
    ssl_prefer_server_ciphers   on;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 60m;

    root /home/example/core/public/;
    index index.php;

    location / {
        proxy_set_header                Connection 'upgrade';
        proxy_http_version              1.1;
        proxy_pass                      https://xx.xx.xx.xx:3000$uri;
        proxy_intercept_errors          on; # In order to use error_page directive this needs to be on
        error_page                      404 = @php;
    }

    location @php {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_split_path_info         ^(.+\.php)(/.+)$;
        fastcgi_pass                    127.0.0.1:9000;
        fastcgi_index                   index.php;
        include                         fastcgi_params;
        fastcgi_param SCRIPT_FILENAME   $document_root$fastcgi_script_name;
        fastcgi_intercept_errors        off;
        fastcgi_buffer_size             16k;
        fastcgi_buffers                 4 16k;
        fastcgi_connect_timeout         300;
        fastcgi_send_timeout            300;
        fastcgi_read_timeout            300;
    }

    location ~* "/\.(htaccess|htpasswd)$" { deny all; return 404; }

    disable_symlinks if_not_owner from=/home/example/public_html;

    location /.well-known/acme-challenge {
        default_type "text/plain";
        alias /usr/local/apache/autossl_tmp/.well-known/acme-challenge;
    }

    location /.well-known/pki-validation {
        default_type "text/plain";
        alias /usr/local/apache/autossl_tmp/.well-known/acme-challenge;
    }
}

Current example.com.conf :

server {
    listen xx.xx.xx.xx:80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

I have not added the nginx -T report since it shows irrelevant configuration files from other websites.

Also, the server runs multiple sites, and the WordPress ones have no problem redirecting with the code provided in #3, but with the site that uses Nuxt.js I get an empty file downloaded instead.

Any help would be highly appreciated

Mailgun : Sending a message from a subdomain to main domain gets denied

Posted: 23 Jan 2022 11:23 PM PST

I've set up a subdomain in Mailgun to send transactional emails from an application (mq.domain_name). This seems to work fine as long as the recipient's domain is something other than the main domain we own, for example user@domain_name.

In that case I start getting a 4.1.8 <bounce+ewr46-username=domain_name@mg.domain_name>: Sender address rejected: Domain not found response, and the error code is 450.

Currently, the subdomain's DNS records point to Mailgun, and the main domain's MX records point to Outlook, as this is how we read email. Also, we haven't set an MX record for the subdomain, as we don't plan on receiving email on it.

So is it possible to send an email from a subdomain to the main domain? If so, how could I do it? And if I set an MX record for the subdomain, will I still be able to receive email on my main domain?

kubeadm upgrade fails checking etcd

Posted: 23 Jan 2022 09:52 PM PST

I have a running control plane consisting of 3 master nodes, and everything, including etcd, appears to work well. I tried to upgrade v1.21.1 -> v1.21.8 to familiarize myself with the k8s upgrade procedure. After "kubeadm upgrade plan" I went ahead:

kubeadm upgrade apply v1.21.8 --certificate-renewal=false --ignore-preflight-errors=CoreDNSUnsupportedPlugins,CoreDNSMigration --config=/root/kubeadm-upgrade.conf -f

The kubeadm-upgrade.conf just switches image pulls to another private registry, which works perfectly. I'm ignoring the CoreDNS errors because CoreDNS is actually deployed as a DaemonSet while kubeadm expects a Deployment.

Unfortunately the process stops when checking the etcd connection:

kubeadm upgrade apply v1.21.8 --certificate-renewal=false --ignore-preflight-errors=CoreDNSUnsupportedPlugins,CoreDNSMigration --config=/root/kubeadm-upgrade.conf -f -v=5
I0111 09:41:38.562596 48211 apply.go:112] [upgrade/apply] verifying health of cluster
I0111 09:41:38.562902 48211 apply.go:113] [upgrade/apply] retrieving configuration from cluster
[upgrade/config] Making sure the configuration is correct:
W0111 09:41:38.566434 48211 common.go:94] WARNING: Usage of the --config flag with kubeadm config types for reconfiguring the cluster during upgrade is not recommended!
I0111 09:41:38.567994 48211 initconfiguration.go:115] detected and using CRI socket: /var/run/dockershim.sock
I0111 09:41:38.568421 48211 interface.go:431] Looking for default routes with IPv4 addresses
I0111 09:41:38.568436 48211 interface.go:436] Default route transits interface "ens192"
I0111 09:41:38.569871 48211 interface.go:208] Interface ens192 is up
I0111 09:41:38.569968 48211 interface.go:256] Interface "ens192" has 2 addresses :[10.12.83.145/27 fe80::250:56ff:fe83:f0e9/64].
I0111 09:41:38.570006 48211 interface.go:223] Checking addr 10.12.83.145/27.
I0111 09:41:38.570017 48211 interface.go:230] IP found 10.12.83.145
I0111 09:41:38.570030 48211 interface.go:262] Found valid IPv4 address 10.12.83.145 for interface "ens192".
I0111 09:41:38.570040 48211 interface.go:442] Found active IP 10.12.83.145
I0111 09:41:38.779650 48211 version.go:185] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
W0111 09:41:38.783665 48211 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": dial tcp 34.107.204.206:443: connect: connection refused
W0111 09:41:38.783692 48211 version.go:103] falling back to the local client version: v1.21.8
I0111 09:41:38.783878 48211 common.go:163] running preflight checks
[preflight] Running pre-flight checks.
I0111 09:41:38.783933 48211 preflight.go:80] validating if there are any unsupported CoreDNS plugins in the Corefile
[WARNING CoreDNSUnsupportedPlugins]: start version '' not supported
I0111 09:41:38.812763 48211 preflight.go:108] validating if migration can be done for the current CoreDNS release.
[WARNING CoreDNSMigration]: CoreDNS will not be upgraded: start version '' not supported
[upgrade] Running cluster health checks
I0111 09:41:38.825780 48211 health.go:162] Creating Job "upgrade-health-check" in the namespace "kube-system"
I0111 09:41:38.857408 48211 health.go:192] Job "upgrade-health-check" in the namespace "kube-system" is not yet complete, retrying
I0111 09:41:39.862643 48211 health.go:192] Job "upgrade-health-check" in the namespace "kube-system" is not yet complete, retrying
...

The etcd log on that node complains, as far as I can see, about the client cert:

2022-01-11 08:41:25.916730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2022-01-11 08:41:35.929543 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2022-01-11 08:41:45.920101 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2022-01-11 08:41:54.371333 I | embed: rejected connection from "10.12.83.145:47034" (error "remote error: tls: bad certificate", ServerName "")
2022-01-11 08:41:54.380537 I | embed: rejected connection from "10.12.83.145:47042" (error "remote error: tls: bad certificate", ServerName "")
2022-01-11 08:41:54.393896 I | embed: rejected connection from "10.12.83.145:47046" (error "remote error: tls: bad certificate", ServerName "")
2022-01-11 08:41:55.386489 I | embed: rejected connection from "10.12.83.145:47080" (error "remote error: tls: bad certificate", ServerName "")

While etcd communication usually works, as one can see in the first lines of the log excerpt, kubeadm obviously fails or uses a non-working cert. I already changed etcd to ignore client authentication by starting it without "- --client-cert-auth=true" ... unfortunately the issue remains :(

I wonder what kubeadm does differently when checking the etcd connection, and whether I can somehow configure kubeadm with a flag or in the config to use the correct cert?
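As a hedged diagnostic sketch (assuming the standard kubeadm certificate paths), one way to see whether etcd accepts the client certificates kubeadm-managed components normally use is to call etcd directly with the same CA and client key pair:

ETCDCTL_API=3 etcdctl \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert   /etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key    /etc/kubernetes/pki/apiserver-etcd-client.key \
  endpoint health
# /etc/kubernetes/pki/etcd/healthcheck-client.{crt,key} is the other client pair
# kubeadm generates and is worth testing the same way.

If one pair is accepted and another rejected, comparing their issuing CA against the CA etcd was started with usually narrows the problem down.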

setup rule for alias domain

Posted: 23 Jan 2022 08:49 PM PST

I have Microsoft Office 365 for Business. I have a main domain, domainA.com, and an alias domain, domainB.com.

I have set up a shared mailbox, support@, which accepts email on both domains.

Now I want to set up a rule for each domain to redirect to different inboxes. So domainA.com will have one redirect rule, and domainB.com will have another rule.

The problem is: when I add a rule to filter all incoming email with the recipient support@domainB.com and save it, it removes the domain part of the rule and only matches the prefix support@. So it seems I cannot have any rules that check the domain part of the recipient.

Does anyone know how to create a rule that matches the whole email address, both the prefix and the domain part, with the domain being either the main domain or an alias/addon domain of the 365 account?

Does my logging have redundant network info by including all reply fields?

Posted: 23 Jan 2022 08:42 PM PST

I am logging conntrack events from the kernel on my router to fluentd, and I've been wondering whether I'm logging too much info. The fields:

src
dst
proto
spt (source port)
dpt (destination port)
pkts (packets)
bytes (bytes)
r_src (remote source)
r_proto (remote protocol)
r_spt (remote source port)
r_dpt (remote destination port)
r_pkts (remote packets)
r_bytes (remote bytes)

Logging looks like

time:24-01-2022 04:08:31:242 src:192.168.0.101 dst:8.8.8.8 dst_geo:US,Mountain View dst_geo_asn:GOOGLE dst_host:redacted.net proto:TCP spt:51362 dpt:443 pkts:11 bytes:2014 r_src:8.8.8.8 r_src_host:redacted.net r_src_geo:US,Mountain View r_src_asn:GOOGLE r_dst:100.64.0.32.1 r_proto:TCP r_spt:443 r_dpt:51362 r_pkts:12 r_bytes:2208  

I'm thinking I should keep src, dst and r_src, r_dst, as they will not always match due to NAT.

But from what I can tell, I can safely remove:

r_pkts
r_bytes
r_spt
r_dpt
pkts
bytes

As this info will always be represented from the source inside my LAN, is that correct?

Unable to use fwmark on Debian 11 (bullseye) to change routing behavior

Posted: 24 Jan 2022 12:38 AM PST

I have a recipe I have already used in many cases, but this time it doesn't work on Debian 11 (kernel 5.10.0-10-amd64).

My setup is basically an internal interface eth0 for an RFC1918 LAN, and two external interfaces connected to the ISPs' boxes:

eth1 for ISP1, the default router at 10.0.0.254, with public IP 1.2.3.4 (figuratively)

eth2 for ISP2, with a router at 10.0.3.254 and public IP 2.3.4.5

I have different possible routes. I want to control which route my packets take, so I create a rule and an fwmark. First I append 2<tab>secondrouter to /etc/iproute2/rt_tables, then:

ip rule add fwmark 0x3 lookup secondrouter
ip route add default via 10.0.3.254 table secondrouter

Everything looks fine according to ip route list table secondrouter and ip rule list.

at this time I am able to do:

curl -4 ifconfig.me
1.2.3.4 #<- public ip address of my default route

Then I do

iptables-legacy -t mangle -A OUTPUT -d 34.117.59.81 -j MARK --set-mark 0x3   

now if I do

curl -4 ifconfig.me
<timeout>

where I expected 2.3.4.5 as the public IP. So clearly the marked packets do not take the route from the ip route table; worse, the request times out.

If I do exactly the same thing on an older Debian, it works perfectly.

NB: if I do a

ip route add 34.117.59.81 via <second router IP>  

my curl test works perfectly as expected

curl -4 ifconfig.me
2.3.4.5 #<- Pub ip address of my second router

My problem occurs when using iptables or iptables-legacy to mark packets for routing. BTW, I have plenty of iptables rules that work fine, so it does not look like an iptables issue.

iptables-legacy-save
# Generated by iptables-save v1.8.7 on Sun Jan 23 22:35:06 2022
*mangle
:PREROUTING ACCEPT [41:5019]
:INPUT ACCEPT [41:5019]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [44:4752]
:POSTROUTING ACCEPT [44:4752]
-A OUTPUT -d 192.168.0.0/16 -j RETURN
-A OUTPUT -d 172.16.0.0/12 -j RETURN
-A OUTPUT -d 10.0.0.0/8 -j RETURN
-A OUTPUT -d 34.117.59.81/32 -j MARK --set-xmark 0x3/0xffffffff
COMMIT
# Completed on Sun Jan 23 22:35:06 2022
# Generated by iptables-save v1.8.7 on Sun Jan 23 22:35:06 2022
*filter
:INPUT DROP [19:7746]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [846:62420]
-A INPUT -m state --state INVALID -j DROP
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p udp -m udp --dport 500 -j ACCEPT
-A INPUT -p esp -j ACCEPT
-A INPUT -p ah -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -s 192.168.0.0/16 -p tcp -m tcp --dport 3128 -j ACCEPT
-A INPUT -s 172.16.0.0/12 -p tcp -m tcp --dport 3128 -j ACCEPT
-A INPUT -s 10.0.0.0/8 -p tcp -m tcp --dport 3128 -j ACCEPT
-A FORWARD -m state --state INVALID -j DROP
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.0.0/16 -d 10.0.6.0/24 -j ACCEPT
-A FORWARD -s 10.0.6.0/24 -d 192.168.0.0/16 -j ACCEPT
COMMIT
# Completed on Sun Jan 23 22:35:06 2022
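A hedged diagnostic sketch (table name and mark value taken from above) to see which route the kernel actually picks for a marked packet, and whether strict reverse-path filtering, a common culprit with multi-homed policy routing, is discarding the replies:

# Does the marked flow resolve via the secondrouter table?
ip route get 34.117.59.81 mark 0x3

# rp_filter set to strict (1) on the second uplink can silently drop replies;
# loose mode (2) is a common choice on multi-homed hosts.
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth2.rp_filter

It may also be worth checking that packets leaving eth2 actually carry ISP2's source address: for locally generated traffic the source IP is picked before the mangle mark reroutes the packet, so a POSTROUTING SNAT/MASQUERADE on eth2 is often needed for the marked flows.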

AWS - Adding multiple IPs to Security Group Inbound Rules

Posted: 23 Jan 2022 09:21 PM PST

I need to open 20 ports for 12 IP blocks.

Do I have to manually add 240 rules in this case? I feel like there must be a way to just copy and paste the IP list somewhere.

I googled and found it's not possible, but it's hard to believe. https://forums.aws.amazon.com/thread.jspa?threadID=191133
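If the console really offers no bulk option, the CLI at least makes the bulk add scriptable; a minimal sketch (the group ID, port list and CIDR list below are placeholders):

#!/usr/bin/env bash
# Add one ingress rule per port/CIDR combination (20 x 12 = 240 API calls).
SG_ID="sg-0123456789abcdef0"            # placeholder security group ID
PORTS="22 80 443"                       # ...list all 20 ports here
CIDRS="203.0.113.0/24 198.51.100.0/24"  # ...list all 12 blocks here

for port in $PORTS; do
  for cidr in $CIDRS; do
    aws ec2 authorize-security-group-ingress \
      --group-id "$SG_ID" --protocol tcp --port "$port" --cidr "$cidr"
  done
done

A customer-managed prefix list (one entry per CIDR block, referenced from the security group rules) is another way to avoid repeating the twelve blocks in every rule, and 240 rules may also run into the per-group rule quota, which is adjustable.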

Dnsmasq how to make captive portal pop up

Posted: 24 Jan 2022 12:04 AM PST

I am trying to implement a captive portal with dnsmasq. In the dnsmasq config, address=/#/10.42.0.1 doesn't work, so I have to use the ugly

address=/com/10.42.0.1
address=/uk/10.42.0.1
address=/org/10.42.0.1
address=/gov/10.42.0.1
...

This redirects all listed domains fine if you browse somewhere manually; however, the captive portal browser doesn't pop up by itself (checked on Mac, Windows and Linux), and there is a problem when a site redirects to HTTPS (like Facebook), since my portal page is HTTP only.

So how should it be set up correctly to replace all domain names, or even just to make the browser pop up with the captive portal page?

UPD: according to the man page:

--address=/#/1.2.3.4 will always return 1.2.3.4 for any query not answered from /etc/hosts or DHCP and not sent to an upstream nameserver by a more specific --server directive.

So how can I make sure there are no upstream servers for the NetworkManager dnsmasq?
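A minimal sketch of a dnsmasq config for this (assuming 10.42.0.1 is the portal host, as above): the wildcard address line only applies to queries that are not forwarded upstream, so the upstream servers have to be disabled:

# /etc/dnsmasq.d/captive.conf (sketch)
no-resolv             # do not read upstream servers from /etc/resolv.conf
address=/#/10.42.0.1  # answer every remaining query with the portal address
# also make sure no server=... lines are configured anywhere else

The OS captive-portal probes (e.g. connectivitycheck.gstatic.com, captive.apple.com) only trigger the pop-up when their plain-HTTP request gets an unexpected answer, so the portal web server also has to answer those requests, typically with a 302 to the portal page; HTTPS sites such as Facebook cannot be transparently redirected without certificate errors, which is expected behaviour.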

coredns will not start on the vagrant ubuntu/impish64 image but will start and run successfully with ubuntu/bionic

Posted: 24 Jan 2022 12:20 AM PST

I am trying to run k8s on ubuntu/impish64. I have a reference environment that successfully runs ubuntu/bionic. The only differences between the environments are the Ubuntu image and the IP address ranges.

This is the Bionic output:

vagrant@ubuntu-bionic:~$ kubectl get pods -n kube-system -o wide
NAME                               READY   STATUS    RESTARTS       AGE    IP              NODE       NOMINATED NODE   READINESS GATES
coredns-64897985d-bk74z            1/1     Running   2 (3d2h ago)   4d8h   10.244.0.6      machine1   <none>           <none>
coredns-64897985d-jhghl            1/1     Running   2 (3d2h ago)   4d8h   10.244.0.7      machine1   <none>           <none>
etcd-machine1                      1/1     Running   2 (3d2h ago)   4d8h   192.168.33.20   machine1   <none>           <none>
kube-apiserver-machine1            1/1     Running   2 (3d2h ago)   4d8h   192.168.33.20   machine1   <none>           <none>
kube-controller-manager-machine1   1/1     Running   2 (3d2h ago)   4d8h   192.168.33.20   machine1   <none>           <none>
kube-flannel-ds-7c6mh              1/1     Running   2 (3d2h ago)   4d8h   192.168.33.20   machine1   <none>           <none>
kube-proxy-r982l                   1/1     Running   2 (3d2h ago)   4d8h   192.168.33.20   machine1   <none>           <none>
kube-scheduler-machine1            1/1     Running   2 (3d2h ago)   4d8h   192.168.33.20   machine1   <none>           <none>

This is the impish output:

vagrant@ubuntu-impish:~$ kubectl get pods -n kube-system -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP          NODE       NOMINATED NODE   READINESS GATES
coredns-64897985d-bd47f            0/1     Pending   0          45m   <none>      <none>     <none>           <none>
coredns-64897985d-s9bct            0/1     Pending   0          45m   <none>      <none>     <none>           <none>
etcd-machine1                      1/1     Running   0          45m   10.0.2.15   machine1   <none>           <none>
kube-apiserver-machine1            1/1     Running   0          45m   10.0.2.15   machine1   <none>           <none>
kube-controller-manager-machine1   1/1     Running   0          45m   10.0.2.15   machine1   <none>           <none>
kube-proxy-2sknk                   1/1     Running   0          45m   10.0.2.15   machine1   <none>           <none>
kube-scheduler-machine1            1/1     Running   0          45m   10.0.2.15   machine1   <none>           <none>

Two potentially telling clues are:

Jan 21 09:22:21 ubuntu-impish kubelet[8744]: I0121 09:22:21.657003    8744 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Jan 21 09:22:24 ubuntu-impish kubelet[8744]: E0121 09:22:24.425056    8744 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

Attempting to install flannel fails with the following:

vagrant@ubuntu-impish:~$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
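One hedged aside on the last error: the localhost:8080 message usually just means kubectl found no kubeconfig, and running it under sudo drops the vagrant user's ~/.kube/config. A minimal sketch:

# run as the user that owns ~/.kube/config ...
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# ... or point root at the admin kubeconfig explicitly
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml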

Any Suggestions?

OpenVPN Client doesn't connect to my own server, but receives packets according to tcpdump

Posted: 23 Jan 2022 09:09 PM PST

My problem is that I can't connect to my OpenVPN server. I always get a "TLS key negotiation failed to occur within 60 seconds (check your network connectivity)" error. Running tcpdump on port 1194 on my server while trying to connect showed 4 packets from my PC.

My server.conf in /etc/openvpn/server:

# OpenVPN Port, Protocol, and the Tun
port 1194
proto udp
dev tun

#listen
local *my DNS*

# OpenVPN Server Certificate - CA, server key and certificate
ca /etc/openvpn/server/ca.crt
cert /etc/openvpn/server/*cert*.crt
key /etc/openvpn/server/*key*.key

#DH and CRL key
dh /etc/openvpn/server/dh.pem
crl-verify /etc/openvpn/server/crl.pem

# Network Configuration - Internal network
# Redirect all Connection through OpenVPN Server
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1"

# Using the DNS from https://dns.watch
push "dhcp-option DNS 84.200.69.80"
push "dhcp-option DNS 84.200.70.40"

#Enable multiple clients to connect with the same certificate key
duplicate-cn

# TLS Security
cipher AES-256-CBC
tls-version-min 1.0
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA256:TLS-DHE-RSA-WITH-AES-128-GCM-SHA256:TLS-DHE-RSA-WITH-AES-128-CBC-SHA256
auth SHA512
auth-nocache

# Other Configuration
keepalive 20 60
persist-key
persist-tun
compress lz4
daemon
user nobody
group nobody

# OpenVPN Log
log-append /var/log/openvpn.log
verb 4

my client.ovpn on my Windows client:

client
dev tun
proto udp

remote *my DNS* 1194

ca "c:\\Users\\*Username*\\Documents\\OpenVPNFiles\\Client1\\client\\ca.crt"
cert "c:\\Users\\*Username*\\Documents\\OpenVPNFiles\\Client1\\client\\*cert*.crt"
key "c:\\Users\\*Username*\\Documents\\OpenVPNFiles\\Client1\\client\\*key*.key"

cipher AES-256-CBC
auth SHA512
auth-nocache
tls-version-min 1.0
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA256:TLS-DHE-RSA-WITH-AES-128-GCM-SHA256:TLS-DHE-RSA-WITH-AES-128-CBC-SHA256
remote-cert-tls server

resolv-retry infinite
compress lz4
nobind
persist-key
persist-tun
mute-replay-warnings
verb 4
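A few hedged checks on the server side that may narrow this down (log path and port taken from the config above):

# Is the daemon actually listening on UDP 1194?
ss -ulpn | grep 1194

# Watch the server log while the client retries; with verb 4 a reachable
# server logs the incoming handshake attempts.
tail -f /var/log/openvpn.log

# Confirm the host firewall accepts UDP 1194 and that replies can leave.
iptables -L INPUT -n -v | grep 1194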

Any help is very appreciated.

Kerberos with Apache not working

Posted: 23 Jan 2022 11:05 PM PST

Hello everyone,

I'm currently trying to configure Kerberos on our Apache server and unfortunately I can't get any further. The website (TYPO3) on the Apache server is accessed internally and externally as sub.domain.com. The local domain is intern.local.

I created the keytab file like this:

ktpass -princ HTTP/sub.domain.com@INTERN.LOCAL -mapuser kerb@intern.local -pass P@55w0rd -crypto ALL -ptype KRB5_NT_PRINCIPAL -out C:\temp\kerbkey.keytab  

The krb5.conf file looks like this:

[libdefaults]
        default_realm = INTERN.LOCAL

        kdc_timesync = 1
        ccache_type = 4
        forwardable = true
        proxiable = true

[realms]
        INTERN.LOCAL = {
                kdc = dc01.intern.local
                admin_server = dc01.intern.local
                default_domain = intern.local
        }

[domain_realm]
        .sub.domain.com = INTERN.LOCAL
        sub.domain.com = INTERN.LOCAL
        intern.local = INTERN.LOCAL
        .intern.local = INTERN.LOCAL

the Apache vhost looks like this:

<VirtualHost *:443>
    ServerName sub.domain.com
    ServerAdmin it-administration@domain.com
    DocumentRoot /var/www/page

    <Directory /var/www/page>
        AllowOverride All
    </Directory>

    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/wildcart-zert.crt
    SSLCertificateKeyFile /etc/apache2/ssl/wildcart-key.key

    <IfModule !mod_auth_gssapi.c>
        LoadModule auth_kerb_module /usr/lib/apache2/modules/mod_auth_gssapi.so
    </IfModule>

    LimitRequestFieldSize 32768

    <Location "/">
        AuthName kerb@INTERN.LOCAL
        AuthType GSSAPI
        GssapiBasicAuth On
        GssapiCredStore keytab:/etc/apache2/krb5/kerbkey.keytab
        Require valid-user
    </Location>

    ErrorLog ${APACHE_LOG_DIR}/page-ssl_error.log
    CustomLog ${APACHE_LOG_DIR}/page-ssl_access.log combined
</VirtualHost>

The problem now is, if I activate the vhost config like this, then when I call up the page https://sub.domain.com, I always get a browser popup to enter the username and password. And no matter what I type here, I can't get to the web page and just get the error:

Unauthorized
This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.

Apache/2.4.41 (Ubuntu) Server at sub.domain.com Port 443

The Apache error log shows these entries:

[auth_gssapi:error] [pid 1632875] [client x.x.x.x:65394] GSS ERROR In Negotiate Auth: gss_accept_sec_context() failed: [An unsupported mechanism was requested (Unknown error)]  

freeradius and openldap : vlan attribution working with radtest but not with wpa_supplicant

Posted: 24 Jan 2022 12:18 AM PST

Both of my services, FreeRADIUS and OpenLDAP, are on the same server. The FreeRADIUS schema is loaded into OpenLDAP.

I configured the radiusProfileDN of a user to link to a group. In this group, I have radiusReplyAttribute set to return the VLAN information.

  • When I use the radtest command locally (or from a remote, already-authenticated client), I receive an Access-Accept packet (RADIUS protocol) containing the VLAN information. A Wireshark capture shows the VLAN information is in the packet.
  LDAP + Radius                      LDAP + Radius ----- Switch ----- Client      <--------                              <-----------------------------      -------->              or              ----------------------------->     *vlan info*                                       *vlan info*  
  • When I use the wpa_supplicant tool (PEAP-GTC), I authenticate successfully, but the client port is not added to the VLAN group. A Wireshark capture shows the Access-Accept packet exchanged between the switch and the RADIUS server doesn't have the VLAN information in it.
LDAP + Radius ----- Switch ----- Client    <------------------    <----------    ------------------>    ---------->      *no vlan info*      wpa_supplicant  

From the OpenLDAP log, the same steps happen for authentication with radtest and with wpa_supplicant:

  1. read access allowed for radiusReplyAttribute on 'mygroup'
  2. result was in cache (radiusReplyAttribute)
  3. send_search_entry exit
  4. send_ldap_result & send_ldap_response

In the LDAP server, I tried putting the VLAN information directly in the user entry, or in the ready-made attribute for the VLAN info, but I get the same result.

Do you know where my problem comes from? It seems related to wpa_supplicant using a different protocol than the radtest command and FreeRADIUS (maybe I am missing a line in the configuration)?

How to configure the AT&T (Arris) BGW-210 router for IP Passthrough using static IP(s) and pointing to UniFi Dream Machine Pro?

Posted: 23 Jan 2022 11:01 PM PST

We are setting up AT&T fiber internet with 5 usable static IPs and the Ubiquiti UniFi Dream Machine Pro (UDM-Pro). I would like to configure the BGW-210 to act as a bridge to the UDM-Pro.

I found this article on how to configure the BGW-210 in IP Passthrough mode (similar to bridge), but some of the details are a bit unclear and I need to adjust this setup process to use one or more of my static IP addresses on the UDM-Pro.

In one paragraph, the article said DHCP is not needed for Passthrough mode:

The DHCP Server option can be turned off if you're doing IP Passthrough, but you must leave it on if you are doing Default Server...

But later on it said that you are still using DHCP:

It is worth mentioning that this is still a DHCP address that your internal device is getting...

Which leaves some confusion on whether or not DHCP server should be configured or disabled.

Here are the things I'm fairly certain of:

  1. Set the "Public LAN Subnet" different than the UDM-Pro LAN subnet.
  2. Setup the IP addresses provided by AT&T under the "Public Subnet" section. I did this and we can connect to the Internet.
  3. I need to enable "Allocation Mode" to Passthrough.
  4. I need to set the "Passthrough Mode" to DHCPS-fixed.
  5. I need to enter the MAC address of the UDM-Pro in "Passthrough Fixed Mac Address".
  6. I need to setup the UDM-Pro to get its WAN address from a DHCP server.

What I'm unclear about is:

  1. Under "Public Subnet" section, do I leave "Public Subnet Mode" On and "Allow Inbound Traffic" Off?
  2. Do I leave "DHCP Server Enable" On and what IP address ranges should be there? The author of the post seems to mix the Default Server instructions with the Passthrough instructions.
  3. After putting the BGW-210 in Passthrough mode, do I still need to turn off packet filtering and firewall features or does Passthrough mode bypass these automatically?

Again, the goal is to "bridge" the AT&T router and have the UDM-Pro manage all routing and security.

Thank you.

How do I hibernate an Ubuntu server when network is not in use for three hours?

Posted: 23 Jan 2022 10:11 PM PST

I only really use the server a few hours a day a few days a week.

It is a backup server, it requests the backup data from the clients.

That part is taken care of, it wakes via a scheduled magic packet and does its thing. That is all good. I can wake it up to use it off schedule, that is also fine.

How do I just have it know that the network hasn't been used in a while and to put itself to sleep? The network traffic I would want to have record of are SSH, SFTP, rsync, and updates from Canonical. All other traffic is just chatter that I don't care about.

I'd like to maybe put the following pseudo code in as a cron script... that checks every 15 minutes or so. I am not worried about adding the cron functionality, I feel confident there.

if [ lastSignificantNetworkActivity > 3h ] { hibernate }  
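A minimal bash sketch of that pseudocode (the port list, state file path and three-hour threshold are assumptions; cron would run it every 15 minutes):

#!/usr/bin/env bash
# Hibernate when no "significant" connection (SSH/SFTP on 22, rsync on 873)
# has been seen for three hours.
STAMP=/var/run/last-network-activity
MAX_IDLE=$((3 * 60 * 60))

# Any established connection on the interesting ports counts as activity.
if ss -Htn state established '( sport = :22 or sport = :873 )' | grep -q .; then
    touch "$STAMP"
fi

[ -f "$STAMP" ] || touch "$STAMP"   # first run: start the idle clock now

idle=$(( $(date +%s) - $(stat -c %Y "$STAMP") ))
if [ "$idle" -ge "$MAX_IDLE" ]; then
    systemctl hibernate
fi

This only sees connections that are open at the moment the script runs, so short transfers between cron runs can be missed; logging accepted connections (e.g. via an iptables LOG rule) and stamping from that would be a more robust variant.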

I may have an X->Y problem. I just want to put my server into a low-power, save-to-disk state for the usual 18 hours it would otherwise be doing nothing. I think network activity is a good metric to test against, but I am open to more developed and robust solutions, or to inherent server properties that could be checked instead.

(I am not sure if the daily power cycling would be worse than the constant wear and tear from ZFS running the data integrity checks all day long... just not sure.)

Remote incremental backups with rsync?

Posted: 23 Jan 2022 10:02 PM PST

I'm trying to set up rsync to send incremental backups to a remote server. The first backup would go to a "backup" folder, then the next backup would send only the changes to a "backup.1" folder, and so on.

I managed to do this locally with the following command, which seemed to be working as described, creating a backup.1 folder on the second sync :

rsync -zaP folder_to_backup /backup    

I then set up an SSH key pair and managed to get rsync working remotely, so I'm now using:

rsync -zaP folder_to_backup myuser@myserver:/home/myuser/backup  

The sync does work and the files appear on the remote server. But once I run it a second time, the new files simply get added to the existing "backup" folder, rather than a backup.1 folder being created. I also tried other commands with the -b argument, such as:

rsync -zaPb folder_to_backup myuser@myserver:/home/myuser/backup
rsync -aPb --backup-dir=`date +%s` folder_to_backup myuser@myserver:/home/myuser/backup

But it acts the same in all cases. In the last case, the sync still goes to the "backup" folder; the backup-dir argument seems to be ignored completely.

What am I doing wrong?

Edit: Reading the comments, it's possible I got confused somehow when I said "which seemed to be working as described, creating a backup.1 folder on the second sync". That's how I remember it, but apparently it's not a feature of rsync?
Instead, I have now installed rsnapshot, which is great for incremental backups.
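For the record, the snapshot behaviour described (a full-looking tree per run, with unchanged files shared rather than re-copied) is usually done with rsync's --link-dest, which is essentially what rsnapshot automates. A minimal sketch with made-up paths:

#!/usr/bin/env bash
# Each run creates backup.<timestamp>; files unchanged since the previous
# snapshot are hard-linked against it instead of being stored again.
SRC="folder_to_backup"
HOST="myuser@myserver"
BASE="/home/myuser"
STAMP=$(date +%Y-%m-%d_%H%M%S)

rsync -zaP --link-dest="$BASE/backup.latest" "$SRC" "$HOST:$BASE/backup.$STAMP"
ssh "$HOST" "ln -sfn '$BASE/backup.$STAMP' '$BASE/backup.latest'"

On the first run the backup.latest link does not exist yet, so rsync simply does a full copy and warns about the missing link-dest directory.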

Icinga2 : Installation Error date.timezone is not defined

Posted: 23 Jan 2022 09:01 PM PST

Installation of Icinga2 Monitoring Tools on Ubuntu 14.04

I am not able to complete my installation. I am getting the error "The PHP config `date.timezone' is not defined."

I made the change in /etc/php5/apache2/php.ini:

date.timezone = Asia/Kolkata

After the change I also restarted my Apache web server: service apache2 restart

I am still facing the same issue when I launch http://localhost/icingaweb2/setup.

Generate graph in Grafana from API

Posted: 23 Jan 2022 09:01 PM PST

I'm looking for a way to generate an arbitrary graph from the Grafana API, ideally by just feeding it a query. After looking in the docs I don't see any way to do it directly, so the only way I can see would be to:

  • Generate a dashboard json with just the graph I want
  • Create the dashboard through the API by sending that json
  • Export that graph as jpg
  • Delete this dashboard

That seems a bit silly; isn't there a way to just generate a graph from a specific query directly? The goal here is to add a graph to our monitoring alerts; that way, if we get a high load alert on a server, for example, I could generate a query to get that server's load graph and include it in the alert e-mail. Nothing life-changing, but it would be a nice feature to have, I think.

How to add a datasource in wildfly swarm with .war packaging?

Posted: 24 Jan 2022 12:07 AM PST

I'm trying to run my web application using Wildfly Swarm and .war packaging.

How should I add my JDBC driver and datasource definition?

rsyslog not starting up: not found

Posted: 23 Jan 2022 08:30 PM PST

When starting rsyslog I get the following:

/etc/init.d/rsyslog: 1: /etc/default/rsyslog: imudp: not found
/etc/init.d/rsyslog: 2: /etc/default/rsyslog: 127.0.0.1: not found
/etc/init.d/rsyslog: 3: /etc/default/rsyslog: 514: not found

My /etc/default/rsyslog file:

$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
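For what it's worth, /etc/default/rsyslog is sourced by the init script as shell code (hence the "imudp: not found" style errors), while these lines are rsyslog directives; a sketch of where they would normally live:

# /etc/rsyslog.d/10-udp.conf (or directly in /etc/rsyslog.conf)
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514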

tail /dev/stderr from errors within supervisord php cli script

Posted: 23 Jan 2022 08:04 PM PST

To make this simple, I want to know if it's possible to access STDERR like a channel. I don't want the data to be logged to a file and then tail the file, because the amount of information I want to send would fill the system. I only care about the data when I want to tap into what would be sent to STDERR.

I thought it was possible to tail /dev/stderr in some way, but that doesn't work. The reason I can't use STDOUT is that the script runs under supervisor, and anything sent to STDOUT is logged into supervisor's program.log file, and I'm already outputting some information there.

Any ideas or thoughts on how to accomplish this would be REALLY helpful!!

Thanks

Which is more efficient with rsync: rsh/ssh or modules?

Posted: 24 Jan 2022 12:07 AM PST

Leaving out security concerns, which is the most bandwidth-efficient use of rsync for long-distance WAN transfers: rsh/ssh or modules?

I understand that modules assume no encryption by default, but everything I've read suggests that the CPU overhead for rsh/ssh is negligible on modern systems (e.g. multiple Xeons), and the pipe won't back up at <1 Gbps network speeds. I know that there is additional overhead with rsh having to start the remote shell and execute rsync, but given the amount of data, this seems negligible.

It would be a heck of a lot easier to just open up rsh and use rsync this way for this implementation, rather than set up a module for every server, but if the difference is measurable, I will of course do it with modules. Anyone have experience/opinions?

Linux IPSec between Amazon EC2 instances on same subnet

Posted: 23 Jan 2022 11:01 PM PST

I have a requirement to secure all communications between our Linux instances on Amazon EC2 - we need to treat the EC2 network as compromised and therefore want to protect the data that's being transferred within the EC2 subnet(s). The instances to secure will all be on the same subnet. I'm a Windows bod with limited Linux abilities, so am familiar with IPSec terminology and can find my way around Linux, but haven't got a clue when it comes to setting up Linux IPSec environments.

Can anyone throw me some information on setting up IPSec between all (Linux) hosts on a subnet, please? I can only find information that pertains to site-to-site connections or host-to-host connections, and nothing that covers all LAN communication. We're currently using Openswan for site-to-site VPNs, if that helps.

Updated with more information

This is an example config (very basic to connect between two hosts using a pre-shared key):

conn test
    type=tunnel
    auto=start
    authby=secret
    left=10.0.2.4
    right=10.0.2.5
    pfs=yes

If I now want to secure all traffic between 4 hosts for instance (or 8,10,100 etc), is there a way to make the left and right parameters more generic, so they mean 'encrypt traffic between all hosts' rather than having to explicitly specify a left and right host.

My goal would be to achieve a generic configuration that has no hardcoded host IP's (subnets would be OK), so that we could include the configuration in our EC2 image.

Thanks Mick

sshd[4344]: error: ssh_selinux_setup_pty: security_compute_relabel: Invalid argument?

Posted: 23 Jan 2022 09:32 PM PST

  • OpenSSH_5.8p1, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
  • selinux-policy-2.4.6-338.el5
  • pam-0.99.6.2-12.el5

SELinux is running in permissive mode:

# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   permissive
Mode from config file:          disabled
Policy version:                 21
Policy from config file:        targeted

Whenever I log in via ssh, /var/log/secure complains:

sshd[4957]: Accepted publickey for quanta from 192.168.3.40 port 55822 ssh2
sshd[4957]: pam_unix(sshd:session): session opened for user quanta by (uid=0)
sshd[4957]: error: ssh_selinux_setup_pty: security_compute_relabel: Invalid argument

Google points me to this thread on the Fedora forum:

Possibly the easiest way (though longest) is to create the file /.autorelabel and reboot.

but I wonder whether there is any other way to get rid of this without rebooting?

The security context of quanta:

$ id -Z  system_u:system_r:unconfined_t:SystemLow-SystemHigh  

The mapping between Linux user and user_u:

$ sudo /usr/sbin/semanage login -l

Login Name                SELinux User              MLS/MCS Range

__default__               user_u                    s0
root                      root                      SystemLow-SystemHigh

SELinux users:

$ sudo /usr/sbin/semanage user -l

                Labeling   MLS/       MLS/
SELinux User    Prefix     MCS Level  MCS Range                      SELinux Roles

root            user       s0         SystemLow-SystemHigh           system_r sysadm_r user_r
system_u        user       s0         SystemLow-SystemHigh           system_r
user_u          user       s0         SystemLow-SystemHigh           system_r sysadm_r user_r

The security context that sshd is running as:

$ ps -eZ | grep sshd
system_u:system_r:unconfined_t:SystemLow-SystemHigh 9509 ? 00:02:11 sshd
system_u:system_r:unconfined_t:SystemLow-SystemHigh 18319 ? 00:00:00 sshd
system_u:system_r:unconfined_t:SystemLow-SystemHigh 18321 ? 00:00:00 sshd

PAM configuration for sshd:

#%PAM-1.0
auth       include      system-auth
account    required     pam_nologin.so
#account        required    pam_access.so
account    include      system-auth
password   include      system-auth
session    optional     pam_keyinit.so force revoke
session    include      system-auth
session    required     pam_loginuid.so

What I already tried and didn't help:

# yum reinstall selinux-policy-targeted
# restorecon -R -v /etc/selinux/
# restorecon -R -v /home/

/var/log/audit/audit.log:

type=USER_AUTH msg=audit(1362372405.835:28361028): user pid=14332 uid=0 auid=503 subj=system_u:system_r:unconfined_t:s0-s0:c0.c1023 msg='op=pubkey_auth rport=50939 acct="quanta" exe="/usr/sbin/sshd" (hostname=?, addr=192.168.3.40, terminal=? res=success)'


UPDATE Mon Mar 4 21:20:49 ICT 2013

Reply to Michael Hampton:

Keep in mind that the reason for the reboot is that changing the security contexts affects running services, so at a minimum you should restart the affected services (i.e. restart sshd).

root     10244     1  0 17:18 ?        00:00:00 /usr/sbin/sshd  

/var/log/secure

Mar 4 17:18:48 hostname sshd[10308]: error: ssh_selinux_setup_pty: security_compute_relabel: Invalid argument


UPDATE Tue Mar 5 21:54:00 ICT 2013

Can you provide the output of ls -lZ /dev/pts /dev/ptmx?

# ls -lZ /dev/pts/ /dev/ptmx
crw-rw-rw-. 1 root tty  system_u:object_r:ptmx_t   5, 2 Mar  5 21:54 /dev/ptmx

/dev/pts/:
total 0
crw--w----. 1 quanta tty system_u:object_r:devpts_t 136, 0 Mar  5 21:54 0
crw--w----. 1 dieppv tty system_u:object_r:devpts_t 136, 1 Feb 26 17:08 1
crw--w----. 1 dieppv tty system_u:object_r:devpts_t 136, 2 Feb 25 17:53 2

UPDATE Wed Mar 6 10:57:11 ICT 2013

I compiled openssh-5.8p1, copied the configuration files (/etc/ssh/ and /etc/pam.d/sshd), and started it on a different port.

What surprised me is that /var/log/secure doesn't complain when I try to log in:

sshd[5061]: Server listening on 192.168.6.142 port 2317.
sshd[5139]: Accepted publickey for quanta from 192.168.3.40 port 54384 ssh2
sshd[5139]: pam_unix(sshd:session): session opened for user quanta by (uid=0)

The security context of the new /usr/local/sbin/sshd daemon is the same as the old (/usr/sbin/sshd):

ps -efZ | grep /usr/local/sbin/sshd

system_u:system_r:unconfined_t:SystemLow-SystemHigh root 5061 1  0 10:56 ? 00:00:00 /usr/local/sbin/sshd
system_u:system_r:unconfined_t:SystemLow-SystemHigh root 7850 3104  0 11:06 pts/3 00:00:00 grep /usr/local/sbin/sshd

Do you have any ideas?


UPDATE Wed Mar 6 15:36:39 ICT 2013

Make sure you compile it with --with-selinux, its not done by default.

OpenSSH has been configured with the following options:
                     User binaries: /usr/local//bin
                   System binaries: /usr/local//sbin
               Configuration files: /etc/openssh
                   Askpass program: /usr/local//libexec/ssh-askpass
                      Manual pages: /usr/local//share/man/manX
                          PID file: /var/run
  Privilege separation chroot path: /var/empty
            sshd default user PATH: /usr/bin:/bin:/usr/sbin:/sbin:/usr/local//bin
                    Manpage format: doc
                       PAM support: yes
                   OSF SIA support: no
                 KerberosV support: no
                   SELinux support: yes
                 Smartcard support:
                     S/KEY support: no
              TCP Wrappers support: no
              MD5 password support: yes
                   libedit support: no
  Solaris process contract support: no
           Solaris project support: no
       IP address in $DISPLAY hack: no
           Translate v4 in v6 hack: yes
                  BSD Auth support: no
              Random number source: OpenSSL internal ONLY

              Host: i686-pc-linux-gnu
          Compiler: gcc
    Compiler flags: -g -O2 -Wall -Wpointer-arith -Wuninitialized -Wsign-compare -Wformat-security -Wno-pointer-sign -fno-strict-aliasing -fno-builtin-memset -fstack-protector-all -std=gnu99
Preprocessor flags:
      Linker flags: -fstack-protector-all
         Libraries: -lresolv -lcrypto -ldl -lutil -lz -lnsl  -lcrypt
         +for sshd: -lpam -lselinux
          +for ssh: -lselinux

/var/log/secure when login to port 22 (/usr/sbin/sshd):

sshd[27339]: Accepted publickey for quanta from 192.168.3.40 port 50560 ssh2
sshd[27339]: pam_unix(sshd:session): session opened for user quanta by (uid=0)
sshd[27339]: error: ssh_selinux_setup_pty: security_compute_relabel: Invalid argument
sshd[28025]: Received disconnect from 192.168.3.40: 11: disconnected by user
sshd[28022]: pam_unix(sshd:session): session closed for user quanta

/var/log/secure when login to port 2317 (/usr/local/sbin/sshd):

sshd[27705]: Accepted publickey for quanta from 192.168.3.40 port 34175 ssh2
sshd[27705]: pam_unix(sshd:session): session opened for user quanta by (uid=0)
sshd[27707]: Received disconnect from 192.168.3.40: 11: disconnected by user
sshd[27705]: pam_unix(sshd:session): session closed for user quanta

And here are the logs to prove that it enters ssh_selinux_setup_pty:

sshd[4023]: debug1: SELinux support enabled
sshd[4023]: debug3: ssh_selinux_setup_pty: setting TTY context on /dev/pts/3
sshd[4023]: debug3: ssh_selinux_setup_pty: User context is user_u:system_r:unconfined_t, target tty is system_u:object_r:devpts_t
sshd[4023]: debug3: ssh_selinux_setup_pty: done

Is there a way to make encryption default per contact in Thunderbird?

Posted: 23 Jan 2022 11:32 PM PST

I administer several computers that have Thunderbird installed. I know Thunderbird has an option to require all email to be encrypted. However, I would like a way to allow unencrypted email normally, but require encrypted email to certain contacts.

For example, if I email ceo@mycompany.com, Thunderbird should require encryption, but if I email JohnDoe@othercompany.com Thunderbird would allow unencrypted emails.

Are there any settings or Add-ons that would give this functionality?

Thanks!

Internal Server Error (Timeout waiting for output from CGI)

Posted: 23 Jan 2022 10:02 PM PST

All,

I'll admit right away that I'm not very familiar with the server side of things, just FYI, so I'm not sure how to debug this error. I moved my website from a Windows platform to a Linux platform on a new VPS. I'm getting an "Internal Server Error" every once in a while and I'm not sure why. When I look at the server logs I see this:

"Timeout waiting for output from CGI script /var/www/cgi-bin/cgi_wrapper/cgi_wrapper"

I'm running WordPress for my sites. Can someone tell me how to debug this error? Sometimes the site works fine which is really strange. Any help would be great.

Forwarding linux terminal from serial port to TCP with socat

Posted: 23 Jan 2022 11:02 PM PST

I'm working on an embedded ARM platform running Slackware. I'm using a G24 Java modem which is configured to forward data between ports /dev/ttyS1 and /dev/ttyACM0, so anything that goes to either of these ports is then visible on the other. I want to set up a terminal on one of these ports, /dev/ttyS1, and forward the other port, /dev/ttyACM0, to a TCP port so it can be accessed from another machine via the LAN.

First of all, I configured /etc/inittab:

s2:12345:respawn:/sbin/agetty -L ttyS1 115200 vt100  

Then I'm trying to use socat with the following command:

socat -d -d -d TCP-l:2020,reuseaddr,fork /dev/ttyACM0,raw,nonblock,waitlock="/var/run/ttyACM0.lock",b115200,echo=1,icanon=1,crnl

Then I try to connect with telnet 192.168.1.222 2020 from the other machine. The result is not quite right: I see from the client side that the terminal is asking for a login, but then there is an immediate answer which I haven't typed (^M^M^M... etc.), the terminal answers that the login is incorrect, and then the same thing repeats again and again.

I know that ^M is the carriage return character, but I'm not quite sure how to fix that problem. I have tried different configurations of socat, but none of them worked correctly.

Trying to run an ASP.NET MVC application using Mono on Apache with FastCGI

Posted: 23 Jan 2022 08:04 PM PST

I have a hosting account with DreamHost and I would like to use the same account to run ASP.NET applications. I have an application deployed in a subdomain, a .htaccess with a handler like this:

# Define the FastCGI Mono launcher as an Apache handler and let
# it manage this web-application (its files and subdirectories)
SetHandler monoWrapper
Action monoWrapper /home/arienh4/<domain>/cgi-bin/mono.fcgi virtual

My mono.fcgi is set up as such:

#!/bin/sh
#umask 0077
exec >>/home/arienh4/tmp/mono-fcgi.log
exec 2>>/home/arienh4/tmp/mono-fcgi.err

echo $(date +"[%F %T]") Starting fastcgi-mono-server2

cd /
chmod 0700 /home/arienh4/tmp/mono-fcgi.sock
echo $$>/home/arienh4/tmp/mono-fcgi.pid
# stdin is the socket handle
export PATH="/home/arienh4/mono/bin:$PATH"
export LD_LIBRARY_PATH="/home/arienh4/mono/lib:$LD_LIBRARY_PATH"
export TMP="/home/arienh4/tmp"
export MONO_SHARED_DIR="/home/arienh4/tmp"
exec /home/arienh4/mono/bin/mono /home/arienh4/mono/lib/mono/2.0/fastcgi-mono-server2.exe \
/logfile=/home/arienh4/logs/fastcgi-mono-web.log /loglevels=All \
/applications=/:/home/arienh4/<domain>

I took this from the Mono site for CGI. I'm not sure if I'm doing it correctly though. This code is resulting in this error:

Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.  

I have no idea what's causing this. As far as I can see, Mono isn't even hit (no log files are created).
