Thursday, April 28, 2022

Recent Questions - Server Fault

Nginx chrome ERR_SSL_PROTOCOL_ERROR when not trying to use ssl

Posted: 28 Apr 2022 04:19 AM PDT

I've set up Nginx with both the default config and a config in sites-enabled, yet I get ERR_SSL_PROTOCOL_ERROR when trying to connect. I never configured anything to use SSL, and trying to connect over http results in a redirect to https. Firefox shows an SSL_ERROR_RX_RECORD_TOO_LONG error.

Maybe related: I'm trying to set up a Strapi (Node.js) server, which gives me the same error when launched on the server. I thought that Nginx would maybe fix it, but even in its basic configuration I get the same error.
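For comparison, a deliberately TLS-free server block looks like this (a minimal sketch, assuming the stock sites-enabled layout; server_name and root are placeholders):

server {
    listen 80 default_server;      # plain HTTP only: no "ssl", no 443 listener
    server_name example.com;       # placeholder
    root /var/www/html;

    location / {
        try_files $uri $uri/ =404; # note: no "return 301 https://..." anywhere
    }
}

If a config like this still produces SSL_ERROR_RX_RECORD_TOO_LONG, something else is likely speaking first, e.g. the browser forcing HTTPS via a cached HSTS entry, or another service bound to the port.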

If anyone could help with this, that would be greatly appreciated; feel free to ask for more information.

Generating plain-text report in OpenSCAP

Posted: 28 Apr 2022 03:59 AM PDT

I have set up OpenSCAP for compliance testing. Right now I am generating XML and HTML reports.

oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_custom --results-arf results.xml --report report.html ssg-centos7-custom.xml

I really need to generate text reports. The documentation says:

3.3. Generating Reports and Guides

Another useful feature of oscap is the ability to generate SCAP content in a human-readable format. It allows you to transform an XML file into HTML or plain-text format.

But then only gives examples of generating html reports. Does anybody know how to generate a text report?
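As far as I can tell, the documented generate subcommand only emits HTML from a results file, so one workaround is to pipe that output through an HTML-to-text converter (a sketch, assuming w3m or lynx is installed):

oscap xccdf generate report results.xml > report.html
w3m -dump report.html > report.txt     # or: lynx -dump report.html > report.txt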

Trying to deploy an electron app with Amazon ElasticBeanstalk

Posted: 28 Apr 2022 02:49 AM PDT

I'm deploying an Electron app whose core is a DLL, so I'm using Windows Server as the platform, but I got a red environment and this error:

[Instance: i-0586c920d7538ec6b ConfigSet: Infra-EmbeddedPreBuild, Hook-PostInit, Hook-PreAppDeploy, Infra-EmbeddedPostBuild, Hook-EnactAppDeploy, Hook-PostAppDeploy, Infra-WriteVersionOnStartup] Command failed on instance. Return code: 1 Output: null.

I don't know where I can find more information about this error, or whether there are some logs I just haven't found.

If it helps, I uploaded the files in a zip, including the DLL. I don't know if that's the right way to do it.
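In the meantime, pulling the full log bundle is usually the quickest way to see what the failed ConfigSet actually ran (a sketch, assuming the EB CLI is initialised for this environment; on Windows instances the cfn-init log is typically C:\cfn\log\cfn-init.log):

eb logs --all     # downloads the complete log bundle from the instance

Failing that, RDP to the instance and read the cfn-init log directly.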

nginx proxy ignore port

Posted: 28 Apr 2022 02:05 AM PDT

Nginx is installed in Docker, but I can't get it to talk to the port in the proxy_pass directive.

When I try to connect to 192.168.10.214, it gives a 502 Bad Gateway error.

docker container

61211c00149f   migasfree/server:4.19   "/bin/bash /docker-e…"   17 hours ago   Up 3 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp           192.168.10.214-server  

Nginx.conf

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
        # multi_accept on;
}

http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

site.conf

server {
    listen 80;
    server_name 192.168.10.214 localhost 127.0.0.1;
    client_max_body_size 1024M;

    # STATIC
    # ======
    location /static {
        alias /var/migasfree/static;
    }

    # SOURCES
    # =======
    #  PACKAGES deb
    location ~* /src/?(.*)deb$ {
        alias /var/migasfree/repo/$1deb;
        error_page 404 = @backend;
    }
    #  PACKAGES rpm
    location ~* /src/?(.*)rpm$ {
        alias /var/migasfree/repo/$1rpm;
        error_page 404 = @backend;
    }

    # DEPLOYMENTS
    # ===========
    location /public {
        alias /var/migasfree/repo;
        autoindex on;
    }
    location /public/errors/ {
        deny all;
        return 404;
    }

    # REPO (compatibility)
    # ====================
    location /repo {
        alias /var/migasfree/repo;
        autoindex on;
    }
    location /repo/errors/ {
        deny all;
        return 404;
    }

    # BACKEND
    # =======
    location / {
        try_files $uri @backend;
    }
    location @backend {
        add_header 'Access-Control-Allow-Origin' "";
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With';
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE_ADDR $remote_addr;
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
    }
}

netstat -nta

Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
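For what it's worth, that netstat output appears to show nothing listening on 8080, and a 502 from nginx usually means exactly that: the proxy_pass upstream refused the connection. Checking from inside the container narrows it down (a sketch, using the container ID above and assuming curl and ss exist in the image):

docker exec -it 61211c00149f curl -v http://127.0.0.1:8080/
docker exec -it 61211c00149f ss -lnt    # is the backend actually listening on 8080?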

Thanks in advance

HAProxy Timeouts and Streaming Backend - Hangs Forever

Posted: 28 Apr 2022 01:53 AM PDT

I have a backend behind HAProxy that streams a file to the client using a CGI script. I'm trying to set a timeout for my backend servers. All works fine, except for this timeout.

When I set the "timeout server" option in the HAProxy config to 1 second (just to test; the backend takes 2-3 seconds at a minimum) and make a request, after 1 second an entry for the backend request is written to the log file (before the request has completed), and from there it hangs forever. The client never receives a response: no error, nothing.

I've looked through all of the docs for the HAProxy config and I cannot find any timeout option that makes sense here. But this seems like very strange default behavior, to hang forever if the timeout is reached without a complete response (I've since realized the setting is supposed to be a timeout just for the headers, but I can't seem to find another timeout setting for the backend).

Does HAProxy just not support any kind of streaming HTTP response? I understand that we can't send a header, but can I set it to just send whatever data it has, so that the client picks up that it is malformed?

Here is a log line for one of these requests that "timed out":

Apr 28 09:32:51 print haproxy[29896]: 127.0.0.1:37662 [28/Apr/2022:09:32:47.317] http_front http_back/SERVERNAME 2/3007/23/46/4080 200 150 - - sD-- 1/1/0/0/+3 0/0 "POST / HTTP/1.1"

Full haproxy config:

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option redispatch
    retries 3
    timeout queue 15m # How long a client should wait for a server to become available.
    #maxconn 10000
    timeout client 15m
    timeout connect 1s
    timeout server 10s
    rate-limit sessions 10000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend http_front
    bind *:88
    stats uri /haproxy?stats
    default_backend http_back
    #maxconn 10000

backend http_back
    balance leastconn
    retry-on all-retryable-errors
    default-server maxconn 1 observe layer7 error-limit 1 on-error mark-down
    server NAME IP:PORT
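For a slow streaming backend, it is the inactivity timeouts that bite: "timeout server" covers the whole server-side data phase (it resets whenever data flows), not just the headers, and the sD termination flags in the log line above indicate exactly a server-side timeout during the data phase. A sketch of values that would at least let a 2-3 s CGI response finish (directive names are standard HAProxy; the values are assumptions):

defaults
    timeout connect 1s
    timeout server  30s    # server-side inactivity timeout; applies while the response streams
    timeout client  15m
    # timeout tunnel 1h    # only for upgraded/tunnelled connections, e.g. WebSocket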

Why is a windows domain user required and is there an alternative?

Posted: 28 Apr 2022 03:20 AM PDT

I have a script that executes a Java program. The script runs fine under my own user.

However, the script fails to run when started by a scheduler service, which runs on the same (EC2) machine under a system account with administrator privileges (SchedulerUser).

My supplier tells me I need to run the scheduler service using a domain user.

This would require me to set up Active Directory, which I would like to avoid if possible.

Why is a windows domain user required for this, and is there an alternative?

Restoring Win Server 2008 from a full backup?

Posted: 28 Apr 2022 02:58 AM PDT

I'm having trouble restoring Windows Server 2008; can anyone help me?

First of all, I've got a PowerEdge 2900 running RAID 1 with 3 disks. A power outage happened, and 2 disks are gone, slot 0 and slot 1 (both flash a yellow LED), and the controller shows a lost configuration: no VD configuration found.

Then I moved the surviving disk to slot 0, plugged a new disk into slot 1, cleared the foreign config, created a new RAID 1 (no init) and let the background init run. After 12 hours everything showed online, no errors, but the server stops at "strike F1 to continue, F2 to..."; it won't boot into Windows.

I have a full server backup image, so I removed both disks, installed 2 new disks, created a new RAID with fast init, and tried to do a full system restore using a boot USB created by Rufus (newest version). When it came to the selection "choose system image to restore", I got the message: "To restore this computer, Windows needs to format the drive that the Windows Recovery Environment is currently running on".

What I used to restore: the boot USB and an external HDD with the Windows image.

What i tried:

  • Using diskpart to format the drive, same error

  • Using the format utility from the installer itself, same

  • Installing a fresh Win 2008 and trying the restore from the boot USB again, same

  • Creating a new system recovery USB with the Windows 7 USB/DVD Download Tool, but got the error: no compatible USB to select.

I'm almost at a dead end; any advice would be very much appreciated, thank you.

I just need a way to get the PC on again (it is a DC), either by system restore or by rebuilding the RAID; I still have 1 working disk from the old RAID (slot 2, as described) intact.
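One thing that may be worth checking from the recovery command prompt is which disk WinRE believes it is running from, since the restore refuses to format that one (a sketch; diskpart is available in the recovery environment):

diskpart
list disk      # note which disk carries the USB/WinRE boot partition
list volume    # the restore target must not be the volume WinRE booted from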

Does anyone have a solution for accessing a subfolder of my application behind an nginx proxy?

Posted: 28 Apr 2022 12:52 AM PDT

I have two servers with different IP addresses:

  • Tomcat serving my webapp at https://app1.domain.com (CentOS 6)
  • Nginx acting as a WAF via https://nginx.domain.com (CentOS 7)

Nginx is running on port 443, and I'm using it to reverse proxy my webapp this way:

location /app1/ {
    rewrite ^/app1(.*) /$1 break;
    proxy_pass https://app1.domain.com/;
}

This way, I normally access my webapp via nginx through https://nginx.domain.com/app1/.

Secondly, in the ROOT folder of my webapp I installed the birt-viewer application, in the ROOT/birt-viewer folder. I normally access the birt-viewer application via the link https://app1.domain.com/birt-viewer.

However, I cannot access the birt application normally via the link https://nginx.domain.com/app1/birt-viewer. When I copy, for example, the link /birt-viewer/Task.jsp?__report=Recare.rpgn&sample=my+parameter&__sessionId=2026 and paste it after https://nginx.domain.com/app1 to obtain the final link https://nginx.domain.com/app1/birt-viewer/Task.jsp?__report=Recare.rpgn&sample=my+parameter&__sessionId=2026, I do reach the birt-viewer application, but I lose settings such as cookies and sessions.

You can see that to access my webapp via nginx I have to build these links manually; the disadvantage is the loss of cookies, sessions and other parameters. Access should work automatically, without problems.

This is a part of my nginx config:

server_name nginx.domain.com;

location /app1/ {
    rewrite ^/app1(.*) /$1 break;
    proxy_pass https://app1.domain.com/;
}

location /app1/folder1/ {
    rewrite ^/app1/folder1(.*) /$1 break;
    proxy_pass https://app1.domain.com/folder1/;
}

I also realize that once my webapp is behind nginx, some URLs are not updated and still keep the old access path.

So my concern is to access the birt-viewer application automatically (not manually) through nginx via https://nginx.domain.com/app1/birt-viewer.

Does anyone have a solution for me?
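When an app is re-rooted under /app1/, the cookie paths and Location headers it emits still point at /, which would explain the lost sessions. nginx can rewrite both on the way back; a sketch using standard directives (the paths are assumptions based on the config above):

location /app1/ {
    rewrite ^/app1(.*) /$1 break;
    proxy_pass https://app1.domain.com/;
    proxy_redirect    https://app1.domain.com/  /app1/;   # fix Location: headers on redirects
    proxy_cookie_path /  /app1/;                          # keep session cookies valid under /app1/
}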

How to filter and deliver a message in Procmail

Posted: 28 Apr 2022 01:34 AM PDT

I rewrite the subject line for certain incoming mails depending on the To: field:

:0fhw
* ! ^TO_user@domain\.com\>
* ^TO_[^<>@ ]+@domain\.com\>
* ^Subject:\/.+
| /usr/local/bin/formail -I"Subject: [SPAM]$MATCH"

The above code (from my earlier question, procmail rewrite subject line if email recipient user fails match test) works perfectly: if I get mail for something OTHER THAN user@domain.com, the subject line is rewritten as [SPAM] (original subject).

But I would like to have multiple conditionals like this; the working block above will be the final one, but before it I'd like to rewrite the subject line if the To matches a different address.

So I added this block just above it:

:0fh
* ^TO_special@domain\.com
* ^Subject:\/.+
| /usr/local/bin/formail -I"Subject: [SPECIAL]$MATCH"
$DEFAULT

... and this works - emails sent to 'special@domain.com' get their subject line rewritten.

The problem is that Procmail doesn't stop: it moves on to the next block and rewrites the subject again. So emails sent to special@domain.com get their subject lines rewritten as:

[SPAM] [SPECIAL] Original subject line blah blah

Why is this? Why doesn't the $DEFAULT action at the end of the first block cause Procmail to halt processing for that piece of email?

How can I match the new block and stop processing that piece of mail and just be done with it?
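For reference, a filtering recipe (the f flag) is non-delivering, so procmail keeps going afterwards; a common pattern is to nest the filter inside a block and follow it with an explicit delivering recipe (a sketch, untested against this exact setup):

:0
* ^TO_special@domain\.com
{
    # rewrite the subject (filter, non-delivering)
    :0fh
    * ^Subject:\/.+
    | /usr/local/bin/formail -I"Subject: [SPECIAL]$MATCH"

    # then deliver, which terminates processing here
    :0
    $DEFAULT
}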

Firewall / Ip rule issues between two hosts via vSwitch

Posted: 27 Apr 2022 11:16 PM PDT

I have two servers in play here: one is a QEMU VM host, the other a storage box of sorts.

They are Hetzner machines, and I have them connected via a vSwitch.

Server1 vSwitch interface:

3: local@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 10:7b:44:b1:5b:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global local
       valid_lft forever preferred_lft forever

Server1 (VM host) ip route:

default via <redacted-public-ip> dev eth0 proto static metric 100
<redacted-public-ip> dev eth0 proto static scope link metric 100
192.168.10.0/24 dev virbr0 proto kernel scope link src 192.168.10.254 metric 425  <-- virbr0 network
192.168.10.253 via 192.168.100.2 dev local  <-- srv02 IP to fit in virbr0 net space

Server2 vSwitch interface:

3: local@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default qlen 1000
    link/ether 08:60:6e:44:d6:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global local
       valid_lft forever preferred_lft forever
    inet 192.168.10.253/24 brd 192.168.10.255 scope global local
       valid_lft forever preferred_lft forever

Server2 ip route:

default via <redacted-public-ip> dev eth0 proto static metric 100
<redacted-public-ip> dev eth0 proto static scope link metric 100
192.168.10.0/24 dev local proto kernel scope link src 192.168.10.253  <-- to access virbr0 via vSwitch

I have the routes set up correctly, I guess, since everything works fine with the firewalld service off.

However, if I turn it on, the issues start.

These are the firewall zones on Server1 (the one where disabling firewalld makes everything work):

libvirt (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: virbr0
  sources:
  services: dhcp dhcpv6 dns ssh tftp
  ports:
  protocols: icmp ipv6-icmp
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
        rule priority="32767" reject

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0 local
  sources:
  services: cockpit dhcpv6-client ssh
  ports:
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

I have tried turning on masquerade on each of those zones, and on both at the same time, to no avail. I am "testing" this with a simple ping from Server2 toward one of the VMs on virbr0.
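One quick experiment that would confirm zoning as the culprit is to move the vSwitch interface (and its subnet) into the trusted zone, which accepts everything (standard firewall-cmd invocations; interface and subnet match the output above):

firewall-cmd --permanent --zone=trusted --add-interface=local
firewall-cmd --permanent --zone=trusted --add-source=192.168.100.0/24
firewall-cmd --reload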

Is there anything obvious that I am missing here?

Thank you all in advance.

RD Gateway and Web Application Proxy and AD FS

Posted: 28 Apr 2022 03:04 AM PDT

I am trying to deploy an RD Gateway in combination with WAP (Web Application Proxy) and AD FS pre-authentication, as described here.

For a "proof of concept", I've decided to deploy all RDS roles to one server. Simplified, my environment now looks something like this:

setup overview

Where the server labeled "RDS" contains these roles:

  • RD Web Access
  • RD Gateway
  • RD Licensing
  • RD Connection Broker
  • RD Virtualization Host

On the AD FS farm, I configured the following relying party trust, which only has the identifier set:

relying party trust

And on the WAP, the published application looks like this:

wap application configuration

Now, internally everything works. A client in DEVPROD can access RD Web and connect to the VDI resources.
On the WAP, everything works too: from any server of the farm, I can access RD Web and connect to the VDI resources.
From outside, I can access RD Web, but connections to the RD Gateway fail with this error message:
error message
On some clients, I also get:

Your computer can't connect to the remote computer because the Remote Desktop Gateway server is temporarily unavailable.

What I've tried/checked

  • All certs used are trusted and RD Web uses the correct one
  • IIS does not have unused bindings
  • Using Windows authentication for IIS
  • Setting pre-authentication to required in the custom RDP properties of the collection
  • Setting DefaultTSGateway and radcmserver in the IIS application settings

Where would you start diagnosing this issue?
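As a starting point for comparing what the WAP publishes with what the RDP clients request, these cmdlets dump the relevant state (standard WAP/AD FS PowerShell; which properties matter here is an assumption):

# On the WAP server: inspect the published RD Gateway application
Get-WebApplicationProxyApplication | Format-List Name,ExternalUrl,BackendServerUrl,ADFSRelyingPartyName

# On the AD FS server: confirm the relying party trust used for pre-authentication
Get-AdfsRelyingPartyTrust | Format-List Name,Identifier,Enabled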

Stretch Cluster File Server with Storage Replica - Partnership not reversing on failure simulation

Posted: 28 Apr 2022 01:52 AM PDT

First off, sorry if this is the wrong place to post this.

I have a Storage Replica setup running in an MSCS stretch cluster configuration: two sets of two VMs, with each set running on a separate ESXi host, acting as a file server. Each set has two VHDX disks attached on a virtual bus-sharing SCSI controller. Each replica source and destination disk has a drive letter identical to its respective partner's across all nodes.

I configured the setup to the letter of Microsoft's guide here: https://docs.microsoft.com/en-us/windows-server/storage/storage-replica/stretch-cluster-replication-using-shared-storage

For the most part, everything is running as it should, except for two things: when I drain roles from, or cut power to, the first set of cluster nodes, the disk attached to them transfers ownership to one of the nodes in the other set, and the disk being replicated to stays offline.

From my understanding, what should have happened when the first two nodes were taken out (or, more importantly, when the original source disk was taken offline to simulate failure) is the Storage Replica partnership reversing automatically, with the original replica destination disk coming online and acting as the original disk in the file server role did, granting access to all file shares to clients as if nothing happened.

Instead, to achieve that functionality, I have to take the original source disk offline to simulate failure, then right-click and remove the replica partnership, remove the disk from the file server role, manually add the destination disk to the file server role, and lastly bring the role back online. Then I additionally have to set up a new Storage Replica partnership, with the now-active disk replicating to the one where failure was simulated, as the new status quo.

All of that takes less than a minute to go through, but it still requires manual work instead of the automatic failover I understood the system to provide.

My question is, did I misinterpret how the system is supposed to work and that this is just how things are for the failover scenario described above? Or do you guys think there's a configuration error somewhere along the line?

As a side note, when I try to manually reverse the partnership (with all nodes and disks up) using the Set-SRPartnership PowerShell cmdlet, the source and destination disks stay the same after the cmdlet runs its course.
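For reference, the documented form of that cmdlet names the new source explicitly (cmdlet and parameters are from the Storage Replica module; the computer and replication group names below are placeholders):

Set-SRPartnership -NewSourceComputerName "SR-SRV03" -SourceRGName "RG02" -DestinationComputerName "SR-SRV01" -DestinationRGName "RG01"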

Here's the schematic I made while drafting the system, to help clarify the setup: https://i.imgur.com/5STByzZ.png

Any and all input is greatly appreciated, even pointers to a better place to post this question to :)

Please be gentle since I'm just a student and this is my first real project assignment at the company I work at, though I can't ask any other colleagues in IT since none of them have any experience with clustering.

minikube on macos with m1 darwin without docker

Posted: 28 Apr 2022 01:02 AM PDT

OK, in my search for a development environment without Docker Desktop, I am exploring minikube.

The issue is that minikube (at the time of writing) cannot run on macOS with the M1 chip, because hyperkit is not yet supported on the darwin architecture.

I also tried to use the (as of the time of writing) experimental podman driver, without success.

Has anybody been successful at running minikube on macOS (Monterey) with an M1 chip, without Docker Desktop?
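For anyone retracing the podman route, the sequence would look roughly like this (a sketch; the flags are standard minikube/podman options, but success on M1 was hit-and-miss at the time of writing):

brew install podman
podman machine init --cpus 2 --memory 4096
podman machine start
minikube start --driver=podman --container-runtime=cri-o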

Routing Internet Traffic through VPN tunnel

Posted: 28 Apr 2022 01:02 AM PDT

I am relatively new to the topic, so please do not mind if I ask simple (or stupid) questions. I own a Raspberry Pi 3B and have installed and configured an OpenVPN server on it, following this OpenVPN community guide: https://openvpn.net/community-resources/how-to/

I am using a Windows machine to connect to this server, which works perfectly fine. I tried to configure the server so that my IPv4 internet traffic is routed through the tunnel. The problem is that while the connection to the VPN server is established, IPv4 websites do not load at all. Furthermore, IPv6 traffic still slips through, so IPv6 websites load as usual. Please find the server config, client config, iptables rulesets and IP routing table attached. In addition, I configured the NAT according to the community guide with the command

iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE  

So the problem in the end is the correct routing. Thank you all in advance for your help!

Cheers, Patrick

P.S.: 192.168.2.1 is the IP of the WLAN router my Pi is connected to via Ethernet.

Server config

port 1194
proto udp
dev tun
ca /etc/openvpn/server/ca.crt
cert /etc/openvpn/server/server.crt
key /etc/openvpn/server/server.key
dh /etc/openvpn/server/dh.pem
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist /var/log/openvpn/ipp.txt
keepalive 10 120
tls-auth /etc/openvpn/server/ta.key 0
cipher AES-256-CBC
user nobody
group nogroup
persist-key
persist-tun
status /var/log/openvpn/openvpn-status.log
verb 3
push "redirect-gateway local def1"
push "dhcp-options DNS 10.8.0.1"

Client config

client
dev tun
proto udp
remote 192.168.2.129 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
remote-cert-tls server
tls-auth ta.key 1
cipher AES-256-CBC
verb 3
redirect-gateway local def1

IPv4 rules

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -s 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable
-A INPUT -p icmp -m state --state NEW -m icmp --icmp-type 8 -j ACCEPT
-A INPUT -p icmp -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p tcp -m state --state NEW,ESTABLISHED -m tcp --dport 22 -j ACCEPT
-A INPUT -i eth0 -p udp -m state --state NEW,ESTABLISHED -m udp --dport 1194 -j ACCEPT
-A INPUT -i eth0 -p tcp -m state --state NEW,ESTABLISHED -m tcp --dport 1194 -j ACCEPT
-A INPUT -i eth0 -p udp -m state --state ESTABLISHED -m udp --sport 53 -j ACCEPT
-A INPUT -i eth0 -p tcp -m state --state ESTABLISHED -m tcp --sport 80 -j ACCEPT
-A INPUT -i eth0 -p tcp -m state --state ESTABLISHED -m tcp --sport 443 -j ACCEPT
-A INPUT -i tun0 -j ACCEPT
-A INPUT -m limit --limit 3/min -j LOG --log-prefix "iptables_INPUT_denied: "
-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -m limit --limit 3/min -j LOG --log-prefix "iptables_FORWARD_denied: "
-A FORWARD -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -p icmp -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m state --state ESTABLISHED -m tcp --sport 22 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m state --state ESTABLISHED -m udp --sport 1194 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m state --state ESTABLISHED -m tcp --sport 1194 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m state --state NEW,ESTABLISHED -m udp --dport 53 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m state --state NEW,ESTABLISHED -m tcp --dport 80 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m state --state NEW,ESTABLISHED -m tcp --dport 443 -j ACCEPT
-A OUTPUT -o tun0 -j ACCEPT
-A OUTPUT -m limit --limit 3/min -j LOG --log-prefix "iptables_OUTPUT_denied: "
-A OUTPUT -j REJECT --reject-with icmp-port-unreachable
COMMIT

IPv6 rules

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -j REJECT --reject-with icmp6-port-unreachable
-A FORWARD -j REJECT --reject-with icmp6-port-unreachable
-A OUTPUT -j REJECT --reject-with icmp6-port-unreachable
COMMIT

IP Routing table

Target        Router        Genmask          Flags   MSS Window  irtt  Iface
0.0.0.0       192.168.2.1   0.0.0.0          UG        0 0          0  eth0
10.8.0.0      10.8.0.2      255.255.255.0    UG        0 0          0  tun0
10.8.0.2      0.0.0.0       255.255.255.255  UH        0 0          0  tun0
192.168.2.0   0.0.0.0       255.255.255.0    U         0 0          0  eth0
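One thing that stands out in the IPv4 ruleset above: the FORWARD chain rejects everything, and forwarded packets are exactly what routed VPN clients generate. A sketch of the usual additions (inserted with -I so they land before the REJECT rule; interface names match the configs above):

sysctl -w net.ipv4.ip_forward=1
iptables -I FORWARD -i tun0 -o eth0 -j ACCEPT
iptables -I FORWARD -i eth0 -o tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT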

freeradius mac authentication error (mac address not found?)

Posted: 28 Apr 2022 01:28 AM PDT

So I set up a FreeRADIUS 3.0 server on Debian 9, following the official documentation here and here. I have an authorized_macs file with the addresses of my devices, and in the file /etc/freeradius/3.0/mods-enabled/files I indicated which file my MAC addresses are in:

files authorized_macs {
    # The default key attribute to use for matches.  The content
    # of this attribute is used to match the "name" of the
    # entry.
    key = "%{Calling-Station-ID}"

    usersfile = ${confdir}/authorized_macs

    #  If you want to use the old Cistron 'users' file
    #  with FreeRADIUS, you should change the next line
    #  to 'compat = cistron'.  You can the copy your 'users'
    #  file from Cistron.
    #compat = no
}

My WiFi access point sends the MAC addresses to the RADIUS server in the format 1A:2B:3C:4D:5E:6F, but to be sure that the problem is not coming from there, my authorized_macs file looks like this:

1A:2B:3C:4D:5E:6F
    Reply-Message = "Device with MAC Address %{Calling-Station-Id} authorized for network access"

1a:2b:3c:4d:5e:6f
    Reply-Message = "Device with MAC Address %{Calling-Station-Id} authorized for network access"

1A2B3C4D5E6F
    Reply-Message = "Device with MAC Address %{Calling-Station-Id} authorized for network access"

1a2b3c4d5e6f
    Reply-Message = "Device with MAC Address %{Calling-Station-Id} authorized for network access"

1A-2B-3C-4D-5E-6F
    Reply-Message = "Device with MAC Address %{Calling-Station-Id} authorized for network access"

1a-2b-3c-4d-5e-6f
    Reply-Message = "Device with MAC Address %{Calling-Station-Id} authorized for network access"

So when I start the freeradius server in debug mode (freeradius -X) and try to connect to the SSID with my device, an error occurs:

[...] -- line 777
(0) pap: WARNING: No "known good" password found for the user.  Not setting Auth-Type
(0) pap: WARNING: Authentication will fail unless a "known good" password is available
(0)     [pap] = noop
(0)   } # authorize = ok
(0) ERROR: No Auth-Type found: rejecting the user via Post-Auth-Type = Reject
(0) Failed to authenticate the user
(0) Using Post-Auth-Type Reject
[...] -- line 783

Full logs available here. For information, 10.42.0.7 is my freeradius server and 10.42.0.22 is my WiFi access point. The SSID is named "testtt".

TL;DR: The configuration is correct according to the official documentation. The WiFi access point and the FreeRADIUS server are connected to each other, but the RADIUS server seems not to know the addresses, even though they have been given to it...


EDIT

Here is the end of the file /etc/freeradius/3.0/sites-enabled/default:

server {
        authorize {
                preprocess

                # If cleaning up the Calling-Station-Id...
                rewrite_calling_station_id

                # Now check against the authorized_macs file
                authorized_macs

                if (!ok) {
                        # No match was found, so reject
                        reject
                }
                else {
                        # The MAC address was found, so update Auth-Type
                        # to accept this auth.
                        update control {
                                Auth-Type := Accept
                        }
                }
        }
}
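Side note: the (0) pap lines in the debug output come from whichever virtual server handled the request, so it is worth confirming that the access point's client definition actually routes to the server block containing the authorized_macs check (a sketch of standard clients.conf syntax; the virtual server name and secret are placeholders):

# /etc/freeradius/3.0/clients.conf
client wifi-ap {
    ipaddr         = 10.42.0.22
    secret         = <shared secret>
    virtual_server = mac-auth    # assumption: the server { } block above is named this
}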

how to require publickey and otp, or password and otp when logging in with ssh?

Posted: 28 Apr 2022 04:03 AM PDT

I'm trying to get SSH to work in a way where password auth can be skipped with a key, and in addition every login is followed up with TOTP using Google's libpam, on my new Debian 9 installation.

So far I've been able to get the first part working: if I provide a key, the server asks me for the OTP. But as it stands, I've had to comment out both @include common-auth and @include common-password to suppress the password prompt in /etc/pam.d/sshd.

It seems obvious, then, that if I put AuthenticationMethods publickey,keyboard-interactive:pam password,keyboard-interactive:pam in my sshd_config and try logging in without a key, it does not matter what password I provide, since the password-checking parts are commented out.

The logical way to solve this, as it would seem to a novice like me, would be to define different PAM methods or classes and then somehow reference those in my sshd_config, but I can't seem to find any information regarding such an operation.

Is it even possible to accomplish this particular combo?

edit 1:

Tinkering further with this, it really does not make as much sense as I initially thought. If I comment out both @include common-auth and @include common-password, I can get publickey,keyboard-interactive:pam to not ask for a password. If I now set AuthenticationMethods password for a specific user, that user is not able to log in at all, because every password is rejected, even the valid one. So logically it seems sshd's password auth method also uses the /etc/pam.d/sshd config. If I don't comment out those includes, keyboard-interactive:pam asks for password and verification code, but the password auth method still fails for any user that has OTP initialized (and would fail for all of them unless I give Google's libpam the nullok option). It seems like password is just a crippled version of keyboard-interactive:pam that can only prompt for one input, and thus always fails if there is more than one required input.

If I write my own pam.d service file, is there any way to make ssh use it instead of /etc/pam.d/sshd?

edit 2:

I'm starting to think that I can't do (password && otp) || (publickey && otp), because the public key is checked in a different place from the rest; so unless I can define which PAM config to use with AuthenticationMethods, or can somehow send parameters/arguments to the PAM module, knowing when to check both and when to check only the OTP seems impossible.
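For the record, the half that does work, key plus OTP, looks like this (standard Debian sshd_config and pam.d syntax; this shows only the working half, not the combined policy being asked about):

# /etc/ssh/sshd_config
UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam

# /etc/pam.d/sshd (with @include common-auth commented out)
auth required pam_google_authenticator.so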

pfSense: add multiple static IP/MAC bindings?

Posted: 28 Apr 2022 03:42 AM PDT

I have a pfSense router that handles some labs. It is configured such that DHCP only hands out IP addresses for machines listed in the static IP/MAC bindings list.

Whenever we upgrade a lab with new machines, I have to manually remove all of the old machines one-by-one, clicking the delete icon beside each entry. To make matters worse, I have to scroll down to the bottom of the page after each entry is removed!

Then I have to painstakingly add all of the new bindings, one-by-one, again, scrolling down to the bottom of the page after each addition.

If I have all of the MACs and IPs in a list, is there any way already built into pfSense to make all of these changes at once, without having to work with each record individually? Perhaps something like a multi-line text box that would let me dump CSV data in, which could be parsed to update all entries at once?
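There is no CSV import in the GUI as far as I know, but the static mappings live as plain XML in config.xml, and pfSense can back up and restore just the "DHCP Server" area under Diagnostics > Backup & Restore; generating the entries offline and restoring that section is one bulk route. A sketch of the entry shape (field names mirror what the GUI stores; values are placeholders):

<staticmap>
    <mac>00:11:22:33:44:55</mac>
    <ipaddr>192.168.1.50</ipaddr>
    <hostname>lab-pc-01</hostname>
    <descr>lab refresh</descr>
</staticmap>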

modsecurity doesn't log all response bodies

Posted: 28 Apr 2022 12:00 AM PDT

I'm trying to get the response body of every request (200 or 500, etc.).

But mod_security doesn't put an -E-- part (response body) into the audit entry for every request.

For example, for this request:

Request Headers:

POST /accounts/login/ HTTP/1.1
Host: localhost:2021
Connection: keep-alive
Content-Length: 65
Accept: application/json, text/javascript, */*; q=0.01
Origin: http://localhost
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Referer: http://localhost/
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,fa;q=0.6
Cookie: *********

Response Headers:

HTTP/1.1 200 OK
Date: Tue, 23 Aug 2016 09:59:30 GMT
Server: Apache/2.4.18 (Ubuntu)
Set-Cookie: ********
Content-Length: 12
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8

Form Data:

_method=POST&data[username]=username&data[password]=password  

This is what the audit log records:

--45fa656f-A--
[23/Aug/2016:14:29:30 +0430] V7wegn8AAQEAACNhh6IAAAAD 127.0.0.1 50392 127.0.0.1 80
--45fa656f-B--
POST /accounts/login/ HTTP/1.1
Host: localhost
Connection: keep-alive
Content-Length: 65
Accept: application/json, text/javascript, */*; q=0.01
Origin: http://localhost
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Referer: http://localhost/
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,fa;q=0.6
Cookie: *********

--45fa656f-C--
_method=POST&data[username]=username&data[password]=password
--45fa656f-F--
HTTP/1.1 200 OK
Set-Cookie: *******
Content-Length: 12
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8

--45fa656f-H--
Apache-Handler: application/x-httpd-php
Stopwatch: 1471946370098737 633846 (- - -)
Stopwatch2: 1471946370098737 633846; combined=416, p1=190, p2=217, p3=0, p4=0, p5=9, sr=0, sw=0, l=0, gc=0
Producer: ModSecurity for Apache/2.9.0 (http://www.modsecurity.org/).
Server: Apache/2.4.18 (Ubuntu)
Engine-Mode: "ENABLED"

--45fa656f-Z--

Here is config file:

# -- Rule engine initialization ----------------------------------------------

# Enable ModSecurity, attaching it to every transaction. Use detection
# only to start with, because that minimises the chances of post-installation
# disruption.
#
SecRuleEngine On


# -- Request body handling ---------------------------------------------------

# Allow ModSecurity to access request bodies. If you don't, ModSecurity
# won't be able to see any POST parameters, which opens a large security
# hole for attackers to exploit.
#
SecRequestBodyAccess On

# Enable XML request body parser.
# Initiate XML Processor in case of xml content-type
#
SecRule REQUEST_HEADERS:Content-Type "text/xml" \
     "id:'200000',phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML"

# Enable JSON request body parser.
# Initiate JSON Processor in case of JSON content-type; change accordingly
# if your application does not use 'application/json'
#
SecRule REQUEST_HEADERS:Content-Type "application/json" \
     "id:'200001',phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=JSON"

# Maximum request body size we will accept for buffering. If you support
# file uploads then the value given on the first line has to be as large
# as the largest file you are willing to accept. The second value refers
# to the size of data, with files excluded. You want to keep that value as
# low as practical.
#
SecRequestBodyLimit 13107200
SecRequestBodyNoFilesLimit 131072

# Store up to 128 KB of request body data in memory. When the multipart
# parser reachers this limit, it will start using your hard disk for
# storage. That is slow, but unavoidable.
#
SecRequestBodyInMemoryLimit 131072

# What do do if the request body size is above our configured limit.
# Keep in mind that this setting will automatically be set to ProcessPartial
# when SecRuleEngine is set to DetectionOnly mode in order to minimize
# disruptions when initially deploying ModSecurity.
#
SecRequestBodyLimitAction Reject

# Verify that we've correctly processed the request body.
# As a rule of thumb, when failing to process a request body
# you should reject the request (when deployed in blocking mode)
# or log a high-severity alert (when deployed in detection-only mode).
#
SecRule REQBODY_ERROR "!@eq 0" \
"id:'200002', phase:2,t:none,log,deny,status:400,msg:'Failed to parse request body.',logdata:'%{reqbody_error_msg}',severity:2"

# By default be strict with what we accept in the multipart/form-data
# request body. If the rule below proves to be too strict for your
# environment consider changing it to detection-only. You are encouraged
# _not_ to remove it altogether.
#
SecRule MULTIPART_STRICT_ERROR "!@eq 0" \
"id:'200003',phase:2,t:none,log,deny,status:400, \
msg:'Multipart request body failed strict validation: \
PE %{REQBODY_PROCESSOR_ERROR}, \
BQ %{MULTIPART_BOUNDARY_QUOTED}, \
BW %{MULTIPART_BOUNDARY_WHITESPACE}, \
DB %{MULTIPART_DATA_BEFORE}, \
DA %{MULTIPART_DATA_AFTER}, \
HF %{MULTIPART_HEADER_FOLDING}, \
LF %{MULTIPART_LF_LINE}, \
SM %{MULTIPART_MISSING_SEMICOLON}, \
IQ %{MULTIPART_INVALID_QUOTING}, \
IP %{MULTIPART_INVALID_PART}, \
IH %{MULTIPART_INVALID_HEADER_FOLDING}, \
FL %{MULTIPART_FILE_LIMIT_EXCEEDED}'"

# Did we see anything that might be a boundary?
#
SecRule MULTIPART_UNMATCHED_BOUNDARY "!@eq 0" \
"id:'200004',phase:2,t:none,log,deny,msg:'Multipart parser detected a possible unmatched boundary.'"

# PCRE Tuning
# We want to avoid a potential RegEx DoS condition
#
SecPcreMatchLimit 1000
SecPcreMatchLimitRecursion 1000

# Some internal errors will set flags in TX and we will need to look for these.
# All of these are prefixed with "MSC_".  The following flags currently exist:
#
# MSC_PCRE_LIMITS_EXCEEDED: PCRE match limits were exceeded.
#
SecRule TX:/^MSC_/ "!@streq 0" \
        "id:'200005',phase:2,t:none,deny,msg:'ModSecurity internal error flagged: %{MATCHED_VAR_NAME}'"


# -- Response body handling --------------------------------------------------

# Allow ModSecurity to access response bodies.
# You should have this directive enabled in order to identify errors
# and data leakage issues.
#
# Do keep in mind that enabling this directive does increases both
# memory consumption and response latency.
#
SecResponseBodyAccess On

# Which response MIME types do you want to inspect? You should adjust the
# configuration below to catch documents but avoid static files
# (e.g., images and archives).
#
SecResponseBodyMimeType text/plain text/html text/xml application/json

# Buffer response bodies of up to 512 KB in length.
SecResponseBodyLimit 524288

# What happens when we encounter a response body larger than the configured
# limit? By default, we process what we have and let the rest through.
# That's somewhat less secure, but does not break any legitimate pages.
#
SecResponseBodyLimitAction ProcessPartial


# -- Filesystem configuration ------------------------------------------------

# The location where ModSecurity stores temporary files (for example, when
# it needs to handle a file upload that is larger than the configured limit).
#
# This default setting is chosen due to all systems have /tmp available however,
# this is less than ideal. It is recommended that you specify a location that's private.
#
SecTmpDir /tmp/

# The location where ModSecurity will keep its persistent data.  This default setting
# is chosen due to all systems have /tmp available however, it
# too should be updated to a place that other users can't access.
#
SecDataDir /tmp/


# -- File uploads handling configuration -------------------------------------

# The location where ModSecurity stores intercepted uploaded files. This
# location must be private to ModSecurity. You don't want other users on
# the server to access the files, do you?
#
#SecUploadDir /opt/modsecurity/var/upload/

# By default, only keep the files that were determined to be unusual
# in some way (by an external inspection script). For this to work you
# will also need at least one file inspection rule.
#
#SecUploadKeepFiles RelevantOnly

# Uploaded files are by default created with permissions that do not allow
# any other user to access them. You may need to relax that if you want to
# interface ModSecurity to an external program (e.g., an anti-virus).
#
#SecUploadFileMode 0600


# -- Debug log configuration -------------------------------------------------

# The default debug log configuration is to duplicate the error, warning
# and notice messages from the error log.
#
SecDebugLog /tmp/modsecurity_debug.log
SecDebugLogLevel 9


# -- Audit log configuration -------------------------------------------------

# Log the transactions that are marked by a rule, as well as those that
# trigger a server error (determined by a 5xx or 4xx, excluding 404,
# level response status codes).
#
SecAuditEngine On
SecAuditLogRelevantStatus "^(?:5|4(?!04))"

# Log everything we know about a transaction.
SecAuditLogParts ABCIJDEFHZ

# Use a single file for logging. This is much easier to look at, but
# assumes that you will use the audit log only ocassionally.
#
SecAuditLogType Serial
SecAuditLog /var/log/apache2/modsec_audit.log

# Specify the path for concurrent audit logging.
#SecAuditLogStorageDir /opt/modsecurity/var/audit/


# -- Miscellaneous -----------------------------------------------------------

# Use the most commonly used application/x-www-form-urlencoded parameter
# separator. There's probably only one application somewhere that uses
# something else so don't expect to change this value.
#
SecArgumentSeparator &

# Settle on version 0 (zero) cookies, as that is what most applications
# use. Using an incorrect cookie version may open your installation to
# evasion attacks (against the rules that examine named cookies).
#
SecCookieFormat 0

# Specify your Unicode Code Point.
# This mapping is used by the t:urlDecodeUni transformation function
# to properly map encoded data to your language. Properly setting
# these directives helps to reduce false positives and negatives.
#
SecUnicodeMapFile unicode.mapping 20127

# Improve the quality of ModSecurity by sharing information about your
# current ModSecurity version and dependencies versions.
# The following information will be shared: ModSecurity version,
# Web Server version, APR version, PCRE version, Lua version, Libxml2
# version, Anonymous unique id for host.
SecStatusEngine On

What's wrong? Shouldn't it catch a response of type text/html; charset=UTF-8?
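Since SecAuditEngine On already logs every transaction, the debug log is the place to see why part E stays empty for some responses; at level 9 it records each buffering decision (a sketch; the path matches the config above, and the grep patterns are guesses at the relevant messages):

grep -in "mime" /tmp/modsecurity_debug.log | head
grep -in "response body" /tmp/modsecurity_debug.log | head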

Configuration to connect an Access Point into a pfSense Network

Posted: 28 Apr 2022 03:00 AM PDT

At the moment I have a wired network provided by my pfSense router. I bought an access point (TL-WA801ND) in order to have a wireless network within my wired network. On my host computer I am able to connect to the new wireless network; nevertheless, I am not able to connect to the internet. The configuration is this:

In my pfSense router (in Services > DHCP Server):

Subnet: 192.168.1.0
Subnet mask: 255.255.255.0
Available Range: 192.168.1.1 - 192.168.1.254
Range: 192.168.1.5 - 192.168.1.254
DNS servers: 208.67.222.222, 208.67.220.220
Gateway: 192.168.1.1

and the other options have default values.

In my TP-Link access point:

(LAN)
Type: Static IP
IP Address: 192.168.1.4
Subnet Mask: 255.255.255.0
Gateway: 192.168.1.1
Allow remote access: no

(DHCP Settings)
DHCP Server: Disable
Start IP Address: 192.168.1.5
End IP Address: 192.168.1.254
Address Lease Time: 1 Minute
Default Gateway: 192.168.1.1
Default Domain:
Primary DNS: 208.67.222.222
Secondary DNS: 208.67.220.220

I think this has to do with pfSense, since when I connect the AP to another home router I am able to access the internet. I am not sure how the configuration should look and why; maybe both the router and the access point must have the same values, but I don't know. Thanks in advance.

Netlogon - Domain Trust Secure Channel issues - Only on some DCs

Posted: 28 Apr 2022 02:03 AM PDT

We have a 2-domain environment. We were having issues with slow connections, authentication failures, and hung resources, only during OFF-PEAK hours when there were very few users logged on.

The issue occurred when a user from DOMAIN A accessed a resource located in DOMAIN B using NTLM authentication. There are no issues with users from DOMAIN A accessing resources in DOMAIN A, or with users from DOMAIN B accessing resources in DOMAIN B.

We were able to track the problem down to the secure channels that are used for netlogon traffic. When a resource server from domain B had a secure channel with one particular DC (I'll call it DC-B1), everything worked fine. We can follow the traffic chain from client(A) -> resource(B) -> DC-B1(B) -> DC-A1(A) (for authentication) and then back again. However, if the resource server in B had a secure channel with any of the other DCs in DOMAIN B, the authentication would hang and never complete.

So it looks like, with the exception of DC-B1, every DC in DOMAIN B is having trouble creating a domain trust secure channel with DOMAIN A. To test, we ran nltest /sc_verify:DOMAINA from each DC in DOMAIN B.

When run from DC-B1, the response was instantaneous. When run from any other DC in domain B, it hung for about 40 seconds before showing a success (it never showed an error, it just took a long time).

Any ideas on why some DCs would struggle to establish and use the domain trust secure channel while another DC in the same domain never has an issue?

For what it's worth, the DC that works is Server 2008 and the ones that don't are Server 2012 R2; however, the problem existed on some domain controllers before they were migrated to 2012 R2, we just didn't pinpoint the issue until after we were done migrating them.

Thanks for the help.

Edit: Additional Information...

Compared a weekend's worth of NetLogon.log files for each of the Domain Controllers...

Every

[LOGON] SamLogon: Transitive Network logon of DOMAINA\testuser Entered  

record in the DC-B1 log (this is the good DC) had a corresponding

[LOGON] SamLogon: Transitive Network logon of DOMAINA\testuser Returns 0x0  

however on the other DCs in Domain B each return had one of the following 3 errors:

[LOGON] ... DOMAINA\testuser ... Returns 0xC0020017
[LOGON] ... DOMAINA\testuser ... Returns 0xC0020050
[LOGON] ... DOMAINA\testuser ... Returns 0xC000005E

And here is how often each of the different errors occurred:

77% of errors were: 0xC0020017 RPC SERVER UNAVAILABLE
21% of errors were: 0xC0020050 RPC CALL CANCELED
 1% of errors were: 0xC000005E NO LOGON SERVERS AVAILABLE
 0% of returns were: 0x0 (no error)

We compared all the security settings between the DCs that do not work and the one that does, but couldn't find anything that would cause the RPC issues. Any suggestions on where we could look next? We are confused as to why the 2008 domain controller in "B" has no trouble talking to the 2012 DCs in "A", but the 2012 DCs in "B" cannot use pass-through authentication to "A".

Edit: Additional Requested Information...

Test run from DC-B2 & DC-B3 (same results; pass-through authentication originating here does not work):

C:\>nltest /dsgetdc:DOMAINA.local
           DC: \\DC-A3.DOMAINA.local
      Address: \\555.555.555.127
     Dom Guid: 9f3a0668-c245-4493-be03-0f7edf534d27
     Dom Name: DOMAINA.local
  Forest Name: DOMAINA.local
 Dc Site Name: Company
Our Site Name: Company
        Flags: GC DS LDAP KDC TIMESERV WRITABLE DNS_DC DNS_DOMAIN DNS_FOREST CLOSE_SITE FULL_SECRET WS DS_8 DS_9
The command completed successfully

Edit: Additional Information...

Results from PortQry from Domain B -> Domain A (GC DC)

TCP port 135  (epmap service):          LISTENING
TCP port 389  (ldap service):           LISTENING
UDP port 389  (unknown service):        LISTENING or FILTERED
TCP port 636  (ldaps service):          LISTENING
TCP port 3268 (msft-gc service):        FILTERED
TCP port 3269 (msft-gc-ssl service):    FILTERED
TCP port 53   (domain service):         NOT LISTENING
UDP port 53   (domain service):         NOT LISTENING
TCP port 88   (kerberos service):       LISTENING
UDP port 88   (kerberos service):       LISTENING or FILTERED
TCP port 445  (microsoft-ds service):   LISTENING
UDP port 137  (netbios-ns service):     LISTENING or FILTERED
UDP port 138  (netbios-dgm service):    LISTENING or FILTERED
TCP port 139  (netbios-ssn service):    LISTENING
TCP port 42   (nameserver service):     FILTERED
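Given that both Global Catalog ports show up as FILTERED above, and that the log summary maps the dominant 0xC0020017 error to RPC SERVER UNAVAILABLE, a direct check from the failing DCs seems worthwhile, along with the dynamic RPC range (TCP 49152-65535 by default on 2008 and later). A sketch using standard PowerShell on 2012 R2:

Test-NetConnection DC-A3.DOMAINA.local -Port 3268    # Global Catalog LDAP
Test-NetConnection DC-A3.DOMAINA.local -Port 3269    # Global Catalog LDAP over SSL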

kadmin interface not working - immediately closes connection

Posted: 28 Apr 2022 03:00 AM PDT

So far I've been doing most of the administration for Kerberos with kadmin.local; however, I'm trying to migrate over to using the remote kadmin, as it would be better practice and all.

What I'm seeing is this:

esr@cpt2:~$ kadmin -p 'esr/admin'
Authenticating as principal esr/admin with password.
Password for esr/admin@DOMAIN.EDU:
esr@cpt2:~$

i.e., the login happens perfectly, but the connection is immediately closed.

On the server side:

Jan 08 12:51:02 00-kdc krb5kdc[9729](info): AS_REQ (4 etypes {18 17 16 23}) X.X.X.X: NEEDED_PREAUTH: esr/admin@DOMAIN.EDU for kadmin/ldap-master.domain.edu@DOMAIN.EDU, Additional pre-authentication required
Jan 08 12:51:05 00-kdc krb5kdc[9729](info): AS_REQ (4 etypes {18 17 16 23}) X.X.X.X: ISSUE: authtime 1389207065, etypes {rep=18 tkt=18 ses=18}, esr/admin@DOMAIN.EDU for kadmin/00-kdc.domain.edu@DOMAIN.EDU

==> /var/log/krb5kdc/kadmin.log <==
Jan 08 12:51:05 00-kdc kadmind[9720](Error): TCP client X.X.X.X.41541 wants 2147484348 bytes, cap is 1048572
Jan 08 12:51:05 00-kdc kadmind[9720](info): closing down fd 333

The error "wants 2147484348 bytes, cap is 1048572" immediately jumped out at me, but it's proving incredibly tough to track down. I found http://krbdev.mit.edu/rt/Ticket/Display.html?id=3923, but that seems to have been resolved ages ago.

Additionally, I'm using

Package: krb5-admin-server
Version: 1.10+dfsg~beta1-2ubuntu0.3

Package: krb5-kdc
Version: 1.10+dfsg~beta1-2ubuntu0.3

Client connection trace:

esr$ KRB5_TRACE=/dev/stdout kadmin
Authenticating as principal esr/admin@DOMAIN.EDU with password.
[2913] 1389633823.366797: Initializing MEMORY:kadm5_0 with default princ esr/admin@DOMAIN.EDU
[2913] 1389633823.366900: Getting initial credentials for esr/admin@DOMAIN.EDU
[2913] 1389633823.367196: Setting initial creds service to kadmin/ldap-master.domain.edu@DOMAIN.EDU
[2913] 1389633823.367314: Sending request (199 bytes) to DOMAIN.EDU
[2913] 1389633823.367417: Resolving hostname ldap-master.domain.edu
[2913] 1389633823.367562: Sending initial UDP request to dgram X.X.X.X:88
[2913] 1389633823.371591: Received answer from dgram X.X.X.X:88
[2913] 1389633823.410550: Response was not from master KDC
[2913] 1389633823.410581: Received error from KDC: -1765328359/Additional pre-authentication required
[2913] 1389633823.410619: Processing preauth types: 136, 19, 2, 133
[2913] 1389633823.410636: Selected etype info: etype aes256-cts, salt "DOMAIN.EDUesradmin", params ""
[2913] 1389633823.410640: Received cookie: MIT
Password for esr/admin@DOMAIN.EDU:
[2913] 1389633826.379096: AS key obtained for encrypted timestamp: aes256-cts/4485
[2913] 1389633826.409058: Encrypted timestamp (for 1389633826.408987): plain <snip>
[2913] 1389633826.409100: Preauth module encrypted_timestamp (2) (flags=1) returned: 0/Success
[2913] 1389633826.409105: Produced preauth for next request: 133, 2
[2913] 1389633826.409123: Sending request (294 bytes) to DOMAIN.EDU
[2913] 1389633826.409142: Resolving hostname ldap-master.domain.edu
[2913] 1389633826.409203: Sending initial UDP request to dgram X.X.X.X:88
[2913] 1389633826.506049: Received answer from dgram X.X.X.X:88
[2913] 1389633826.550573: Response was not from master KDC
[2913] 1389633826.550610: Processing preauth types: 19
[2913] 1389633826.550618: Selected etype info: etype aes256-cts, salt "DOMAIN.EDUesradmin", params ""
[2913] 1389633826.550623: Produced preauth for next request: (empty)
[2913] 1389633826.550632: AS key determined by preauth: aes256-cts/4485
[2913] 1389633826.550688: Decrypted AS reply; session key is: aes256-cts/13A4
[2913] 1389633826.550706: FAST negotiation: available
[2913] 1389633826.550744: Initializing MEMORY:kadm5_0 with default princ esr/admin@DOMAIN.EDU
[2913] 1389633826.550753: Removing esr/admin@DOMAIN.EDU -> kadmin/ldap-master.domain.edu@DOMAIN.EDU from MEMORY:kadm5_0
[2913] 1389633826.550760: Storing esr/admin@DOMAIN.EDU -> kadmin/ldap-master.domain.edu@DOMAIN.EDU in MEMORY:kadm5_0
[2913] 1389633826.550770: Storing config in MEMORY:kadm5_0 for kadmin/ldap-master.domain.edu@DOMAIN.EDU: fast_avail: yes
[2913] 1389633826.550780: Removing esr/admin@DOMAIN.EDU -> krb5_ccache_conf_data/fast_avail/kadmin\/ldap-master.domain.edu\@DOMAIN.EDU@X-CACHECONF: from MEMORY:kadm5_0
[2913] 1389633826.550787: Storing esr/admin@DOMAIN.EDU -> krb5_ccache_conf_data/fast_avail/kadmin\/ldap-master.domain.edu\@DOMAIN.EDU@X-CACHECONF: in MEMORY:kadm5_0
[2913] 1389633826.575550: Getting credentials esr/admin@DOMAIN.EDU -> kadmin/ldap-master.domain.edu@DOMAIN.EDU using ccache MEMORY:kadm5_0
[2913] 1389633826.575589: Retrieving esr/admin@DOMAIN.EDU -> kadmin/ldap-master.domain.edu@DOMAIN.EDU from MEMORY:kadm5_0 with result: 0/Success
[2913] 1389633826.575641: Creating authenticator for esr/admin@DOMAIN.EDU -> kadmin/ldap-master.domain.edu@DOMAIN.EDU, seqnum 982754712, subkey aes256-cts/33D5, session key aes256-cts/13A4
[2913] 1389633826.578730: Getting credentials esr/admin@DOMAIN.EDU -> kadmin/ldap-master.domain.edu@DOMAIN.EDU using ccache MEMORY:kadm5_0
[2913] 1389633826.578775: Retrieving esr/admin@DOMAIN.EDU -> kadmin/ldap-master.domain.edu@DOMAIN.EDU from MEMORY:kadm5_0 with result: 0/Success
[2913] 1389633826.578816: Creating authenticator for esr/admin@DOMAIN.EDU -> kadmin/ldap-master.domain.edu@DOMAIN.EDU, seqnum 799315236, subkey aes256-cts/E55C, session key aes256-cts/13A4

Creating a fallback error page for nginx when root directory does not exist

Posted: 27 Apr 2022 11:05 PM PDT

I have set up a catch-all, any-domain config on my nginx server to reduce the work needed whenever I bring up a new site or domain. This config lets me simply create a folder in /usr/share/nginx/sites/ named after the domain/subdomain, and then it just works.™

server {
    # Catch all domains starting with only "www." and boot them to non "www." domain.
    listen 80;
    server_name ~^www\.(.*)$;
    return 301 $scheme://$1$request_uri;
}

server {
    # Catch all domains that do not start with "www."
    listen 80;
    server_name ~^(?!www\.).+;
    client_max_body_size 20M;

    # Send all requests to the appropriate host
    root /usr/share/nginx/sites/$host;

    index index.html index.htm index.php;
    location / {
        try_files $uri $uri/ =404;
    }

    recursive_error_pages on;
    error_page 400 /errorpages/error.php?e=400&u=$uri&h=$host&s=$scheme;
    error_page 401 /errorpages/error.php?e=401&u=$uri&h=$host&s=$scheme;
    error_page 403 /errorpages/error.php?e=403&u=$uri&h=$host&s=$scheme;
    error_page 404 /errorpages/error.php?e=404&u=$uri&h=$host&s=$scheme;
    error_page 418 /errorpages/error.php?e=418&u=$uri&h=$host&s=$scheme;
    error_page 500 /errorpages/error.php?e=500&u=$uri&h=$host&s=$scheme;
    error_page 501 /errorpages/error.php?e=501&u=$uri&h=$host&s=$scheme;
    error_page 503 /errorpages/error.php?e=503&u=$uri&h=$host&s=$scheme;
    error_page 504 /errorpages/error.php?e=504&u=$uri&h=$host&s=$scheme;

    location ~ \.(php|html) {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_intercept_errors on;
    }
}

However, there is one issue I'd like to resolve: when a request comes in for a domain that has no folder in the sites directory, nginx throws its internal 500 error page, because it cannot redirect to /errorpages/error.php, which doesn't exist under the missing root.

How can I create a fallback error page that will catch these failed requests?
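
One approach that seems workable is to test for the per-host directory with nginx's "if (!-d ...)" and rewrite to a static page served from a shared root that always exists. Below is a minimal sketch (not from the original post) of the two location blocks; /usr/share/nginx/fallback and nosite.html are hypothetical names. The "if" here contains only a rewrite, which is one of the documented safe uses of "if" in location context.

location / {
    # hypothetical guard: no folder for this $host -> divert to a
    # static page instead of the (equally missing) error.php
    if (!-d /usr/share/nginx/sites/$host) {
        rewrite ^ /nosite.html last;
    }
    try_files $uri $uri/ =404;
}

location = /nosite.html {
    # served from a shared directory that is guaranteed to exist
    root /usr/share/nginx/fallback;
}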

Nginx/Apache not accessible from outside, SSH is accessible. Firewall disabled

Posted: 28 Apr 2022 04:03 AM PDT

I have ports 37000-37100 redirected to my computer. SSH is accessible when listening on any of these ports (by default I'm using 37022, but tried on 37080 and it's OK).

I can only access Nginx via the local IP (example: http://192.168.49.198:37080), though. When I try to connect from outside (http://our_network's_ip:37080), the browser eventually times out. PLEASE READ THE UPDATE BELOW. I installed Apache just to make sure, and it's the same: stock install, only the ports are changed.

UFW is disabled.

I've done this a hundred times on various home networks and it always worked. This time it's an office network, and I'm not the person who configures the router. I'd say the problem is there, but then SSH is working...

Ubuntu 12.10.

Any ideas?

UPDATE: I actually CAN access my computer from outside networks; I just can't access it FROM my own network when I use the network's "external" IP.
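
This symptom (reachable from outside, unreachable from inside via the external address) usually points to a router that doesn't support NAT loopback (hairpin NAT), which many office routers disable or lack entirely. A quick way to confirm, sketched below with EXTERNAL_IP as a placeholder for the network's public address:

# from a machine inside the office LAN
curl -m 5 http://192.168.49.198:37080/   # local IP: should respond
curl -m 5 http://EXTERNAL_IP:37080/      # times out if the router
                                         # doesn't hairpin the connection

# from a host outside the network (a VPS, or a phone on mobile data)
curl -m 5 http://EXTERNAL_IP:37080/      # works, per the update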

Unable to connect to Areca ARC-6020

Posted: 27 Apr 2022 11:42 PM PDT

Several months ago we got an Areca ARC-6020 RAID unit, without any cables or instructions on how to set it up. The device is a real dinosaur, but we would like to get it working as a storage array for our team.

I tried to ping it over the network but had no success; I don't even know what IP address it has. I'm not much of a networking geek, so I would appreciate any instructions on how to coax this heap of iron into working. :)

The box hosting the ARC has only a LAN port and a strange connector with many contacts that looks like SCSI, only longer.
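
To find the unit's IP address (assuming it picked one up at all), a subnet sweep is the usual first step. A sketch, assuming a hypothetical LAN range of 192.168.1.0/24; adjust to your network:

sudo nmap -sn 192.168.1.0/24    # ping-sweep the subnet (older nmap: -sP)
arp -a                          # list the MAC/IP pairs the sweep revealed

When run as root, nmap also prints MAC vendor names, so an entry attributed to Areca Technology would likely be the unit.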

Setup ejabberd with SQL Server 2008

Posted: 28 Apr 2022 02:03 AM PDT

Here's what I have got so far.

  1. Windows 2008 Server 64 bit.
  2. Installed the latest version of ejabberd, ejabberd-2.1.8-windows-installer.exe.
  3. The Windows service starts up fine but seems ineffective; the start & stop scripts, however, do work. I am able to log in to the admin page, which so far doesn't seem that versatile.
  4. Opened up ports 5222, 5226 and 5280 for my workstation to talk to the server.
  5. I've got the Spark and Jabbear Windows clients to register, log in, and exchange instant messages with multiple accounts using the server.

After confirming that I've got the very basics working, I decided to use SQL Server 2008 as the database. The reason? Mainly, I am very comfortable with SQL Server: I can easily handle redundancy, failover, and data analysis with it, and I'm not sure ejabberd's built-in DB provides all that.

  1. Following the instructions from ejabberd's documentation, I set up a system DSN that points to another physical database. The DSN checks out fine. (Tried both Named Pipes and TCP/IP.)
  2. Modified ejabberd.cfg: commented out the {auth_method, internal} line and uncommented {auth_method, odbc}.
  3. Uncommented and modified {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}.
  4. After making these changes, I restarted. No errors are found in the log files.
  5. The jabber clients are no longer able to register new accounts. I'm not sure where to look for errors besides the /logs/ folder, as I'm new to all this.

I am basically stuck here on step 5. Has anyone got this setup to work recently? Some of the posts I've found around are years old and of no help. I can't be the only one setting up ejabberd with MS SQL. Any help would be appreciated!
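
For step 5, raising the log verbosity usually makes the ODBC failure visible in ejabberd.log. A minimal sketch of the relevant ejabberd.cfg fragments, using the 2.1.x Erlang-term syntax and the DSN string from the question:

%% raise verbosity so ODBC errors show up in the logs (5 = debug)
{loglevel, 5}.

%% authenticate against the ODBC DSN instead of the internal Mnesia DB
{auth_method, odbc}.
{odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}.

One common gotcha: the ejabberd database schema must already be loaded into the target SQL Server database; if the tables were never created, account registration is exactly the kind of operation that fails with no obvious error on the client side.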

How to view linux kernel logs live?

Posted: 28 Apr 2022 03:18 AM PDT

I have a kernel module that logs input from a sensor while I work with it. I want a command that outputs /var/log/messages (for example) but then waits for more logs to come; that is, something like dmesg, except that it stays running and keeps printing newly arriving log lines.
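
Several standard tools do exactly this; availability varies by distribution, so treat the sketch below as options rather than a single answer:

dmesg --follow             # or: dmesg -w; prints the ring buffer, then
                           # blocks and prints new kernel messages
                           # (needs a reasonably recent util-linux)
tail -f /var/log/messages  # follow the syslog file named in the question
journalctl -k -f           # systemd: kernel messages only, follow mode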

FileZilla Server Configuration Problems

Posted: 27 Apr 2022 11:05 PM PDT

I've set up FileZilla Server on a Windows 2008 machine, then created the user and password and added a shared folder, which I set as the home directory.

I then connect to the server from the client computer

Status:   Connecting to {IP}
Status:   Connection established, waiting for welcome message...
Response: 220-Welcome To {NAME} FTP
Response: 220 {DOMAIN}
Command:  USER {USER}
Response: 331 Password required for {USER}
Command:  PASS *********
Response: 230 Logged on
Status:   Connected
Status:   Retrieving directory listing...
Command:  PWD
Response: 257 "/" is current directory.
Command:  TYPE I
Response: 200 Type set to I
Command:  PASV
Response: 227 Entering Passive Mode ({}DATA)
Command:  MLSD

The connection works fine; however, no remote directory listing is ever retrieved (it just shows "/"), and uploading any file fails.

Any suggestions on how to debug this more?
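
For what it's worth, the transcript stalls right after PASV/MLSD, which is the classic signature of the passive-mode data connection being blocked by a firewall. A sketch, assuming a hypothetical passive port range of 50000-51000 configured in FileZilla Server's passive mode settings:

rem allow the passive data ports through Windows Firewall (elevated prompt)
netsh advfirewall firewall add rule name="FTP passive ports" dir=in action=allow protocol=TCP localport=50000-51000

If the directory listing then works but uploads still fail, check the user's write permission on the shared folder in FileZilla's user settings, plus the NTFS ACLs on the directory itself.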

Shorewall: drop all incoming traffic from one internet network for all local hosts except two

Posted: 28 Apr 2022 12:00 AM PDT

How can I block all incoming traffic from one internet network to the local network, except for two hosts?

DROP all inet:78.31.8.0/24 - -

The previous rule blocks all the incoming traffic from that network, but how can I allow an exception for two hosts?
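
Shorewall evaluates /etc/shorewall/rules top-down, so the usual pattern is to list the ACCEPT exceptions above the DROP. A sketch using the zone name from the question; "loc" as the local zone name and the two host IPs are hypothetical placeholders:

#ACTION   SOURCE               DEST
ACCEPT    inet:78.31.8.0/24    loc:192.168.1.10
ACCEPT    inet:78.31.8.0/24    loc:192.168.1.11
DROP      inet:78.31.8.0/24    loc

After editing, "shorewall check" validates the configuration and "shorewall restart" applies it.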
