Wednesday, December 29, 2021

Recent Questions - Server Fault

Docker containers no internet after adding a static IP, netplan (host)

Posted: 29 Dec 2021 05:57 AM PST

I am a newbie with netplan.
The context is that I want to add a failover IP (provided by OVH) to one of my servers to provide HA (this IP will be failed over, i.e. moved, between all of my servers; only one server holds it at a time).
I followed this guide to configure my failover IP on my servers: https://docs.ovh.com/us/en/public-cloud/configure_a_failover_ip/#ubuntu-2004_1 (my servers run Ubuntu)

The problem is that when I add this configuration to a server AND the failover IP is not held by that server, all Docker containers on that server lose internet access (but the host still has it...).
And the one server that holds the failover IP works perfectly (Docker has internet).

My server configuration now looks like this (with the IP configured):
/etc/netplan/50-cloud-init.yaml

# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        ens3:
            accept-ra: false
            addresses:
            - xxxx:xxxx:xxx:xxx::xxx/56 # maybe sensitive data
            - X.X.X.X/32 # MY FAILOVER IP
            dhcp4: true
            match:
                macaddress: xx:xx:xx:xx:xx:xx # MAC address blurred; not sure if it's sensitive
            mtu: 1500
            nameservers:
                addresses:
                - 213.186.33.99 # not sure what this is
                search: []
            routes:
            -   to: ::/0
                via: x:x:x:x::1 # maybe sensitive data
            set-name: ens3
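Not an answer, but a diagnostic sketch that might narrow this down: when the failover /32 is configured but the IP is actually routed to another machine, the host can end up preferring it as the source address for outbound (and masqueraded container) traffic, so replies never come back. The commands below check which source address the kernel picks and whether a container has raw connectivity; the 1.1.1.1 target is just an arbitrary public address:

ip route get 1.1.1.1                          # shows the source address chosen for outbound traffic
docker run --rm busybox ping -c 3 1.1.1.1     # raw container connectivity, bypassing DNS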

How to enable Windows Hello on stand-alone server

Posted: 29 Dec 2021 05:41 AM PST

Does anyone know how I can enable Windows Hello facial sign-on on a stand-alone Windows Server 2019 machine?

I am the administrator of this stand-alone server, have installed the Windows Biometric Framework, and have enabled various Windows Hello for Business group policy settings.

I have installed %windir%\system32\WinBioPlugIns\FaceDriver\HelloFace.inf.

These are my group policy settings:

  • Allow the use of biometrics: enabled
  • Allows users to log on using biometrics: enabled
  • Configure enhanced anti-spoofing: disabled
  • Use biometrics: enabled
  • Use Windows Hello for Business: enabled
  • Use Windows Hello for Business certificates as smart card certificates: disabled
  • Use certificate for on-premises authentication: enabled

This stand-alone system does not have any roles installed, besides Storage Services and Hyper-V. There is no Active Directory. I don't see my deployment scenario covered.

So far, I'm unable to get any face sign-on option to appear. The only other data point I have is the following.

I've tested the webcam on a stand-alone (not joined to any domain) laptop, and on that laptop, face sign-on works.

Does anyone know what I'm missing?

(Screenshot: Sign-on options)

Docker creates additional bridge in wrong IP range

Posted: 29 Dec 2021 05:39 AM PST

I followed the answers here and changed the daemon.json to:

{    "bip": "10.200.0.1/24",    "default-address-pools": [        { "base":"10.201.0.0/16","size":24 },        { "base":"10.202.0.0/16","size":24 }    ]  }    

as I wanted to change the IP range used by Docker networking. However, when I start Docker, a bridge belonging to the IP range I wanted to move away from is still created:

8: br-6c870af107ba: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:c0:ef:44:7e brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-6c870af107ba
       valid_lft forever preferred_lft forever

How can I prevent docker from creating this bridge interface?
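One thing that may be worth checking (a sketch; the network name "mynet" is hypothetical): bip and default-address-pools only affect networks created after the change, so a user-defined network created earlier keeps its old 172.18.0.0/16 subnet until it is removed and recreated:

docker network ls
docker network inspect mynet --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
docker network rm mynet     # recreate it afterwards so it picks up the new pools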

How to show a default page on a protected website running on IIS

Posted: 29 Dec 2021 04:57 AM PST

I can't figure out the correct syntax in my web.config to allow the general public to see index.aspx as a default page while any other page goes through the login page.

If I use www.mywebsite.com/index.aspx the page is displayed fine and I don't have to go through the login process. But if I try www.mywebsite.com, I am redirected to the login.aspx page. Is there a way to display index.aspx if no specific page is given?

 <!-- Allow the login page -->   <location path="Login.aspx">      <system.web>        <authorization>          <allow users="*" />        </authorization>      </system.web>    </location>   <!-- Allow the index.aspx page -->    <location path="Index.aspx">      <system.web>        <authorization>          <allow users="*" />        </authorization>      </system.web>    </location>  <!-- deny the root folder -->    <location path="">      <system.web>        <authorization>          <deny users="?" />        </authorization>      </system.web>    </location>     ....     <authentication mode="Forms">        <forms name="GI" loginUrl="/Login.aspx" defaultUrl="/default.aspx" protection="All" timeout="180" />      </authentication>    

tc-mirred, redirecting flow matches to a dummy interface

Posted: 29 Dec 2021 04:53 AM PST

I'm scaling a security inspection device. The rate-limiting step is a single-threaded process that analyses and extracts data from network flows.

As a short-term goal, I would like to scale this by load-balancing from the physical capture interface to internal dummy or tap interfaces, and then run multiple instances of the single-threaded process using those as input. Each instance requires complete network flows (i.e. matching tuple, TCP or UDP). I don't care about the traffic after it hits the dummy interfaces; it's dropped there, so I don't need to worry about return paths or anything related.

After some reading, I think that flow based hashing and tc-mirred redirect action might be the solution I am looking for.

So, I've got this far:

# Set up the interfaces
PHYS="eth1"
INT_COUNT=4  # derived from the number of CPU cores

# Create some dummy interfaces (note: {1..$INT_COUNT} does not expand in bash, so use seq)
for i in $(seq 1 "$INT_COUNT"); do
    ip link add "dummy$i" type dummy
    ip link set "dummy$i" up
done

# Create the ingress qdisc
tc qdisc add dev "$PHYS" ingress

# Filter based on flow tuple. Classes are created automatically due to divisor
tc filter add dev "$PHYS" parent ffff: handle 1 \
    flow hash keys src,dst,proto,proto-src,proto-dst \
    divisor "$INT_COUNT"

# Now somehow apply the redirect action to the created classes
# for each of the dummy interfaces
for i in $(seq 1 "$INT_COUNT"); do
    tc filter add dev "$PHYS" parent ffff: protocol ip u32 match u32 0 0 \
        action mirred egress redirect dev "dummy$i"
done

It's that last part I cannot seem to understand. The classes are created automatically by the divisor statement in the filtering; as divisor takes the modulo of the hash, I should get classes that I can reference.

How can I say "class "N", go to dummyN"?

I've done a lot of reading on this but I think there's some key part that I'm not understanding. I'm thinking I need flowid in the action but am not sure.
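For what it's worth, a quick way to confirm the flow filter is actually installed and to watch per-filter hit counters while traffic runs (interface name taken from the script above):

tc -s filter show dev eth1 parent ffff:
tc -s qdisc show dev eth1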

Any tips or suggestions on this would be greatly appreciated.

Ubuntu server as both a private NAT router and a public router

Posted: 29 Dec 2021 05:09 AM PST

I am attempting to do something I've never tried before. I have a Dell PowerEdge 540 that I've installed Ubuntu on. I got a Comcast Business line installed, and have the following physical network configuration:

Ubuntu Server 20.04.3 LTS

enp10s0f0:   (10gb SFP+ add-on card)
IPv4: 5*.2*.204.214
Subnet: 255.255.255.252
Gateway: 5*.2*.204.213

enp10s0f1:   (10gb SFP+ add-on card)
IPv4: 5*.2*.205.1
Subnet: 255.255.255.0

eno1:        (1gb copper ethernet)
IPv4: 10.0.0.0
Subnet: 255.0.0.0

This is the output of ip addr:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00      inet 127.0.0.1/8 scope host lo         valid_lft forever preferred_lft forever      inet6 ::1/128 scope host         valid_lft forever preferred_lft forever  2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000      link/ether 00:1e:c9:b5:b2:3c brd ff:ff:ff:ff:ff:ff      inet 10.0.0.1/8 brd 10.255.255.255 scope global eno1         valid_lft forever preferred_lft forever      inet6 ****::***:c9ff:feb5:b23c/64 scope link         valid_lft forever preferred_lft forever  3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000      link/ether 00:1e:c9:b5:b2:3e brd ff:ff:ff:ff:ff:ff  4: enp12s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000      link/ether 00:10:18:0d:a3:7e brd ff:ff:ff:ff:ff:ff  5: enp10s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000      link/ether f4:e9:d4:92:9d:d0 brd ff:ff:ff:ff:ff:ff      inet 5*.2*.204.214/30 brd 5*.2*.204.215 scope global enp10s0f0         valid_lft forever preferred_lft forever      inet6 fe80::f6e9:d4ff:fe92:9dd0/64 scope link         valid_lft forever preferred_lft forever  6: enp10s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000      link/ether f4:e9:d4:92:9d:d2 brd ff:ff:ff:ff:ff:ff      inet 5*.2*.205.1/24 brd 5*.2*.205.255 scope global enp10s0f1         valid_lft forever preferred_lft forever      inet6 ****::****:d4ff:fe92:9dd2/64 scope link         valid_lft forever preferred_lft forever  

I have the following ip routes configured:

default via 5*.2*.204.213 dev enp10s0f0 proto static
10.0.0.0/8 dev eno1 proto kernel scope link src 10.0.0.1
5*.2*.204.212/30 dev enp10s0f0 proto kernel scope link src 5*.2*.204.214
5*.2*.205.0/24 via 5*.2*.205.1 dev enp10s0f1

And the following saved in my /etc/rc.local file:

#!/bin/sh -e

iptables --flush
iptables --table nat --flush
iptables --delete-chain
iptables --table nat --delete-chain
iptables -F
iptables -X

###################### Enable NAT ############################
echo "1" > /proc/sys/net/ipv4/ip_forward

#################### Enable NAT for internal ###########################
iptables -t nat -A POSTROUTING -o enp10s0f0 -j MASQUERADE
#iptables -t nat -A POSTROUTING -o enp12s0f1 -j MASQUERADE

################### Enable Log ########################
#iptables -A OUTPUT -j LOG
#iptables -A INPUT -j LOG
#iptables -A FORWARD -j LOG

############# Allow by default #####################
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT

############################ Allow incoming connections ############################
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

################################## Allow outgoing connections ######################
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT

######################## Allow internal to external
# (note: the state match needs --state before the state list)
iptables -A FORWARD -i eno1 -m state --state ESTABLISHED,RELATED -j ACCEPT
#iptables -A FORWARD -i eno1 -o enp12s0f1 -m state --state ESTABLISHED,RELATED -j ACCEPT
###iptables -A FORWARD -i eno1 -o enp10s0f1 -j ACCEPT
iptables -A FORWARD -i eno1 -o enp10s0f1 -j ACCEPT
#iptables -A FORWARD -i eno1 -o enp12s0f0 -j ACCEPT
#iptables -A FORWARD -i enp10s0f1 -o enp10s0f0 -j ACCEPT
#iptables -A FORWARD -i enp10s0f0 -o enp12s0f1 -j ACCEPT
exit 0

A network diagram: Network diagram (external link, since I can't yet post images)

As you will probably notice, I've spent a lot of time reading online what the best way to approach this is, and much of this is commented out. The way I figure, I only need NAT on the eno1 interface. But, since it's linux, do I need iptables to do non-NAT routing? I tried disabling it, and the public network wouldn't work at all.
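As an aside, plain (non-NAT) forwarding needs no iptables rules at all as long as the FORWARD policy is ACCEPT and forwarding is enabled; a sketch of making the sysctl persistent instead of poking /proc from rc.local:

echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-router.conf
sysctl --system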

Even when I get one thing to work, it ends up breaking something else. It's been years since I took a Cisco class, and I'm not used to feeling so ignorant. I'm really hoping someone can help me figure out the best way to implement this, or possibly spot any screw-ups I've made that are causing the problem.

File Server Cluster with shared storage - options for storage

Posted: 29 Dec 2021 04:48 AM PST

We would like to build a two-node file server cluster with shared storage on VMs. The issue is that our VMware setup doesn't support disk sharing. The file server cluster must, of course, be available all the time.

What other options do we have? I am considering:

  • DFS - not a solution, as both nodes can write at the same time in case of failover (known as a split-brain issue)

  • Storage Replica (not sure here, but it sounds like an option?) - I don't think it can switch automatically once one site goes down

  • Storage Spaces - doesn't it require shared storage anyway?

I could use storage outside of VMware (passthrough disk storage attached directly to a virtual server). That means a separate LUN must be created just for those two servers in the cluster, so it sounds like a lot of work.

Deploying a JAVA-FX application

Posted: 29 Dec 2021 04:29 AM PST

This is probably a very rookie question, but I have never actually deployed any of the applications I have created, so please bear with me.

I have created a simple JavaFX application. How do I go about hosting the application so that any PC on my network can access it? I also need to be able to update the application so that all PCs on the network running it update to match. Note that I have access to spare PCs which (I assume) can be utilised as a server.

Thanks in advance

Block websites for my VPN users

Posted: 29 Dec 2021 04:08 AM PST

I have strongSwan running fine. I need to block some bad websites by domain so that they cannot be visited by VPN users. I tried many methods with no luck; for example, I redirected traffic from the VPN to a proxy server like Squid, but I discovered that traffic is forwarded to Squid by the website's IP rather than its domain name, so that technique didn't succeed.

Maybe this is not strongSwan's business, but any idea is welcome.
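Since the IP-based approach falls down, one domain-level alternative (an untested sketch, assuming the VPN clients are handed a DNS resolver you control, e.g. a local dnsmasq) is to blackhole the domains at the DNS layer; the domain names here are placeholders:

cat > /etc/dnsmasq.d/blocklist.conf <<'EOF'
address=/badsite.example/0.0.0.0
address=/another-bad.example/0.0.0.0
EOF
systemctl restart dnsmasq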

thanks in advance

Proftpd issue with filenames

Posted: 29 Dec 2021 04:08 AM PST

I'm setting up a ProFTPD service on a Debian 11 server. I want to be able to connect to the FTP with local users (for administrators) and AD users. I joined the Debian server to my AD using realmd and it's working fine for SSH connections. Note: the domain users' home directories are on an NFS share.

When I try to connect to the FTP with my FileZilla client and my local user, it works fine and the filenames are correct. But my issue is when I connect with my AD user: the filenames look like users;UNIX.mode=0666;UNIX.owner=978115167;UNIX.ownername=myuser; file.txt

I'm not even able to interact with these files (transfer, rename, delete) because I get a 550 error:

Status:     Deleting "/users;UNIX.mode=0666;UNIX.owner=978115167;UNIX.ownername=myuser; file.txt"
13:01:44    Command:    DELE users;UNIX.mode=0666;UNIX.owner=978115167;UNIX.ownername=myuser; file.txt
13:01:44    Response:   550 users;UNIX.mode=0666;UNIX.owner=978115167;UNIX.ownername=myuser; file.txt: Aucun fichier ou dossier de ce type

(The French response translates to "No such file or directory".) Same problem with WinSCP, but no problem with the PowerShell FTP commands, for example. I tried plain FTP (port 21) and implicit FTPS and got the same "error". However, there is no problem using ProFTPD's SFTP module (because it's SSH, I guess).

Here's my config file:

Include /etc/proftpd/modules.conf
ServerName  "ftp.domain.com"
UseIPv6 off
User    proftpd
Group   nogroup

Port    21

TransferLog /var/log/proftpd/xferlog
SystemLog   /var/log/proftpd/proftpd.log

#
ServerType  standalone
DeferWelcome    off

MultilineRFC2228    on
DefaultServer   on
ShowSymlinks    on

TimeoutNoTransfer   600
TimeoutStalled  600
TimeoutIdle 1200

DisplayLogin    welcome.msg
DisplayChdir    .message true
ListOptions     "-l"

DenyFilter  \*.*/
#

<Global>
    DefaultRoot ~
    Umask   000 000
    CreateHome  on 777
    PassivePorts    20000 20200
    AllowOverwrite  on

    ######
    #TLS#
    ######
    TLSEngine   off
    TLSRSACertificateFile   /etc/ssl/certs/domain.com.crt
    TLSRSACertificateKeyFile    /etc/ssl/private/domain.com.key
    TLSCACertificateFile    /etc/ssl/certs/ca.crt
    TLSOptions NoSessionReuseRequired
    TLSLog  /var/log/proftpd/tls.log
    TLSProtocol TLSv1.2
    TLSVerifyClient off
    TLSRequired on
</Global>

######
#FTPS#
######
<VirtualHost 0.0.0.0>
    Port    990
    TLSEngine   on
    TLSOptions  UseImplicitSSL
    ListOptions     "-l"
</VirtualHost>

######
#SFTP#
######
<IfModule mod_sftp.c>
    <VirtualHost 0.0.0.0>
        Port    2222
        SFTPEngine  on
        SFTPLog /var/log/proftpd/sftp.log
        SFTPCompression delayed
        SFTPAuthMethods password
        SFTPHostKey     /etc/proftpd/ssh_host_rsa_key
        SFTPHostKey     /etc/proftpd/ssh_host_ecdsa_key
        SFTPHostKey     /etc/proftpd/ssh_host_ed25519_key
    </VirtualHost>
</IfModule>

And in modules.conf, mod_tls and mod_sftp are activated.

I've done some research but didn't find a problem like mine. Before this, I tried to use mod_ldap to authenticate my AD users, but if I remember correctly it didn't work.
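For reference, those UNIX.mode=...;UNIX.owner=... strings look like MLSD "facts" leaking into the directory listing; one hedged thing to try is disabling fact advertisement (the FactsAdvertise directive belongs to mod_facts) so clients fall back to a plain LIST:

<IfModule mod_facts.c>
    FactsAdvertise off
</IfModule>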

Thank you for your help.

Do I have to enter the public IP into the HOME_NET variable?

Posted: 29 Dec 2021 05:14 AM PST

Do I have to enter the public IP of eth0 as HOME_NET in the suricata.yaml?

vars:
  # more specific is better for alert accuracy and performance
  address-groups:
    HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"

My understanding is that only private addresses belong in the HOME_NET variable.
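For illustration only, the address-group syntax does allow mixing a public address into the list if it turns out to belong there; the /32 below is a placeholder, not a recommendation:

HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,198.51.100.7/32]"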

Apache Reverse Proxy redirecting to server gives too many redirects error

Posted: 29 Dec 2021 05:30 AM PST

I am trying to configure a reverse proxy to my backend server.

This is my previous configuration which is working.

Define REMOTE_ADDR proxyserver.domain.com

<VirtualHost *:443>
    ProxyRequests Off
    KeepAlive On
    ProxyPreserveHost Off

    RewriteEngine On

    RewriteCond %{HTTP:Upgrade} =websocket
    RewriteRule /prefix/(.*) wss://${REMOTE_ADDR}/prefix/$1 [P,L]
    RewriteCond %{HTTP:Upgrade} !=websocket
    RewriteRule /prefix/(.*) https://${REMOTE_ADDR}/prefix/$1 [P,L]

    ProxyPass / https://${REMOTE_ADDR}/prefix/ connectiontimeout=5 timeout=300
    ProxyPassReverse / https://${REMOTE_ADDR}/prefix/
</VirtualHost>

This is my new configuration.

I want to proxy to: proxyserver.domain.com/prefix/sen/app/9d12ponf12-2awf2-wafa/sheet/219uaw9dw-waf2/state/analysis

Note: the remote address will always be the same. I have given an example, but this is what it looks like in reality as well.

I haven't included SSL information as that is not relevant to my problem.

Define REMOTE_ADDR proxyserver.domain.com

<VirtualHost *:443>
    ProxyRequests Off
    KeepAlive On
    ProxyPreserveHost Off

    RewriteEngine On

    RewriteCond %{HTTP:Upgrade} =websocket
    RewriteRule /prefix/sen/app/9d12ponf12-2awf2-wafa/sheet/219uaw9dw-waf2/state/analysis(.*) wss://${REMOTE_ADDR}/prefix/sen/app/9d12ponf12-2awf2-wafa/sheet/219uaw9dw-waf2/state/analysis$1 [P,L]
    RewriteCond %{HTTP:Upgrade} !=websocket
    RewriteRule /prefix/sen/app/9d12ponf12-2awf2-wafa/sheet/219uaw9dw-waf2/state/analysis(.*) https://${REMOTE_ADDR}/prefix/sen/app/9d12ponf12-2awf2-wafa/sheet/219uaw9dw-waf2/state/analysis$1 [P,L]

    ProxyPass / https://${REMOTE_ADDR}/prefix/sen/app/9d12ponf12-2awf2-wafa/sheet/219uaw9dw-waf2/state/analysis connectiontimeout=5 timeout=300

    ProxyPassReverse / https://${REMOTE_ADDR}/prefix/sen/app/9d12ponf12-2awf2-wafa/sheet/219uaw9dw-waf2/state/analysis
</VirtualHost>

This gives a 'Too many redirects' error.

I am now stuck and can't figure out how to set up rules for a URL like my REMOTE_ADDR.

Any help is appreciated.

Edit- Provided my previous working configuration and also updated my current configuration to show the relation between them.

I've written /analysis$1 and not /analysis/$1 because when I access my server directly, some dynamic strings are appended to it, e.g. /analysis?example
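A diagnostic sketch that may help locate the loop: follow the redirect chain by hand and watch the Location headers (hostname is the placeholder from above):

curl -skIL --max-redirs 5 https://proxyserver.domain.com/ | grep -iE '^(HTTP|location)'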

Active Directory migrations and profile security translation (something's going wrong)

Posted: 29 Dec 2021 03:10 AM PST

This is a general post not seeking a technical resolution to a precise problem. I just want to warn industry colleagues. My career focus has been on AD for 20 years. The precise niche I concentrate on is Migrations and Consolidation projects. I currently work at an organization where I'm migrating 4 domains into one larger one. We've had no end of issues. I've been dealing with a host of challenges for 6 straight months. I've never seen anything like this before.

It seems that in 2021, the tried and trusted (15-year-old) methods for migrating from one domain to another are failing at the user profile migration (security translation) stage. If you are familiar with tools such as ADMT or Quest Migration Manager for AD, you will know the security translation wizard/agent, whose job is to scour through each ACL on every file/folder to ensure that the TARGET domain security principal is added and given permissions identical to the SOURCE domain security principal. Well, it seems that in the latest Windows 10 release (and probably several before that), there are files/folders whose security the translation tool is simply not able to modify. These are mainly related to Office 365 Apps profile folders. The result is that your users end up with profiles that are either half-translated or completely corrupted. Office 365 apps do not launch correctly, meaning you have to reconfigure every single Office app for all impacted users. Something you want to avoid if you have thousands to migrate.

In addition to all of this, TPM (Trusted Platform Module), your on-prem identity and your cloud identity combine to create a security layer that cannot be security-translated by the traditional migration tools. Basically, they lock out any other user account from accessing your O365 apps profile data, even if that account has full rights to the profile\AppData folders.

It's not 100% consistent, but over 500 profile migrations I have seen it 75-80% of the time (it could be build/Office app specific). The only way out of this situation is to give users a brand new profile. So folks, next time you perform a domain migration with profile security translation and something goes wrong, it's not just you! Hundreds of people are reporting this issue with no clear direction from Microsoft. Quest are blaming "environmental" issues. I think Microsoft's new-age developers have lost all concept of domain migrations. They are building security models without any thought towards keeping the user profile "portable". A user profile has always been something you could assign to a new user account, but not anymore?

A point of note also is that MS ADMT does not officially support Windows 10 or Windows Server 2016/2019 for that matter.

App contains exposed Google Cloud Platform (GCP) API keys

Posted: 29 Dec 2021 03:02 AM PST

I am getting "Leaked GCP API Keys" error in my google play console. I have followed this article https://support.google.com/faqs/answer/9287711 . And my key is already restricted but still i am getting this error.

DKIM for subdomain in Cloudflare

Posted: 29 Dec 2021 05:43 AM PST

I have example.com configured in cPanel and I do not use it as a mail server. I have created a new subdomain in Cloudflare called mailer.example.com and installed Virtualmin on a new Ubuntu server with the hostname mailer.example.com.

I have setup MX record for mailer.example.com in Cloudflare. It is working fine.

Now I have generated DKIM for my host called mailer.example.com.

Its selector is 202111.

Now I have added TXT record in Cloudflare:

Name = 202111._domainkey.mailer  

and

Content = "<DKIM value given by Virtualmin>"  

But mail-tester.com and MxToolbox report that no DKIM record was found for my domain, and likewise in a DNS lookup I cannot see any TXT record for my domain mailer.example.com.
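A quick way to check what the outside world actually sees (selector and names taken from above; replace with your real values):

dig +short TXT 202111._domainkey.mailer.example.com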

I think I am missing something, but I have no idea what. Let me know if someone can help with this.

In rsync the "bwlimit" parameter is explicitly ignored, how to fix this

Posted: 29 Dec 2021 04:52 AM PST

I have rsyncd version 3.1.2 listening on port 873 and I want to limit the bandwidth with which my files are downloaded by a client connecting to me. I added a bwlimit both in the config file:

uid = nobody
gid = nogroup

[data]
path = /var/data/files/
comment = Repo
read only = yes
bwlimit = 10

and tried it in the daemon's systemd unit file:

[Unit]
Description=fast remote file copy program daemon
ConditionPathExists=/etc/rsyncd.conf

[Service]
ExecStart=/usr/bin/rsync --daemon --no-detach --bwlimit=10

[Install]
WantedBy=multi-user.target

In either case, when my client connects with rsync to download the file, the parameter gets ignored, per the logs:

Dec 29 08:27:12 pubserver systemd[1]: Started fast remote file copy program daemon.
Dec 29 08:27:12 pubserver rsyncd[32533]: rsyncd version 3.1.2 starting, listening on port 873
Dec 29 08:29:42 pubserver rsyncd[1534]: Unknown Parameter encountered: "bwlimit"
Dec 29 08:29:42 pubserver rsyncd[1534]: IGNORING unknown parameter "bwlimit"
Dec 29 08:29:42 pubserver rsyncd[1534]: connect from workspace (10.155.120.1)
Dec 29 08:29:42 pubserver rsyncd[1534]: rsync on repo/ from workspace (10.155.120.1)
Dec 29 08:29:42 pubserver rsyncd[1534]: building file list
Dec 29 08:29:44 pubserver rsyncd[1534]: sent 17567 bytes  received 72 bytes  total size 16787953

Is there a way to make this limit work? Alternatively, how to slow down/throttle rsync connections if bwlimit is no longer supported?
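If the daemon itself won't honor a limit, one fallback sketch is to shape the traffic at the network layer instead; the interface name and rate here are assumptions:

# Limit egress on eth0 to roughly 80 kbit/s (~10 kB/s)
tc qdisc add dev eth0 root tbf rate 80kbit burst 16kbit latency 400ms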

No Access to Cisco ESA from different VLAN

Posted: 29 Dec 2021 04:50 AM PST

I am in the process of migrating to separate VLANs from a single 10.1.0.0/16 subnet on VLAN1.

The existing /16 subnet contains our Cisco Email Security Appliance (ESA).

In a new VLAN segment for clients (10.101.10.0/24, VLAN6) I can do pretty much everything except access the ESA: no ping, and no access via HTTP(S) either. Other servers and services are fully accessible, just like from VLAN1.

Cisco support said there is no issue with the ESA's config.

The network is fully Cisco.

Vlan config on Coreswitch:

show run interface vlan 1
interface Vlan 1
ip address 10.1.0.253 255.255.0.0
end

show run interface vlan 6
!
interface Vlan 6
description LAN-Clients
ip address 10.101.10.253 255.255.255.0
ip helper-address 10.1.30.84
no ip route-cache
end

(Image: network diagram)

The FW is a Cisco ASA 5508-X

The problem also occurs from VLAN8 test virtual machines on the same hypervisor. The Cisco ASA is externally managed.

This is a ping test from Coreswitch:

CiscoCORE#ping 10.1.30.188
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.30.188, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms

CiscoCORE#ping 10.1.30.188 source vlan8
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.30.188, timeout is 2 seconds:
Packet sent with a source address of 10.8.0.253
.....
Success rate is 0 percent (0/5)

Where could the issue be?

Update: thanks to the comment from @Tero Kilkanen I added some info and tests. I had not yet considered a possible problem on the ASA side, but that may be the place to look.

Playbook containing unsafe variable values breaks after upgrading from ansible 2.10 to ansible 2.12 (from ansible 5)

Posted: 29 Dec 2021 05:28 AM PST

I have upgraded my Ansible setup from ansible 2.10 to ansible 2.12 (from the ansible 5 package) using Homebrew on macOS. Since then, a playbook has stopped producing working results.

- name: Assemble consul cluster
  become: true
  hosts: consul_nodes
  roles:
    - role: consul
      vars:
        consul_version: "1.11.1"
        consul_install_upgrade: true
        consul_group_name: "consul_nodes"
        consul_addresses:
          dns: !unsafe '169.254.1.1 127.0.0.1 {{  GetPrivateIP }} {{ GetInterfaceIP  \"docker0\" }}'
          http: !unsafe '169.254.1.1 {{  GetPrivateIP }} {{ GetInterfaceIP  \"docker0\" }}'
          https: !unsafe '127.0.0.1 {{  GetPrivateIP }}'
          grpc: "127.0.0.1"
        consul_client_address: "169.254.1.1"
        consul_node_role: server
        consul_bootstrap_expect_value: 3
        consul_bootstrap_expect: true

The playbook contains unsafe variable values that should not be templated as their value should be passed through to a JSON configuration file written to disk.

It was working fine before upgrading to ansible 2.12

ansible [core 2.12.1]
  config file = None
  configured module search path = ['/Users/martin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/5.1.0/libexec/lib/python3.10/site-packages/ansible
  ansible collection location = /Users/martin/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.10.1 (main, Dec  6 2021, 23:20:29) [Clang 13.0.0 (clang-1300.0.29.3)]
  jinja version = 3.0.3
  libyaml = True

Now the whole JSON that the playbook is creating is invalid: quotes (") are escaped as (\").

I suspect that the problem is related to the unsafe variables, as with the working ansible version 2.10 I got the same bad result when not using !unsafe. This may just be a coincidence.

Access router admin page with static IP

Posted: 29 Dec 2021 05:36 AM PST

I have a static IP and can access my servers from the outside world by port mapping. For example, a service running on 192.168.1.98:8080 can be accessed by going to {Static_IP}:8080 if I have mapped port 8080 of 192.168.1.98 to external port 8080.

The IP of my router is 192.168.1.1. Now, when I want to access the router admin page, I map 192.168.1.1:80 to external port 80 on the static IP, but I cannot access the router admin page from the outside world. How can I do this?

My router is Huawei HG630a.

PS: I have to use such an old DSL router because I need a static IP and the ISP that provides static IPs doesn't have fiber laid here; the other ISPs that have fiber don't provide static IPs.

how to run a pod from kubernetes yaml file with custom infra image

Posted: 29 Dec 2021 03:26 AM PST

I know this might be an easy one for experts, but I am unable to find the solution.

I am trying to run a pod from a YAML file with Podman.

I am using:

[root@paasrv1 ~]# podman version
Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.16.7
Built:        Tue Sep 21 08:41:42 2021
OS/Arch:      linux/amd64

NOTE: My server can't connect to the internet.

I can manually run pods with a custom infra image from the local registry, like this:

[root@paasrv1 ~]# podman pod create --name test-pod -p 8080:80 --infra-image localhost/pause:3.3
d648df2c3a2e3cd905b397ee52ef607f39021d824d7f837668bd23f7403816c1

This works fine!

But when I run pods from a Kubernetes YAML file, I can't find a way to specify a custom infra image; Podman tries to download the infra image from the Red Hat registry and fails with the error below.

Question 1:

[root@paasrv1 ~]# podman play kube web_cn_pod_1.yml
WARN[0001] failed, retrying in 1s ... (1/3). Error: Source image rejected: Get "https://access.redhat.com/webassets/docker/content/sigstore/ubi8/pause@sha256=166b9092bb1e9c367b643ccbe4af2653c4706d8a304a63dfa95670147660bc71/signature-1": read tcp 10.56.142.174:48356->2.16.186.176:443: read: connection reset by peer
WARN[0003] failed, retrying in 1s ... (2/3). Error: Source image rejected: Get "https://access.redhat.com/webassets/docker/content/sigstore/ubi8/pause@sha256=166b9092bb1e9c367b643ccbe4af2653c4706d8a304a63dfa95670147660bc71/signature-1": read tcp 10.56.142.174:48378->2.16.186.176:443: read: connection reset by peer
WARN[0004] failed, retrying in 1s ... (3/3). Error: Source image rejected: Get "https://access.redhat.com/webassets/docker/content/sigstore/ubi8/pause@sha256=166b9092bb1e9c367b643ccbe4af2653c4706d8a304a63dfa95670147660bc71/signature-1": read tcp 10.56.142.174:44554->2.16.186.162:443: read: connection reset by peer
ERRO[0006] Error freeing pod lock after failed creation: no such file or directory
Error: error adding Infra Container: error pulling infra-container image: Source image rejected: Get "https://access.redhat.com/webassets/docker/content/sigstore/ubi8/pause@sha256=166b9092bb1e9c367b643ccbe4af2653c4706d8a304a63dfa95670147660bc71/signature-1": read tcp 10.56.142.174:48408->2.16.186.176:443: read: connection reset by peer

I don't see an --infra-image option for the podman play kube command.

I tried the following, unsuccessfully:

1) I replaced the pause image paths in /etc/containers/registries.conf.d/001-rhel-shortnames.conf with the local pause image; still no good:

[root@paasrv1 ~]# grep pause /etc/containers/registries.conf.d/*
/etc/containers/registries.conf.d/001-rhel-shortnames.conf:"rhel8/pause" = "localhost/pause"
/etc/containers/registries.conf.d/001-rhel-shortnames.conf:"ubi8/pause" = "localhost/pause"

2) I added the infra image path in my YAML file explicitly:

spec:
  containers:
  - name: webcnpod1-infra
    image: localhost/pause:3.3
    imagePullPolicy: IfNotPresent

Still getting the same error.
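One more avenue that may be worth a try (an untested sketch): newer Podman reads its default infra image from containers.conf, which should also apply to play kube:

# /etc/containers/containers.conf
[engine]
infra_image = "localhost/pause:3.3"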

Question 2:

How do I generate a YAML file that includes the infra image info/paths? A YAML file generated with the method below doesn't have any infra image specification:

podman generate kube test-pod -f test-pod.yml  

"Failed to get shell PTY: Protocol error" for an nspawn container with systemd inside

Posted: 29 Dec 2021 04:23 AM PST

I create containers with:

# debootstrap --variant=minbase --include=systemd,iproute2 buster "/var/lib/machines/test"  

And then I run it with:

# systemd-nspawn --machine test  

Just for testing purposes. The container boots, but when I try to access it with machinectl I get this error:

# machinectl shell test  Failed to get shell PTY: Protocol error  

After some googling, I realized that there was an old bug that generated this behavior, but it was fixed in systemd v226, and /usr/bin/systemd --version on my Debian shows version 246.

I can get into the container using the "non-systemd container" method, but my container runs systemd; I should be able to use machinectl.
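For what it's worth, machinectl shell is known to require D-Bus inside the container; a sketch of baking it in at bootstrap time, on the assumption that dbus is the missing piece here:

debootstrap --variant=minbase --include=systemd,dbus,iproute2 buster "/var/lib/machines/test"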

Zabbix - Alert if any files in folder is older than 1hr

Posted: 29 Dec 2021 03:06 AM PST

I have a folder which holds files that are to be processed. Once they are processed, they are then moved to another directory.

I'm currently trying to figure out a way to monitor the files inside this directory and alert if any of them are older than one hour, i.e. have not been processed yet.

In Zabbix, you can use the vfs.file.time item, but it only monitors one file and you have to specify the file's name. The names inside the folder I'm monitoring are irrelevant.

What would be the cleanest way to achieve this?
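One common pattern (a sketch; the path and item key are hypothetical) is a custom agent item that counts files older than 60 minutes, with a trigger firing when the count is non-zero:

# /etc/zabbix/zabbix_agentd.d/stale_files.conf
UserParameter=files.stale[*],find "$1" -type f -mmin +60 | wc -l

The item would then be used from the frontend as, e.g., files.stale[/var/spool/incoming], with a trigger on the last value being greater than zero.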

memory cache is too high and going to use swap

Posted: 29 Dec 2021 04:03 AM PST

I have a CentOS server with 32 GB of RAM; its state (free -m) is:

             total       used       free     shared    buffers     cached
Mem:         32071      31488        583          0        244      19329
-/+ buffers/cache:      11914      20157
Swap:        17399        287      17112

The cached size keeps growing (between every app restart and cache clear).

Five hours after posting my question, the memory status is:

             total       used       free     shared    buffers     cached
Mem:         32071      31850        221          0        194      20124
-/+ buffers/cache:      11530      20541
Swap:        17399        299      17100

My Java options are:

-Xms12g -Xmx12g -XX:MaxNewSize=6g -XX:NewSize=6g -XX:+UseParallelOldGC -XX:+UseParallelGC -XX:+UseTLAB -XX:MaxTenuringThreshold=15 -XX:+DisableExplicitGC  

As you can see, the cache size is very high, and at high-load times on my server swap gets used and the server becomes very slow (unlike the scenario at https://www.linuxatemyram.com/, the memory is full, swap is used, and my app is too slow).

I use Java for the service.

What can I do?
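Not a definitive fix, but one knob to try first is lowering the kernel's eagerness to swap, then watching whether the pressure moves:

# Prefer dropping page cache over swapping application memory
sysctl -w vm.swappiness=10
# Make it persistent across reboots
echo 'vm.swappiness = 10' >> /etc/sysctl.conf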

Rewrite rule for nginx (Opencart)

Posted: 29 Dec 2021 03:06 AM PST

Opencart has this URL structure:

http://example.com/index.php?route=common/home
http://example.com/index.php?route=account/register
http://example.com/index.php?route=checkout/cart
http://example.com/index.php?route=checkout/checkout

... and I want to get rid of the string from index.php?route= until the first /, the regex for that is index.php\?route\=[^\/]*\/, so the desired URL would be, for example, http://example.com/checkout.

I tried:

location @opencart {
    rewrite index.php\?route\=[^\/]*\/ / last;
    rewrite ^/(.+)$ /index.php?_route_=$1 last;
}

and also tried:

location / {
  if ($query_string ~ "^route=common/home$"){
    rewrite ^/index\.php$ http://www.example.com? redirect;
  }
}

... but no luck so far; I still see route=common/home in the URLs.

This is my current Nginx configuration:

location / {
  try_files $uri @opencart;
}
location @opencart {
    rewrite ^/(.+)$ /index.php?_route_=$1 last;
}
location ~ \.php$ {
  include snippets/fastcgi-php.conf;
  fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
location ~* \.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|\.php_ {
    deny all;
}
location ~ /\.ht {
  deny all;
}
location /image/data {
    autoindex on;
}
location = /favicon.ico {
  log_not_found off;
  access_log off;
}
location = /robots.txt {
  allow all;
  log_not_found off;
  access_log off;
}
location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
  expires max;
  access_log off;
}
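One detail that would explain the first attempt doing nothing: nginx matches rewrite patterns against the URI only, never the query string, so route= can only be inspected via $query_string or $arg_route. A sketch (untested) of a redirect keyed on the argument, to be placed in the server block:

# $arg_route holds the value of ?route=
if ($arg_route = "common/home") {
    return 301 /;
}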

KeepAlived on different subnets

Posted: 29 Dec 2021 05:34 AM PST

I am trying to set up keepalived on an ESXi-based setup where two physical boxes have ESXi installed, each running a node that works as a load balancer using HAProxy. To achieve high availability, I want to use keepalived so that both HAProxy instances can share a virtual IP, and I can point the public (physical) IP address at the virtual IP. The challenge with my implementation is that it spans two subnets.

HAProxy on subnet A: 1.1.10.101
HAProxy on subnet B: 1.2.10.101

Now, when I try to assign the virtual IP 1.1.10.201 on both instances, it points to the local instance on both servers.

I am using CentOS 7.x with HAProxy and keepalived, primarily to load-balance HTTP traffic and possibly database traffic too.

I am not posting the config file, as the question itself is very simple, but I can do so if required.
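For VRRP across subnets, the peers generally have to talk unicast, since multicast won't cross the router; a minimal sketch using the addresses above (note the VIP itself still only behaves as a normal local address within its own subnet):

vrrp_instance VI_1 {
    state MASTER                  # BACKUP with a lower priority on the other node
    interface eth0
    virtual_router_id 51
    priority 150
    unicast_src_ip 1.1.10.101     # this node
    unicast_peer {
        1.2.10.101                # the other node
    }
    virtual_ipaddress {
        1.1.10.201/24
    }
}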

How to reset and persist the hostname and FQDN of a Windows Azure CentOS instance?

Posted: 29 Dec 2021 05:07 AM PST

How does one reset the hostname and domain name (FQDN) to localhost.localdomain on a CentOS 6.2 Azure instance? I've tried editing /etc/sysconfig/network and /etc/sysctl.conf, and setting the hostname manually, but whatever I do, I cannot get the FQDN (i.e. the output of the hostname -f command) to be localhost.localdomain.

Is there any specific way of persisting an FQDN such as this one on CentOS?
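On CentOS 6, hostname -f goes through the resolver, so the FQDN usually comes from /etc/hosts rather than the hostname itself; a sketch of the two files involved (note the Azure Linux agent may rewrite the hostname on reprovisioning):

# /etc/sysconfig/network
HOSTNAME=localhost.localdomain

# /etc/hosts - hostname -f takes the first name on the matching line
127.0.0.1   localhost.localdomain localhost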

Thanks

IIS 7 virtual directory 404 error

Posted: 29 Dec 2021 04:03 AM PST

I have set up a virtual directory called application under the default website. Inside it, I have a web application running. When I browse port 80 from IIS and log in, the homepage is fine, but when I click to go to another menu (subdirectory) I keep getting 404s. I have created the necessary virtual directories and checked permissions on the folder and app pool. The IIS log shows the following:

2012-09-04 09:54:08 ::1 GET /application/TeamCentral/Common/images/20/h_row.jpg - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 64
2012-09-04 09:54:08 ::1 GET /application/TeamCentral/Images/risk_32x32.png - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 26
2012-09-04 09:54:08 ::1 GET /application/TeamCentral/Survey/Images/survey_32x32.png - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 28
2012-09-04 09:54:08 ::1 GET /application/TeamCentral/common/images/20/logout.gif - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 35
2012-09-04 09:54:08 ::1 GET /application/TeamCentral/favicon.ico - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 17
2012-09-04 09:54:08 ::1 GET /application/images/arrowdown.gif - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 404 0 2 2
*****2012-09-04 09:54:15 ::1 GET /TeamCentral/Auditors/HomePage.aspx - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 404 0 0 2141*****

It's the last line that's the problem: the request goes to /TeamCentral/... without the /application prefix and returns a 404.

Possible for linux bridge to intercept traffic?

Posted: 29 Dec 2021 05:07 AM PST

I have a Linux machine set up as a bridge between a client and a server:

brctl addbr br0
brctl addif br0 eth1
brctl addif br0 eth2
ifconfig eth1 0.0.0.0
ifconfig eth2 0.0.0.0
ip link set br0 up

I also have an application listening on port 8080 of this machine. Is it possible to have traffic destined for port 80 passed to my application? I have done some research, and it looks like it could be done using ebtables and iptables.

Here is the rest of my setup:

# Set ebtables to pass this traffic up to IP for processing; DROP on the broute table should do this
ebtables -t broute -A BROUTING -p ipv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP

# Set iptables to forward this traffic to my app listening on port 8080
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 8080 --tproxy-mark 1/1
iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1/1

# Once the flows are marked, have them delivered locally via the loopback interface
ip rule add fwmark 1/1 table 1
ip route add local 0.0.0.0/0 dev lo table 1

# Enable IP packet forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

However, nothing is coming into my application. Am I missing anything? My understanding is that the DROP target on the broute BROUTING chain will push the traffic up to be processed by iptables.

Secondly, are there any other alternatives I should investigate?

Edit: iptables gets it at nat PREROUTING, but it looks like the packet is dropped after that; the INPUT chain (in either mangle or filter) doesn't see it.
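One prerequisite that is easy to miss here (worth ruling out; this is an assumption about the setup): bridged IPv4 traffic only traverses iptables when the br_netfilter functionality is loaded and enabled:

modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1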

How do you start/stop IIS 7 app pool from cmd line if there are spaces in the app pool name?

Posted: 29 Dec 2021 05:45 AM PST

http://technet.microsoft.com/en-us/library/cc732742(WS.10).aspx

The above URL describes how to start/stop an IIS 7 app pool. However, I have spaces in my app pool name, and double quotes don't seem to work. Ideas?

C:\Windows>C:\Windows\System32\inetsrv\appcmd stop apppool /apppool.name: My Ap  p Services  Failed to process input: The parameter 'App' must begin with a / or - (HRESULT=8  0070057).    C:\Windows>C:\Windows\System32\inetsrv\appcmd stop apppool /apppool.name: "My A  pp Services"  ERROR ( message:The attribute "apppool.name" is not supported in the current com  mand usage. )  
