Sunday, October 17, 2021

Recent Questions - Server Fault


How to add the normal routing information of a router to BIRD?

Posted: 17 Oct 2021 09:56 PM PDT

Suppose the following network layout:

R1:                                        R2:
10.1.1.0/24 <--- 10.1.1.1, 192.168.1.1 <----------> 192.168.1.2, 10.1.2.1 ---> 10.1.2.0/24

BIRD is installed on both R1 and R2. All information about the network topology is automatically available. It was my understanding that BIRD would automatically redistribute this information so that all stations can connect, but it is not as straightforward as that: R1 and R2 both automatically create "dynamic" routes for their respective subnets, but these routes do not get handled automatically.

The device protocol does not import/export routes. The docs say about the direct protocol:

[...] Although there are some use cases that use the direct protocol (like abusing eBGP as an IGP routing protocol), in most cases it is not needed to have these device routes in BIRD routing table and to use the direct protocol. [...]

I thought the kernel protocol would automatically import these routes because they are part of the kernel routing table. But the documentation states:

Unfortunately, there is one thing that makes the routing table synchronization a bit more complicated. In the kernel routing table there are also device routes for directly connected networks. These routes are usually managed by OS itself (as a part of IP address configuration) and we don't want to touch that. They are completely ignored during the scan of the kernel tables and also the export of device routes from BIRD tables to kernel routing tables is restricted to prevent accidental interference.

So nobody (no protocol) wants to be responsible for distributing the very routes that would make the two networks connect. What's left is the static protocol, but then I would need to recreate the whole connectivity of a router in the BIRD config file, something I thought OSPF via BIRD would do for me. Is this what I am supposed to do?

What should the config files for R1 and R2 look like?

router id 192.168.1.1;

protocol device {
  scan time 10;
}

protocol direct {
  interface "*"; # should I use this?
}

protocol kernel {
  learn;
  export all;
  import all;
  device routes true; # OR SHALL I USE THIS?
}

# I would like to avoid doing this:
#protocol static {
#  export all;
#  route 10.1.1.0/24 via 192.168.1.1;
#}

protocol ospf {
  import all;
  export all;
  area 0 {
    interface "eth0", "eth1" {
      cost 10; hello 10; transmit 2; wait 5; dead 40;
      type broadcast;
      authentication cryptographic;
      password "1234567890";
    };
  };
}

And:

router id 192.168.1.2;

protocol device {
  scan time 10;
}

protocol direct {
  interface "*"; # should I use this?
}

protocol kernel {
  learn;
  export all;
  import all;
  device routes true; # OR SHALL I USE THIS?
}

# I would like to avoid doing this:
#protocol static {
#  export all;
#  route 10.1.2.0/24 via 192.168.1.2;
#}

protocol ospf {
  import all;
  export all;
  area 0 {
    interface "eth0", "eth1" {
      cost 10; hello 10; transmit 2; wait 5; dead 40;
      type broadcast;
      authentication cryptographic;
      password "1234567890";
    };
  };
}
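For comparison, a minimal BIRD 1.x sketch of the approach the docs hint at: let the direct protocol import the connected subnets and export only those device routes into OSPF, so each router announces its own LAN without static routes. The interface roles below are assumptions about this topology, not a tested config:

  # sketch for R1 (BIRD 1.x syntax); interface names assumed
  router id 192.168.1.1;

  protocol device { scan time 10; }

  # import the connected subnets into BIRD's table
  protocol direct {
    interface "eth0", "eth1";
  }

  # push learned routes into the kernel; OS-managed device routes are left alone
  protocol kernel {
    import none;
    export all;
  }

  protocol ospf {
    import all;
    # announce only the directly connected subnets into OSPF
    export where source = RTS_DEVICE;
    area 0 {
      interface "eth1" { type broadcast; };  # 192.168.1.0/24 link to R2
      interface "eth0" { stub; };            # 10.1.1.0/24 LAN side
    };
  }

R2 would mirror this with its own router id and 10.1.2.0/24 on the stub side.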

What are the various other services that a domain controller provides apart from the primary ones (authentication and authorization)?

Posted: 17 Oct 2021 09:36 PM PDT

With the help of a domain controller we can:

  • authenticate and authorize network objects (computers, printers, servers, etc.)
  • apply GPOs at the computer level, user level, domain level, or others
  • share resources like files and folders
  • create managed service accounts
  • create trusts, domains, and forests
  • have groups: security and distribution
  • define sites and subnets
  • control replication
  • host FSMO roles on DCs
  • have different types of DCs: RDC (root domain), ADC, CDC (child domain), TDC (tree domain), RODC

Apart from this, are there any other things that a domain controller does, or any other features that it has?

How to IMPORT complete DNS text Record to Windows Server 2016?

Posted: 17 Oct 2021 09:46 PM PDT

I have reinstalled a Windows 2016 server. I am trying to re-establish my DNS records, which I backed up prior to the NEW install. When my DNS records import, the zone ONLY shows the SOA record with NO details included (e.g. NS1 or NS2), even though it knows the domain from the name you supply during the process. The DNS exports had ALL the DNS data, but NONE of it has been READ back into the new server.

I used the DNSCMD method to load the records after the Wizard failed to read them properly. Neither method works. DNS is accessed using elevated admin permissions. The processes complete without error, but NO DNS records appear.
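For reference, the dnscmd sequence that normally reads an existing zone file instead of generating a fresh one is to copy the export into %windir%\System32\dns first and create the zone with the /load switch (example.com and the file name are placeholders):

  copy C:\backup\example.com.dns %windir%\System32\dns\
  dnscmd /zoneadd example.com /primary /file example.com.dns /load

Without /load, /zoneadd creates a brand-new default zone file (SOA/NS only), which matches the symptom described above.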

The exact same issue occurs in both Forward and Reverse Lookup zones. Why is it so difficult to IMPORT a text file that has ALL the information?

It looks like none of this has changed since Windows Server was invented, and it still cannot simply do the most basic and critical things you would expect from a server in 2021.

Thank you for your time, if you provide an answer to this issue.

Directory redirection issue with nginx set as reverse proxy

Posted: 17 Oct 2021 08:45 PM PDT

I have configured the server as a reverse proxy chain as follows:

Nginx reverse proxy(SSL Termination) - Varnish cache - Nginx web server(8080 port)

However, it has the following problem: for example, if you go to https://www.example.com/static (this is an example only; this domain is not my site), you will be redirected to http://www.example.com:8080/static/. The same happens when accessing not only /static but also other directories. I am wondering what to change in nginx.conf etc. to solve this problem.
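The usual culprit for this symptom is the backend issuing the trailing-slash redirect with its own port. A hedged sketch of two common fixes, assuming a setup like the one described (the upstream address is a placeholder for whatever the front proxy actually talks to):

  # on the backend nginx (port 8080): keep the port out of generated redirects
  server {
      listen 8080;
      port_in_redirect off;
  }

  # or on the front proxy: rewrite Location headers coming back from upstream
  location / {
      proxy_pass http://127.0.0.1:8080;
      proxy_set_header Host $host;
      proxy_redirect http://$host:8080/ https://$host/;
  }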

Cannot push all traffic through Wireguard tunnel on Ubuntu

Posted: 17 Oct 2021 08:38 PM PDT

On server,

[Interface]
Address = 10.13.13.1
ListenPort = 51820
PrivateKey = <...>
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# peer1
PublicKey = <...>
AllowedIPs = 10.13.13.2/32
# AllowedIPs = 0.0.0.0/0

On client,

[Interface]
Address = 10.13.13.2
PrivateKey = <...>
ListenPort = 51820
DNS = 8.8.8.8

[Peer]
PublicKey = <...>
Endpoint = <...>:51820
AllowedIPs = 0.0.0.0/0

The server is running inside Docker; the client is running on Ubuntu 18.04. I'm not able to send all the traffic through the tunnel. If I bring up the wg0 interface on the client and try to connect to a website, it doesn't work. However, ping 8.8.8.8 works. Any idea what is going on?

When brought up, `wg-quick` executes the following command on the client:

# wg-quick up wg1
[#] ip link add wg1 type wireguard
[#] wg setconf wg1 /dev/fd/63
[#] ip -4 address add 10.13.13.2 dev wg1
[#] ip link set mtu 1420 up dev wg1
[#] resolvconf -a tun.wg1 -m 0 -x
[#] wg set wg1 fwmark 51820
[#] ip -4 route add 0.0.0.0/0 dev wg1 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n
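Since ICMP works but TCP doesn't, the classic suspects are DNS and MTU. A hedged set of checks from the client, assuming the config above (the MTU value is just a common conservative choice):

  # does a hostname lookup work when the raw IP does?
  ping -c1 8.8.8.8 && host google.com 8.8.8.8

  # do large packets survive the tunnel? 1372 + 28 bytes of headers = 1400
  ping -c1 -M do -s 1372 8.8.8.8

  # if large pings fail, add this to the client [Interface] section and retry:
  # MTU = 1280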

How to understand bit allocation in GSM spec?

Posted: 17 Oct 2021 08:36 PM PDT

I am reading 3GPP TS 26.445 and came across the following sentence:

The CT bits are allocated as 1 bit to differentiate active 2.8 kbps (PPP or NELP) frames from any other 2.8 kbps frames (such as SID frame with payload header) and the remaining 2 bits are used to represent NB PPP, WB PPP, NB NELP and WB NELP frames.

But the spec doesn't say whether it is 0 or 1 that denotes "active 2.8 kbps (PPP or NELP) frames", nor whether 00 denotes "NB PPP". By convention, should 0 refer to the first mentioned item, i.e. "active 2.8 kbps (PPP or NELP) frames", and should "NB PPP, WB PPP, NB NELP and WB NELP frames" be 00, 01, 10, 11? I can't find the related documents.

Can anyone advise? Thanks very much!

When we implement reCAPTCHA Enterprise in Salesforce Marketing Cloud CloudPages, we find we can't use the service account to do the auth

Posted: 17 Oct 2021 08:23 PM PDT

When we implement reCAPTCHA Enterprise in Salesforce Marketing Cloud CloudPages, we find we can't use the service account to do the OAuth 2.0 authorization. Do we need to use the API key method? If yes, the document for the API key call still says: "Note: This API request requires an authorization token from the Cloud SDK, which is generated by the gcloud auth application-default print-access-token command. Ensure you have set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path for your service account private key file." How can we use SSJS or JavaScript code to generate the OAuth token to call the API?

Maybe the most important points are:

  1. Does Salesforce Marketing Cloud support service account auth (maybe not)?
  2. If yes, how do we implement it with SSJS or JavaScript?
  3. If no, do we need to use the API key to do the auth?
  4. If we use the API key to do the auth, do we still need the OAuth 2.0 token? Can we have some sample code to reference in SSJS, JavaScript, or AMPscript?

We used https://jwt.io/ to generate the token with the public key and private key, but get an unauthorized error. [screenshot: JWT Postman error]
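For what it's worth, the reCAPTCHA Enterprise createAssessment REST endpoint also accepts a plain API key as a query parameter, with no OAuth token at all, so a server-side HTTPS POST from the CloudPage may be sufficient. A hedged sketch of the raw call (PROJECT_ID, API_KEY, SITE_KEY and the token are placeholders):

  curl -s -X POST \
    "https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments?key=API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"event": {"token": "CLIENT_TOKEN", "siteKey": "SITE_KEY"}}'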

Docker on Ubuntu Server (Raspberry Pi) failed to create endpoint on network bridge, operation not supported

Posted: 17 Oct 2021 07:28 PM PDT

I am using Ubuntu 21.10 on a Raspberry Pi 4 (aarch64), and when I try to run a Docker container (Docker version 20.10.7) it returns the following error message:

docker: Error response from daemon: failed to create endpoint goofy_hypatia on network bridge: failed to add the host (veth3da4a58) <=> sandbox (veth987ce17) pair interfaces: operation not supported.
ERRO[0000] error waiting for container: context canceled

I have tried the following:

Thanks for any help.
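In case it helps: on Ubuntu 21.10 for Raspberry Pi, this exact "operation not supported" veth error is commonly reported to be caused by kernel modules that live in a package not installed by default, with the following frequently cited fix (package name taken from those reports, not verified here):

  sudo apt install linux-modules-extra-raspi
  sudo reboot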

Redirect DNS server IPs on Unifi UDM-Pro using iptables

Posted: 17 Oct 2021 07:19 PM PDT

I'm using a Unifi UDM Pro as a gateway for 2 VLANs:

  • Main LAN (interface: br0, subnet: 192.168.1.1/24)
  • IoT Devices VLAN (interface: br3, subnet: 192.168.3.1/24)

Each has its own local DNS (Adguard Home) server (192.168.1.52 and 192.168.3.52 respectively). For each subnet, I want to prevent clients from bypassing the local DNS server assigned via DHCP. In order to do this, I SSH into the UDM Pro and execute these commands:

  iptables -t nat -A PREROUTING -i br0 ! -s 192.168.1.52 ! -d 192.168.1.52 -p tcp --dport 53 -j DNAT --to 192.168.1.52
  iptables -t nat -A PREROUTING -i br0 ! -s 192.168.1.52 ! -d 192.168.1.52 -p udp --dport 53 -j DNAT --to 192.168.1.52

  iptables -t nat -A PREROUTING -i br3 ! -s 192.168.3.52 ! -d 192.168.3.52 -p tcp --dport 53 -j DNAT --to 192.168.3.52
  iptables -t nat -A PREROUTING -i br3 ! -s 192.168.3.52 ! -d 192.168.3.52 -p udp --dport 53 -j DNAT --to 192.168.3.52

  iptables -t nat -A POSTROUTING -p tcp --dport 53 -j MASQUERADE
  iptables -t nat -A POSTROUTING -p udp --dport 53 -j MASQUERADE

I test these using two main methods: dig, and WLAN devices (e.g. an iPad).

Using the dig method, I first test a direct DNS query and then a query to a Google DNS server. I run both commands on the physical host of my DNS server (which is a member of every VLAN via the Debian vlan package):

  1. dig linux.org '@192.168.3.52' -b '192.168.3.52'
  2. dig linux.org '@8.8.8.8' -b '192.168.3.52'

The first command above works fine. The second one times out. I expect the second one to still work, just redirected through 192.168.3.52.

If I run the same dig commands above but on the main LAN, both work fine and I can see both queries on my local DNS server.

I'm not sure why VLAN 3 doesn't work in the redirect case, but my main LAN does. Can someone help me understand why this isn't working and show me a working solution?
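A hedged diagnostic, assuming SSH access to the UDM: note that `! -s 192.168.3.52` exempts the source address used in the failing dig, so that query should bypass the DNAT entirely; watching the rule counters and the wire will show whether the packets are even leaving the VLAN:

  # are the redirect rules matching anything?
  iptables -t nat -L PREROUTING -v -n | grep 53

  # watch the actual DNS traffic on the IoT VLAN
  tcpdump -ni br3 port 53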

What happens on the CPU when we press CTRL+C to interrupt a program

Posted: 17 Oct 2021 06:58 PM PDT

I've read some answers about program interrupts when pressing CTRL+C (running the C code [1] below, just for example), but I would like to know more about what happens at the CPU and OS level. Does the CPU read the keyboard input and send the instruction to interrupt the program to the OS?

Code 1:

#include <stdio.h>

int main ( ) {
   for ( ; ; )
      puts ("running");
   return 0;
}
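To make the mechanics concrete: the keyboard interrupt is fielded by the kernel, the tty driver turns the ^C byte into SIGINT for the foreground process group, and the process either dies (the default) or runs a handler. A small self-contained sketch that catches the signal instead of dying:

  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>

  /* set from the handler; sig_atomic_t keeps the access safe */
  static volatile sig_atomic_t got_sigint = 0;

  /* the kernel calls this when the tty driver maps ^C to SIGINT */
  static void on_sigint(int sig) {
      (void)sig;
      got_sigint = 1;
  }

  int main(void) {
      signal(SIGINT, on_sigint);        /* install the handler */
      while (!got_sigint)
          write(1, "running\n", 8);     /* write() is async-signal-safe */
      puts("caught SIGINT, exiting cleanly");
      return 0;
  }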

GCP doesn't allow me to create a new project even if no other project is active

Posted: 17 Oct 2021 08:12 PM PDT

I'm trying out GCP and I have run into this issue. I shut down all active projects and now I want to create a new one, but it says I have reached the quota. There are no active projects; all of them are scheduled for deletion. I can restore an empty project (or any other project pending deletion) and it will work, but I cannot create a new one through the GUI or the CLI. How can this be solved?
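Probably relevant: projects pending deletion keep counting against the project quota until the roughly 30-day purge completes, so the practical options are waiting, requesting a quota increase, or restoring and reusing one of the doomed projects:

  # list projects, including those in DELETE_REQUESTED state
  gcloud projects list

  # restore a project scheduled for deletion so it can be reused
  gcloud projects undelete MY-OLD-PROJECT-ID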

Can gcsfuse be used with Red Hat v8 normally?

Posted: 17 Oct 2021 06:55 PM PDT

From the following page, I cannot confirm that gcsfuse has been tested on RHEL 8. Can it be used normally?

https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/installing.md#centos-and-red-hat-latest-releases

Upgrade Apache Tomcat 8.5.x to 8.5.72

Posted: 17 Oct 2021 06:45 PM PDT

I need to upgrade a couple of instances of Tomcat 8.5.x to the latest 8.5 release (i.e. 8.5.72), on Linux.

I was just wondering: should I install the latest version to co-exist with my existing version, or replace the existing installation altogether and apply the old configuration to the new installation?

Could you please provide the required steps and relevant documentation?
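A hedged sketch of the usual in-place approach for a standalone install: within the 8.5.x line the new release is a drop-in, so you unpack it alongside the old one, carry over conf/ and your applications, and switch over. Paths and service name below are assumptions:

  # stop the running instance (adjust to your init system)
  sudo systemctl stop tomcat

  # unpack the new release next to the old one
  sudo tar xzf apache-tomcat-8.5.72.tar.gz -C /opt

  # carry over configuration and applications from the old install
  sudo cp -a /opt/apache-tomcat-8.5.x/conf/.    /opt/apache-tomcat-8.5.72/conf/
  sudo cp -a /opt/apache-tomcat-8.5.x/webapps/. /opt/apache-tomcat-8.5.72/webapps/

  # repoint a stable symlink and start again
  sudo ln -sfn /opt/apache-tomcat-8.5.72 /opt/tomcat
  sudo systemctl start tomcat

Keeping the old directory around makes rollback a matter of moving the symlink back.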

Thanks in advance

Can you run a Minecraft server using two or more machines?

Posted: 17 Oct 2021 05:47 PM PDT

Can you run one Minecraft server using two or more PCs? Would that be a cluster? If so, how can someone do that? I've also heard you can use multiple machines to run one VM. Would that be an option?

starting out w/ rdiff-backup, having permissions issues

Posted: 17 Oct 2021 05:28 PM PDT

I have 2 machines - SERVER and BACKUP. On SERVER I have a script that backs up a few directories and databases, ending up in /var/local/backup with permissions intact (ownership is root and www-data on all of the files).

I'd like to use rdiff-backup on BACKUP to retrieve the contents of /var/local/backup on SERVER and sync it to, for example, /var/local/backup on BACKUP.

I have a user on both machines, USER1, and can ssh from BACKUP into SERVER as USER1; but USER1 cannot read the contents of /var/local/backup on SERVER nor write to /var/local/backup locally on BACKUP.

I would strongly prefer not to allow root access via ssh, and I'd prefer not to chown/chgrp the entirety of the files placed in /var/local/backup on SERVER.

My first thought was to add USER1 to the www-data group but that strikes me as possibly unwise from a security standpoint and doesn't address access to files owned by root.

I am at a loss and am beginning to suspect that there is an elegant answer out there. I would appreciate it if someone could point me toward it.
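One pattern that avoids both root-over-SSH and re-owning the backups: allow USER1 to run exactly one command, the rdiff-backup server, under sudo on SERVER, and point rdiff-backup at it with --remote-schema. A sketch under those assumptions (paths and hostname are placeholders):

  # on SERVER, /etc/sudoers.d/rdiff-backup: one command, no password
  USER1 ALL=(root) NOPASSWD: /usr/bin/rdiff-backup --server

  # on BACKUP, run as root locally so it can write /var/local/backup,
  # and wrap the remote end in sudo via the remote schema
  sudo rdiff-backup \
      --remote-schema 'ssh -C %s sudo /usr/bin/rdiff-backup --server' \
      USER1@SERVER::/var/local/backup /var/local/backup

This keeps PermitRootLogin off while still reading the root-owned files with root privileges on both ends.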

Exchange: How to forward mail, not save it

Posted: 17 Oct 2021 07:33 PM PDT

We are migrating mail from one of our domains to Exchange (123123.com). We created users in AD, created mailboxes for them, and copied the mail over from Google. The domain was also added to the accepted domains.

Now users of other domains on the Exchange try to send mail to addresses in the domain we are migrating. Exchange delivers the received message to its own mailboxes, but I need it to be forwarded on to the Google servers.

How do I make Exchange send mail for that domain onward to Google, and not keep it?

I tried to make a Send connector for the domain, I tried to change the UPN and SMTP for users, I tried to disable mailboxes - all unsuccessful.
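The mechanism Exchange has for exactly this is the accepted domain type: switching 123123.com from Authoritative to InternalRelay makes Exchange hand unresolved recipients to a matching Send connector instead of rejecting or storing them. A hedged EMS sketch (the smart host is a placeholder for Google's MX):

  # treat 123123.com as an internal relay rather than authoritative
  Set-AcceptedDomain -Identity "123123.com" -DomainType InternalRelay

  # route mail for that domain to Google via a dedicated Send connector
  New-SendConnector -Name "To Google - 123123.com" -Usage Custom `
      -AddressSpaces "123123.com" -SmartHosts "aspmx.l.google.com"

Note that recipients that still resolve to local mailboxes will keep being delivered locally; that may be why disabling mailboxes alone didn't help while the domain type remained Authoritative.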

Implementing CRM Features in Microsoft 365 Exchange Online

Posted: 17 Oct 2021 06:47 PM PDT

My customer wants to migrate from Tobit David to Microsoft 365. In David he uses a feature named "Dv Relatations" where incoming and outgoing emails from all users are copied into a separate archive sorted by correspondent (including In and Out folders).

What tools can I use to give my customer this functionality in Microsoft 365? Do you have any resources that can help me with this task?
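The closest native building block I'm aware of in Exchange Online is journaling, which copies inbound and outbound mail for a scope of users to an archive address; sorting by correspondent would still need tooling on top of that. A hedged sketch (name and address are placeholders):

  New-JournalRule -Name "Archive all mail" -Scope Global `
      -JournalEmailAddress "journal-archive@example.com"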

Thanks for your help.

unable to ssh into local qemu instance via port forward

Posted: 17 Oct 2021 07:06 PM PDT

I am trying to build a custom Ubuntu (ISO built from Bionic, 18.04.2) qcow2 image via packer. This fails at the step where packer tries to SSH to the instance via the port forward. I can see via VNC that the instance spins up fine, and I can log in with the given ID manually on the console, but packer is unable to ssh.

When I try to ssh from the host, it gets stuck for a long time and then errors out:

ssh -vvv -p 2226 admin@127.0.0.1
debug1: identity file /home/ani/.ssh/id_ed25519-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4

Though the port is up:

$ sudo lsof -i:2226
COMMAND     PID USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
qemu-syst 31185 root   14u  IPv4 50059412      0t0  TCP *:2226 (LISTEN)

The SSH server is running in the instance, and the user is a valid one. What else do I need to check to ensure SSH can work via the host? On the same host, I am able to build an Ubuntu Trusty (14.04) qcow2. So I am not sure whether some additional qemu command-line arguments need to be passed (for port forwarding to work correctly), or whether some sshd configs need to be changed on Ubuntu Bionic (18.04)!

This is the qemu command line:

/usr/bin/qemu-system-x86_64 -cpu host -smp 4 -m 8192M -boot once=d -name ubuntu-bionic-custom-0.1.qcow2 -drive file=output/ubuntu-bionic-custom-0.1.qcow2,if=virtio,cache=writeback,discard=ignore,format=qcow2 -serial file:serial.out -device e1000,netdev=user.0 -machine type=pc,accel=kvm -netdev user,id=user.0,hostfwd=tcp::2226-:22 -cdrom /home/ani/ubuntu-bionic-custom-0.1.iso -vnc 127.0.0.1:83

version:

$ /usr/bin/qemu-system-x86_64 --version
QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.4)
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers

--EDIT--

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.192.0.1      0.0.0.0         UG    0      0        0 br0
10.0.3.0        0.0.0.0         255.255.255.0   U     0      0        0 lxcbr0
10.192.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

$ ip netns ls
$
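A hedged next step for the "TCP connects, banner never completes" symptom: it is often the emulated NIC or its offloads rather than sshd, so swapping the e1000 for virtio-net is a cheap experiment. This is only a variation of the command line above, not a confirmed fix:

  # same invocation, but with a virtio NIC instead of e1000
  /usr/bin/qemu-system-x86_64 ... \
      -device virtio-net-pci,netdev=user.0 \
      -netdev user,id=user.0,hostfwd=tcp::2226-:22 ...

Comparing `sshd -T` output between the working Trusty guest and the Bionic guest would also narrow down whether sshd config is a factor at all.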

smbclient NT_STATUS_BAD_NETWORK_NAME with server OS SpinStream2

Posted: 17 Oct 2021 05:02 PM PDT

I'm trying to use Samba's smbclient to connect to a file server managed by my technology partner, and I'm consistently getting this "tree connect failed" error. The exact details have been anonymized.

$ smbclient -W DOMAIN -U USER //192.168.0.1/ShareName 'PASSWORD'
Domain=[PARTNER] OS=[SpinStream2] Server=[Windows 2000 Lan Manager]
tree connect failed: NT_STATUS_BAD_NETWORK_NAME

(As far as I know, SpinStream2 describes NetApp OnTAP, up to 8.3.2)

I've already tried a variety of flags and permutations. I'm fairly sure authentication is working, because I get a different error message if I change the domain, user, or password. I've also tried connecting using the NT server name (i.e. //SERVER/ShareName) combined with the --ip-address flag, but that produces the same NT_STATUS_BAD_NETWORK_NAME error.

Is there some other combination of options or flags I need to use?
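Two hedged things to try, since NT_STATUS_BAD_NETWORK_NAME is the server saying it has no share by that name for the session it negotiated: first list what the server actually exports, then pin the dialect in case the old filer only speaks SMB1 (flag syntax from current Samba; names as in the question):

  # ask the server which share names it exposes
  smbclient -L //192.168.0.1 -W DOMAIN -U USER

  # force the old NT1 dialect in case newer dialects negotiate a different view
  smbclient -m NT1 -W DOMAIN -U USER //192.168.0.1/ShareName 'PASSWORD'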

Google-authenticator with openvpn - AUTH: Received control message: AUTH_FAILED

Posted: 17 Oct 2021 09:03 PM PDT

I'm trying to set up MFA with Google authenticator for my OpenVPN setup on Ubuntu 16.04. Now OpenVPN works fine until I bring Google Authenticator into the mix.

My server.conf file reads as follows:

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.0.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
client-to-client
keepalive 10 120
tls-auth ta.key 0
key-direction 0
cipher AES-128-CBC
auth SHA256
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
log-append openvpn.log
verb 3
plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so openvpn
reneg-sec 0

My client.conf reads as follows:

client
dev tun
proto udp
remote 10.1.0.2 1194
resolv-retry infinite
nobind
user nobody
group nogroup
persist-key
persist-tun
remote-cert-tls server
comp-lzo
verb 3
cipher AES-128-CBC
auth SHA256
key-direction 1
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
auth-user-pass
auth-nocache
reneg-sec 0

Also, in /etc/pam.d I have cloned common-account to create an openvpn file with the following lines:

account requisite pam_deny.so
account required  pam_permit.so
auth requisite pam_google_authenticator.so secret=/home/${USER}/.google_authenticator

Now I have created the necessary user profiles for each client connecting to the VPN server, say client1, client2 and client3, on Ubuntu. Suppose client1 is trying to connect to the VPN server: I am logged in as client1 on the client-side system and try to connect to the VPN server.

I get the following:

Enter Auth Username: ******
Enter Auth Password: ************* (password for local user profile? + OTP)

After this point, I get

[server] Peer Connection Initiated with [AF_INET]10.1.0.2:1194
SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
AUTH: Received control message: AUTH_FAILED
TCP/UDP: Closing socket
SIGTERM[soft,auth-failure] received, process exiting

Now I wasn't sure why I was getting the AUTH_FAILED error. I have seen many different ways in which the username/password combination can be entered when connecting to the VPN server:

Method 1 - username ; password (local account password + OTP)
Method 2 - username ; password (local account password) +
           separate prompt section which asks for the Google Authenticator OTP
Method 3 - username ; OTP

I was never shown a separate Google Authenticator prompt asking for the OTP. So I tried method 1, and then method 2 while expecting a Google Authenticator prompt that never showed up.

Question 1: What is the correct way to enter the Google Authenticator login credentials? Am I missing something here, which might be why I do not get prompted for the OTP separately?

Another thing that I observed is that

sudo systemctl status openvpn@server  

gives different results for the two login methods above.

I got these status messages while trying different password + OTP combinations:

openvpn(pam_google_authenticator)[15305]: Invalid verification code
openvpn(pam_google_authenticator)[15305]: Did not receive verification code from user
openvpn(pam_google_authenticator)[15305]: Failed to compute location of secret file

Question 2: Can someone explain to me what these status messages mean in terms of my login inputs?

Question 3: How can I get the MFA up and running?

FYI, I used libpam-google-authenticator. I did not follow the method which required using a makefile and adding configuration parameters for PAM.
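One hedged reading of "Failed to compute location of secret file": server.conf drops privileges to nobody, so pam_google_authenticator often cannot resolve or read /home/${USER}/.google_authenticator. A common workaround is to keep the secrets somewhere the daemon user can read and say so on the PAM line; forward_pass also matches the "password+OTP in one field" style of Method 1 (paths are assumptions):

  # /etc/pam.d/openvpn (sketch): secrets outside home dirs, readable by the
  # user the OpenVPN daemon runs as; forward_pass splits "password+OTP"
  auth requisite pam_google_authenticator.so \
      secret=/etc/openvpn/google-auth/${USER} user=nobody forward_pass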

Thanks!

Standard user login to Bitvise SSH server not working

Posted: 17 Oct 2021 08:02 PM PDT

I have a Windows 2008 R2 box running Bitvise SSH Server 6.47. The Windows box is stand-alone; it is not part of a domain. My issue is that Bitvise will not allow a "Standard user" to log in via SSH. The Bitvise activity log says "Login to Windows account failed". If I change the user to an administrator, then login works fine. Note that whether the user is a "Standard user" or an "Administrator", I can log in via normal RDP. I have added the user to the Remote Desktop Users group.

So basically it seems like Bitvise is allowing administrator users to log in via SSH but not standard users. What setting do I need to change to allow standard users to log in via SSH? Thanks.

Nginx Redirect all JPEG URL to single JPEG

Posted: 17 Oct 2021 10:02 PM PDT

There are two scenarios that I'm trying to achieve.

Scenario A: if a client requests a URL that contains a .jpeg or .jpg file, redirect the user to a single .jpg file that is on the server, in this case myimage.jpg.

Scenario B: if a client requests a URL that contains the /abc/ directory, proxy the request to another domain while keeping the URL intact.

Below is the content of my nginx.conf

http {

    server {
        listen 80;
        root /usr/share/nginx/html;

        #Scenario A
        location ~* \.(jpg|jpeg){
           rewrite ^(.*) http://$server_name/myimage.jpg last;
        }

        #Scenario B
        location ^~ /abc/ {
            proxy_pass http://cd.mycontent.com.my;
            proxy_redirect localhost http://cd.mycontent.com.my;
            proxy_set_header Host $host;
        }
    }
......

Most of it I took from "Nginx redirect to a single file". The config does not produce errors in /var/log/nginx/error.log, but it does not perform as intended.
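A hedged guess at Scenario A: rewrite ... last re-enters location matching, and the rewritten target myimage.jpg matches the same jpg/jpeg location again, so the request loops rather than redirecting. A sketch that sidesteps this with an exact-match location (same names as the config above):

  # exact match takes priority over the regex, so myimage.jpg is served as-is
  location = /myimage.jpg {
  }

  # every other jpg/jpeg URL gets an explicit client-side redirect
  location ~* \.(jpg|jpeg)$ {
      return 302 /myimage.jpg;
  }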

Checking if two virtual machines are running on the same host

Posted: 17 Oct 2021 10:02 PM PDT

Is there a way to see whether several virtual machines are running on the same host? Specifically, I have three VMware VMs (each running Ubuntu Server 14.04) and I have tried to compare different pieces of information:

  • dmidecode -s system-serial-number gives different results for each VM
  • lspci returns the same output for each VM
  • cat /proc/cpuinfo returns similar values for two of them and one has a completely different output (notably the "model name" line is different)

This doesn't help me determine which ones are running on the same host (if any).

Is there any other way to check?

How should I bridge two networks, given each network has its own subnet & DHCP server?

Posted: 17 Oct 2021 09:08 PM PDT

I would like to join/bridge two different networks, network 1 and network 2:

  1. Network 1: A network consisting of a Linux box (with one ethernet port) and multiple clients (connected via a LAN switch). The Linux box is acting as the DHCP server and is handing out IPs to the clients, including its own.

  2. Network 2: Another network, on a completely different subnet, which also has a router serving multiple clients with IPs through DHCP.

Please see the network diagram:

[network diagram]

My objective is to be able to access the Linux box from Clients A & B while keeping the DHCP configuration intact on both networks, so:

  1. The Linux box would still be able to give IP addresses to Clients 1 & 2 and retain its 192.168.10.10 IP address inside Network 1.
  2. Client A should be able to access the internet, communicate with Client B, and retain its 123.123.xxx.xxx IP address inside Network 2.

What kind of devices and configurations should I use?

I was thinking of bridging/routing those networks using another router with the router's DHCP server turned off, and then setting a static route, just like this guide: http://kb.linksys.com/Linksys/ukp.aspx?pid=80&vw=1&articleid=17589
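For scale: the routed (not bridged) version of this needs only one box with a leg in each network plus a single static route on the Network 2 router, along the lines of the sketch below (the via address is a placeholder for the joining router's Network 2-side IP):

  # on the Network 2 router: reach Network 1 via the joining router
  ip route add 192.168.10.0/24 via 123.123.0.2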

However, I'm quite green at networking and would like to verify my understanding before investing in a router. I'm not even sure whether a consumer router would be able to do this kind of job.

Could someone help me on this matter? I'd appreciate any kind of comment. Thanks!

Capistrano fails to delete folders/files created by Apache

Posted: 17 Oct 2021 06:01 PM PDT

Problem

Capistrano deploys a web application via SSH using deploy user. Apache/PHP runs under typical www-data user.

Web server is creating cache files and folders at runtime inside the app path. Example:

-rw-r--r-- 1 www-data www-data 71758 Apr 29 14:33 /var/www/site.com/releases/20140429183204/cache/twig/9e/dd/fd353a4ff2520b59144be49f4a6e.php  

Capistrano deploy:cleanup attempts to delete older releases, which contain these cache files, but fails since user deploy has no write permission on them.

Error reported:

cannot remove `/var/www/site.com/releases/20140429183204/cache/twig/9e/dd/fd353a4ff2520b59144be49f4a6e.php' : Permission denied  

Usual solution, ACL

My usual solution for this was to put deploy in the www-data group and www-data in the deploy group, and to set ACLs so new files/folders always get group-write rights.

My current server filesystem doesn't support ACLs...

Attempted solution, setgid bit

My attempt was to set the setgid bit (g+s; strictly speaking not the sticky bit) on the whole app folder. This was attempted while both users were in the other's group.

chmod -R g+rwsx /var/www/site.com  

This works well for new files, but it doesn't propagate to new folders (which is my problem).

tl;dr

How do I set up permissions so Capistrano (via SSH as user deploy) can delete files and folders created by Apache running as user www-data?
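Since ACLs are out, one hedged angle is the umask: the files Apache/PHP creates get their group-write bit stripped by the default umask of 022, so setting it to 002 (combined with the group memberships already in place) leaves old releases deletable by deploy. On Debian/Ubuntu the conventional hook is /etc/apache2/envvars; that location is an assumption about this server:

  # /etc/apache2/envvars -- make Apache/PHP create group-writable files
  umask 002

After an Apache restart, new cache files should come out 664/775 with group www-data, which deploy (being in that group) can remove.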

Having trouble getting "Set action to take when logon hours expire" to work

Posted: 17 Oct 2021 08:02 PM PDT

I have a Windows Server 2012 server that allows remote desktop users (sessions are hosted on the server itself). I'm trying to enforce logon hours for these remote desktop users.

I have specified logon hours for a user and confirmed that they work: the user isn't allowed to log on outside the permitted hours. However, they are allowed to continue a session past their logon-hours limit if they are already signed in (which is fine; this is the default behavior).

However, when I try to use the Set action to take when logon hours expire option (User Configuration/Administrative Templates/Windows Components/Windows Logon Options/Set action to take when logon hours expire), and set the behavior to "Logoff", nothing happens--the user can continue their session happily. I've tried applying this policy both for the user's group and for the local computer. I've run gpresult for the user and confirmed that the policy is apparently in place.

I also naively tried the "Force logoff when logon hours expire" option, but that apparently doesn't apply to interactive logins (confusing!).

Am I misapplying this setting, or do I need to take some other steps to get it to work? I'd be grateful for any input. Thanks!

EDIT

So, based on the comment from @RobM and other discussions online, it sounds like this policy doesn't really work (at least not as expected). Is there any official MS documentation for this policy (I looked around some online and couldn't find much), or are there any resources that might cover it?

Assuming this policy is not an option, one possible workaround would be a scheduled task to log users off when their logon hours expire. However, each user's hours may be different, so I cannot use a time-of-day trigger. Is there some "logon hours expired" event (e.g. in the event logs) that I could hook into to run the logoff task?

How do I deploy files to Apache Tomcat in a similar fashion to Apache Web Server (via FTP)?

Posted: 17 Oct 2021 09:03 PM PDT

I need to deploy some files to a Tomcat app server. Is it possible to access the root directory of an application and upload files to a folder?

I have only used Apache Web Server thus far, where I can add files using something like FileZilla to upload my website. In this case I just need to upload some files for download.

How can I set up a downloads folder in Tomcat?
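A hedged sketch of the simplest route: Tomcat's default servlet serves plain files from any directory under webapps, much like httpd serves a DocumentRoot, so a downloads area can be just a folder you SCP/FTP files into (the CATALINA_BASE path is an assumption):

  # create a static context; Tomcat auto-deploys it as /downloads
  mkdir -p $CATALINA_BASE/webapps/downloads
  cp report.pdf installer.zip $CATALINA_BASE/webapps/downloads/

  # the files are then served at
  #   http://host:8080/downloads/report.pdf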

Unable to connect QNAP NAS ldap to Domino server

Posted: 17 Oct 2021 07:06 PM PDT

We've just bought a QNAP TS-419 NAS for the office, and for simplicity I'd like to authenticate using LDAP against our Domino server.

The QNAP LDAP auth settings demand the following:

  • Base DN
  • Root DN
  • Password
  • Users base DN
  • Group base DN

Our Domino server has the hierarchy: O=/

I can't figure out what to put in the above fields, except the root DN and password.

We have a firewall (Fortigate) with LDAP authentication against our Domino server that is working fine. There we specify the DN as O=, but it does not say what kind of DN…

I have searched for others using this combo but got no hits.

Domino server: 8.5.3. QNAP: TS-419P II, firmware 3.8.1 build 20121205.

NAT not working after enabling DirectAccess

Posted: 17 Oct 2021 05:02 PM PDT

following test setup is given:

server1 - one network card, connected to the internal network (10.0.0.2/24), gateway 10.0.0.1

server2 - two network cards (1. connected to the internal network (10.0.0.1/24); 2. connected to the internet with a static IP address and default gateway set)

Both servers can ping each other, server2 can ping addresses in the internet.

I installed the "Remote Access" role on server2 with the "Routing" option, enabled NAT in the RRAS Manager, and selected network card 2 as the internet-facing card.

-> server1 can now ping addresses in the internet via NAT on server2.

But as soon as I run the DirectAccess configuration manager and enable DirectAccess+VPN on server2, NAT stops working. The configuration in the RRAS Manager still exists.

Any idea why?

The goal is to have an internal network where each server can access the internet via NAT and one server acts as VPN/DirectAccess server+NAT Router.

.ftpaccess file and Pure-FTPD

Posted: 17 Oct 2021 06:01 PM PDT

I've been looking for a way to give specific users who have access to my FTP read-only permission on particular directories. I came across some articles on creating .ftpaccess files (which I've read are similar to .htaccess files) to create customized configurations for specific directories and sub-directories.

After reading everything I could find about .ftpaccess files through Google and attempting to create said files, I've had no luck.

Does anyone know the syntax needed to get these files to work? And is there a particular setting that I need to enable for these files? I've looked through the conf files but found nothing.
