Saturday, May 8, 2021

Recent Questions - Server Fault

LDAP users and groups schema

Posted: 08 May 2021 09:37 PM PDT

I'm trying to integrate my company's software with LDAP, so I set up an OpenLDAP instance for testing. I have some questions about the LDAP data I've loaded, and I'm wondering if anyone can help guide me.

Below are the entries in LDAP. My question: henryaccount has primary group ID 7101, but I want to add this account to other groups, so I added a memberUid attribute to the other group I have. I have an app connecting to LDAP that tries to authenticate. It seems the user is found, but the app can't associate any roles/groups with it. Is my schema wrong?

# extended LDIF
#
# LDAPv3
# base <dc=website,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# website.com
dn: dc=website,dc=com
objectClass: dcObject
objectClass: organization
o: website.com
dc: website

# users, website.com
dn: ou=users,dc=website,dc=com
objectClass: organizationalUnit
objectClass: top
ou: users

# groups, website.com
dn: ou=groups,dc=website,dc=com
objectClass: organizationalUnit
objectClass: top
ou: groups

# henryaccount, users, website.com
dn: uid=henryaccount,ou=users,dc=website,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
cn: henryaccount
uid: henryaccount
uidNumber: 1001
gidNumber: 7101
homeDirectory: /home/henryaccount
userPassword:: UEBzc3cwcmQ=

# ORG_poc, groups, website.com
dn: cn=ORG_poc,ou=groups,dc=website,dc=com
objectClass: top
objectClass: posixGroup
gidNumber: 7102
cn: ORG_poc
memberUid: henryaccount

# ORG_default, groups, website.com
dn: cn=ORG_default,ou=groups,dc=website,dc=com
objectClass: top
objectClass: posixGroup
gidNumber: 7101
cn: ORG_default
memberUid: henryaccount
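Whether roles resolve usually depends on what the application's group filter searches for. posixGroup membership lives in memberUid (a bare username), but many applications instead expect a memberOf attribute on the user, or groups whose member attribute holds the user's full DN (groupOfNames, or uniqueMember on groupOfUniqueNames). As a purely illustrative sketch reusing the names above, a groupOfNames version of the ORG_poc entry would be:

```
dn: cn=ORG_poc,ou=groups,dc=website,dc=com
objectClass: top
objectClass: groupOfNames
cn: ORG_poc
member: uid=henryaccount,ou=users,dc=website,dc=com
```

It is worth checking which filter the app actually issues (often something like (member=&lt;user DN&gt;) or (memberUid=&lt;uid&gt;)) and matching the schema to it.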

Can't find RSA private key for uploading my SSL certificate to Google App Engine

Posted: 08 May 2021 09:23 PM PDT

Right now I am trying to upload an SSL certificate from GoDaddy so that I can enable HTTPS on my custom domain for the website hosted on App Engine. When I try to upload the certificate, the PEM file that came with the certificate bundle works well enough, but the bundle doesn't seem to include the RSA private key. I tried to generate an RSA private key using OpenSSL, but it didn't produce a key I could add to the app. Do I need to obtain the RSA private key from somewhere specific, or is there a workaround for this problem?
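For context: a CA's certificate bundle never contains the private key. The key is created on whatever machine generated the CSR, and a freshly generated key will never match an existing certificate. Whether a candidate key belongs to a certificate can be verified by comparing RSA moduli; this sketch generates a throwaway self-signed pair purely for illustration (with the GoDaddy cert you would substitute your own files):

```shell
# throwaway pair, for illustration only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
    -keyout private.key -out certificate.crt 2>/dev/null

# a certificate and its key share the same modulus;
# matching digests mean the key belongs to the certificate
openssl x509 -noout -modulus -in certificate.crt | openssl md5
openssl rsa  -noout -modulus -in private.key     | openssl md5
```

If no machine you control has the matching key, the usual fix is to generate a new key and CSR and have the CA re-key the certificate.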

How can I use bindfs with macFUSE to create a bind mount on macOS? (or any other way of presenting a tree at another path on the filesystem)

Posted: 08 May 2021 09:10 PM PDT

I'm attempting to replicate a Linux file tree for the macOS systems on my network. I've come across macFUSE to expose bindfs, but I'm unclear on what I need to do to make this work, since it errors as if the target path doesn't exist.

Here are my steps so far...

user@MacBookPro13 /Volumes % ls -ltriah
total 64
       2 drwxr-xr-x  20 root  wheel   640B  1 Jan  2020 ..
       2 drwxr-xr-x   1 user  staff   4.0K  2 Nov  2020 BOOTCAMP
91818328 drwxr-xr-x   3 root  wheel    96B 27 Apr 07:03 com.apple.TimeMachine.localsnapshots
94509180 drwxr-xr-x   3 root  wheel    96B  6 May 08:30 .timemachine
95005605 lrwxr-xr-x   1 root  wheel     1B  9 May 12:18 Macintosh HD -> /
       2 drwx------   1 user  staff    16K  9 May 13:16 data3
       2 drwx------   1 user  staff    16K  9 May 13:16 prod3
95030953 drwxrwxrwx   2 root  wheel    64B  9 May 13:28 onsite_prod
     133 drwxr-xr-x   9 root  wheel   288B  9 May 13:28 .
user@MacBookPro13 /Volumes % sudo mount -t bindfs /Volumes/prod3 /Volumes/onsite_prod
mount: exec /Library/Filesystems/bindfs.fs/Contents/Resources/mount_bindfs for /Volumes/onsite_prod: No such file or directory
mount: /Volumes/onsite_prod failed with 72

I also tried to bind to a folder in my home dir with a similar result...

user@MacBookPro13 /Volumes % mkdir ~/testdir
user@MacBookPro13 /Volumes % sudo mount -t bindfs /Volumes/prod3 ~/testdir
mount: exec /Library/Filesystems/bindfs.fs/Contents/Resources/mount_bindfs for /Users/user/testdir: No such file or directory
mount: /Users/user/testdir failed with 72
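Note that the error is mount(8) looking for a mount_bindfs helper that the bindfs package does not install; bindfs is normally run directly as its own FUSE executable rather than through mount -t. A sketch, assuming bindfs was installed (e.g. via Homebrew) and the macFUSE kernel extension is loaded:

```
# bindfs is itself the mount program; no 'mount -t bindfs' is needed
sudo bindfs /Volumes/prod3 /Volumes/onsite_prod

# and to detach it later:
sudo umount /Volumes/onsite_prod
```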

Thanks in advance for any pointers.

iptables -> ip6tables (convert)

Posted: 08 May 2021 08:46 PM PDT

I have the following iptables rules (IPv4) and need the equivalent ip6tables rules (IPv6) for an OpenVPN server:

# Flushing all rules
iptables -F FORWARD
iptables -F INPUT
iptables -F OUTPUT
iptables -X
# Setting default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP
# Allow unlimited traffic on loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Accept inbound TCP packets
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow incoming SSH
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -s 0.0.0.0/0 -j ACCEPT
# Allow incoming OpenVPN
iptables -A INPUT -p udp --dport 1194 -m state --state NEW -s 0.0.0.0/0 -j ACCEPT
#iptables -A INPUT -p tcp --dport 443 -m state --state NEW -s 0.0.0.0/0 -j ACCEPT
# Accept outbound packets
iptables -I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow DNS outbound
iptables -A OUTPUT -p udp --dport 53 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53 -m state --state NEW -j ACCEPT
# Allow HTTP outbound
iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
# Allow HTTPS outbound
iptables -A OUTPUT -p tcp --dport 443 -m state --state NEW -j ACCEPT
# Enable NAT for the VPN
iptables -t nat -A POSTROUTING -s 172.16.100.0/24 -o eth0 -j MASQUERADE
# Allow TUN interface connections to OpenVPN server
iptables -A INPUT -i tun0 -j ACCEPT
# Allow TUN interface connections to be forwarded through other interfaces
iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A OUTPUT -o tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow outbound access to all networks on the Internet from the VPN
iptables -A FORWARD -i tun0 -s 172.16.100.0/24 -d 0.0.0.0/0 -j ACCEPT
# Block client-to-client routing on the VPN
iptables -A FORWARD -i tun0 -s 172.16.100.0/24 -d 172.16.100.0/24 -j DROP

What should these commands look like for IPv6? I am not familiar with iptables; I used ufw in the past, but for an OpenVPN server ufw is not very practical.
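Most rules translate one-for-one by swapping the binary name and the address literals, but three differences matter: ICMPv6 must be allowed (IPv6 breaks without neighbor discovery and path-MTU messages), the special addresses change (::1 for loopback, ::/0 for any), and with globally routable IPv6 you normally route rather than NAT. A hedged sketch of the highlights, assuming the same interface names and a hypothetical ULA prefix fd00:100::/64 standing in for 172.16.100.0/24:

```
ip6tables -P INPUT DROP
ip6tables -P OUTPUT DROP
ip6tables -P FORWARD DROP
# Loopback
ip6tables -A INPUT  -i lo -j ACCEPT
ip6tables -A OUTPUT -o lo -j ACCEPT
# ICMPv6 is mandatory for IPv6 to function (neighbor discovery, PMTU)
ip6tables -A INPUT  -p ipv6-icmp -j ACCEPT
ip6tables -A OUTPUT -p ipv6-icmp -j ACCEPT
# Stateful rules are identical, only address literals change
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A INPUT -p tcp --dport 22   -m state --state NEW -s ::/0 -j ACCEPT
ip6tables -A INPUT -p udp --dport 1194 -m state --state NEW -s ::/0 -j ACCEPT
# No MASQUERADE needed if the VPN clients get routable addresses;
# the forward rules mirror the IPv4 ones:
ip6tables -A FORWARD -i tun0 -s fd00:100::/64 -j ACCEPT
ip6tables -A FORWARD -i tun0 -s fd00:100::/64 -d fd00:100::/64 -j DROP
```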

Thanks in advance for any help.

chmod or cd to a directory is not working

Posted: 08 May 2021 08:38 PM PDT

I have a directory with root:root ownership. But even as the root user I am not able to change its permissions or cd into it.

$ lsattr AWS/
$ sudo lsattr AWS/
$

$ sudo chmod 777 AWS/
chmod: changing permissions of 'AWS/': Operation not permitted

$ stat AWS/
  File: 'AWS/'
  Size: 4096        Blocks: 8          IO Block: 32768  directory
Device: 30h/48d Inode: 111260170   Links: 4
Access: (0750/drwxr-x---)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-03-15 12:13:55.519597000 -0700
Modify: 2021-03-15 12:13:36.914649000 -0700
Change: 2021-03-15 12:13:36.914649000 -0700
 Birth: -

$ sudo su - root
root@host00:/home/kam# chmod 700 AWS/
chmod: changing permissions of 'AWS/': Operation not permitted
root@host00:/home/kam# chmod -R a+x AWS/
chmod: changing permissions of 'AWS/': Operation not permitted
chmod: cannot read directory 'AWS/': Permission denied
root@host00:/home/kam# chmod -R a+X AWS/
chmod: changing permissions of 'AWS/': Operation not permitted
chmod: cannot read directory 'AWS/': Permission denied
root@host00:/home/kam# ls -l AWS/
ls: cannot open directory 'AWS/': Permission denied
root@host00:/home/kam# ls -ld AWS/
drwxr-x--- 4 root root 4096 Mar 15 12:13 AWS/
root@host00:/home/kam# exit
exit

root@host00:/home/kam# cd AWS/
-su: cd: AWS/: Permission denied
root@host00:/home/kam#
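One diagnostic worth running: when even root gets "Operation not permitted", the directory often lives on a filesystem that squashes or ignores local root (NFS with root_squash, CIFS), or a security module (SELinux/AppArmor) is denying the call. The unusual IO Block of 32768 in the stat output already hints at a network mount. findmnt shows which mount and filesystem type actually back a path (sketch; pass the suspect directory, e.g. AWS/, as the argument):

```shell
#!/bin/sh
# Which mount provides the path, and its fstype
# (nfs/cifs here would explain root being powerless locally)
findmnt -T "${1:-.}"
```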

How do servers work? [closed]

Posted: 08 May 2021 07:35 PM PDT

I am very new to the concept of servers of this type. The only servers I have really heard of are SQL servers, and I don't have an idea how they function. I want to use one someday, but I want to know the basic info first.

So how do those servers actually work?

How to make constructor data public

Posted: 08 May 2021 05:15 PM PDT

Looking for some help getting the data from the "if (munuOption == 1) {" arrays to the "if (munuOption == 3) {" block.

If the user selects option 3, I would like to display the current set of employees already created.

    if (munuOption == 3) {           System.out.println(employee.toString());      }  

However, I get an error saying employee cannot be resolved. I understand why; what I cannot figure out (since 8am this morning) is how to get that same data from option 1 into option 3.

Output:

MENU:
1 - Load Employees Data:
2 - Add New Employee:
3 - Display All Employees:
4 - Retrieve Specific Employee's Data:
5 - Retrieve Employee with salaries based on range:
6 - Exit
1
How many employees do you want to enter:
2
Employee #1:
Enter Employees name:
sfsf sdff
Enter Employees salary:
5345
Enter Employees 5 digit id:
3453
Employee #2:
Enter Employees name:
dfg sgg eg
Enter Employees salary:
34534
Enter Employees 5 digit id:
34553
ID=3453, Salary=5345, Name =sfsf sdff
sfsf sdff
5345
3453
ID=34553, Salary=34534, Name =dfg sgg eg
dfg sgg eg
34534
34553
MENU:
1 - Load Employees Data:
2 - Add New Employee:
3 - Display All Employees:
4 - Retrieve Specific Employee's Data:
5 - Retrieve Employee with salaries based on range:
6 - Exit

Code:

import java.util.Scanner;

public class EmpData {
    public String name;
    public int salary;
    public int empcode;

    public FinalProject(String name, int salary, int empcode) {
        this.name = name;
        this.salary = salary;
        this.empcode = empcode;
    }

    public String toString() {
        return "ID=" + empcode + ", Salary=" + salary + ", Name =" + name;
    }

    public static void main(String[] args) {

        Scanner EmployeeName = new Scanner(System.in); // Scanner for Employee Name
        Scanner Employeeinfo = new Scanner(System.in); // Scanner for Employee info

        while (true) {

            System.out.println("MENU:");
            System.out.println("1 - Load Employees Data:");
            System.out.println("2 - Add New Employee:");
            System.out.println("3 - Display All Employees:");
            System.out.println("4 - Retrieve Specific Employee's Data:");
            System.out.println("5 - Retrieve Employee with salaries based on range:");
            System.out.println("6 - Exit");

            int munuOption = Employeeinfo.nextInt();

            if (munuOption == 6) {
                System.out.print("Thank you");
                Employeeinfo.close();
                break;
            }

            if (munuOption == 1) {
                System.out.print("How many employees do you want to enter: \n"); // Prompt for how many employees

                int arrayEmployee = Employeeinfo.nextInt();

                String[] EmployeeNames = new String[arrayEmployee]; // Array for Student Name
                int[] Empsalary = new int[arrayEmployee]; // Array for Grade
                int[] Empid = new int[arrayEmployee]; // Array for Grade

                int k = 1; // Fix counter

                for (int j = 0; j < EmployeeNames.length; j++) { // Loops to get both Grade input and Student Name

                    System.out.print("Employee #" + k + ": \n");
                    System.out.print("Enter Employees name: \n");
                    EmployeeNames[j] = EmployeeName.nextLine();

                    System.out.print("Enter Employees salary: \n");
                    Empsalary[j] = Employeeinfo.nextInt();

                    System.out.print("Enter Employees 5 digit id: \n");
                    Empid[j] = Employeeinfo.nextInt();

                    k++;
                }

                for (int i = 0; i < Empsalary.length; i++) { // Loop to list out array
                    FinalProject employee = new FinalProject(EmployeeNames[i], Empsalary[i], Empid[i]);
                    System.out.println(employee.toString());
                    System.out.println(employee);
                }
            }

            if (munuOption == 2) {
                System.out.print("Thank you");
            }

            if (munuOption == 3) {
                //System.out.println(employee.toString());
            }

        }
        EmployeeName.close();
        Employeeinfo.close();
    }

}
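Two observations on the code as posted. First, a Java constructor must carry the class's own name, so inside class EmpData it has to be EmpData(...), not FinalProject(...). Second, the actual question is one of scope: employee is declared inside the option-1 block, so option 3 cannot see it. The usual fix is to declare a collection once, before the menu loop, and have both options use it. A minimal sketch (menu loop and prompts omitted; the sample data mirrors the question's output):

```java
import java.util.ArrayList;
import java.util.List;

public class EmpData {
    public String name;
    public int salary;
    public int empcode;

    public EmpData(String name, int salary, int empcode) {
        this.name = name;
        this.salary = salary;
        this.empcode = empcode;
    }

    @Override
    public String toString() {
        return "ID=" + empcode + ", Salary=" + salary + ", Name =" + name;
    }

    public static void main(String[] args) {
        // Declared ONCE, before the menu loop, so every option sees the same data.
        List<EmpData> employees = new ArrayList<>();

        // option 1 adds to the shared list instead of a local array:
        employees.add(new EmpData("sfsf sdff", 5345, 3453));

        // option 3 just walks the same list:
        for (EmpData e : employees) {
            System.out.println(e);
        }
    }
}
```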

Simulate some process IO with fio or other tool

Posted: 08 May 2021 03:37 PM PDT

Is there any way to track a specific process and get disk IO stats such as queue depth, total reading/writing threads, percentage of reads vs. writes, and so on?

The main goal is to use all that information to emulate the IO activity with the fio tool.
Or is there any other way (tool) to estimate which hardware would be better for a specific load?

Of course testing is the best option, but it's not fully available to me: I cannot buy all possible hardware.
So I have to compare against what I already have and make some assumptions before buying.
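A few starting points: iostat -x (sysstat) reports per-device queue depth (avgqu-sz), pidstat -d gives per-process read/write rates, and blktrace records the full request stream. On any Linux box the kernel also keeps raw per-process counters in /proc/&lt;pid&gt;/io, which can be sampled over time and then translated into fio parameters such as --rw, --bs, --iodepth and --numjobs. A minimal sketch of reading those counters ($$ here stands in for the PID of the workload you want to model):

```shell
# read_bytes / write_bytes are bytes that actually hit the block layer;
# rchar / wchar include cache hits. Sample twice and diff for a rate.
cat /proc/$$/io
```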

Technically how do curl, ping and other tools get around an NGINX front facing reverse proxy server?

Posted: 08 May 2021 05:52 PM PDT

I'm having trouble understanding how communication occurs on a Linux box once a front-facing server like NGINX has been installed.

For example this is my setup.

AWS / EC2 linux based instance

NGINX - front facing server

Node.js / Express - upstream server

In this setup I have no problem communicating past NGINX with ping, curl, npm, and other tools, even without setting an http_proxy environment variable. By default, without any added configuration, these tools know how to get past NGINX and onto the internet.

In this common setup, why don't I have to set http_proxy or something similar to allow outside communication? Once NGINX is installed, doesn't all traffic go through it?
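The short version: a reverse proxy is not a gateway. NGINX only handles inbound connections that arrive on the ports its listening sockets are bound to; outbound sockets opened by curl, ping or npm go straight to the kernel's routing table and never pass through NGINX at all (http_proxy is only for forward proxies, a different thing entirely). You can see exactly what NGINX intercepts by listing the listening sockets:

```shell
# Inbound listeners only -- on the box in question you'd see nginx
# bound to :80/:443 here; nothing about outbound traffic appears.
ss -tln
```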

AWS Glacier and Ransomware

Posted: 08 May 2021 02:33 PM PDT

I'm trying to understand the structure of how AWS Glacier works because I have a problem.

Problem: I have a NAS that backs up to Glacier about once a week. About two weeks ago the NAS got infected with ransomware, so if I retrieved the data now I would just be obtaining useless encrypted files.

Question: Is it possible to download folders/data from an archive/inventory from a few weeks ago, as opposed to the latest inventory version?
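For what it's worth, Glacier archives are immutable once uploaded: a later backup run adds new archives rather than rewriting old ones, so data uploaded before the infection should still be intact as long as the backup job didn't delete the older archives. The inventory is just a listing; retrieval works per archive ID, so the IDs of older archives can be pulled from an inventory and retrieved individually. A sketch with the AWS CLI (the vault name is a placeholder):

```
# Request a fresh vault inventory (this job takes hours to complete):
aws glacier initiate-job --account-id - --vault-name my-vault \
    --job-parameters '{"Type": "inventory-retrieval"}'

# Once done, fetch it and look for ArchiveIds with an older CreationDate:
aws glacier get-job-output --account-id - --vault-name my-vault \
    --job-id <job-id> inventory.json
```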

Thanks for any help given.

ssh_config host not connecting, connects otherwise, why?

Posted: 08 May 2021 02:25 PM PDT

# Read more about SSH config files: https://linux.die.net/man/5/ssh_config
Host domain.com
    HostName [IP]
    Port 2216
    User centos
Host domain-two.com
    HostName [IP]
    Port 2216
    User centos

So I have something like this setup, but when trying to ssh domain-two.com it times out:

OpenSSH_for_Windows_7.7p1, LibreSSL 2.6.5
debug1: Reading configuration data C:\\Users\\{{user}}/.ssh/config
debug1: C:\\Users\\{{user}}/.ssh/config line 6: Applying options for domain-two.com
debug3: Failed to open file:C:/ProgramData/ssh/ssh_config error:2
debug2: resolve_canonicalize: hostname [IP] is address
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to [IP] port 2216.
debug3: finish_connect - ERROR: async io completed with error: 10060, io:0000020694DA5A60
debug1: connect to address [IP] port 2216: Connection timed out
ssh: connect to host [IP] port 2216: Connection timed out

However doing:

ssh user@ip -p 2116  

works without a hitch. I thought Host was just a label and HostName was what mattered. For what it's worth, domain-two.com doesn't point anywhere in DNS, and that's the only apparent difference I can tell.
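One thing worth double-checking is the port: the config block says 2216 while the working manual command used 2116. ssh -G is useful here; it prints the effective client configuration for a host without ever connecting, so you can see exactly which Host block matched and what values won:

```shell
# Show what ssh would actually use for this host (no connection is made)
ssh -G domain-two.com | grep -Ei '^(hostname|port|user) '
```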

AKS Kubernetes NGINX ingress - 308 Permanent Redirect

Posted: 08 May 2021 02:09 PM PDT

I have a simple Flask application deployed to an Azure Kubernetes cluster, with several endpoints. On the root 'mysub.mydomain.com/' it should print "Hello, I am working". I am using the NGINX ingress controller and want to redirect all HTTP traffic to HTTPS. When I navigate to my domain in a browser, I get no response at all. When I curl my endpoint I get 308 - Permanent Redirect. Detailed output with curl mysub.mydomain.com -v:

* Trying 20....
* Connected to mysub.mydomain.com (20...) port 80 (#0)
> GET / HTTP/1.1
> Host: mysub.mydomain.com
> User-Agent: curl/7.71.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 308 Permanent Redirect
< Date: Fri, 07 May 2021 23:31:06 GMT
< Content-Type: text/html
< Content-Length: 164
< Connection: keep-alive
< Location: https://mysub.mydomain.com
<
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host mysub.mydomain.com left intact

The certs for my domain are valid; I checked them. They are valid for *.mydomain.com.

My ingress.yaml file looks like the following:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/proxy-body-size: 10M
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "9000"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "9000"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
  name: my-ingress-rule
spec:
  rules:
    - host: mysub.mydomain.com
      http:
        paths:
          - backend:
              serviceName: my-backend-service
              servicePort: 80
            path: /(.*)
  tls:
    - hosts:
        - mysub.mydomain.com
      secretName: my-tls-secret

When I curl the service in front of my deployment directly (curl 80.0.0.53:80), not through the ingress controller, I get a perfect response with status code 200.

My pod is listening on port 8000, and running kubectl describe pods my-pod I can see its IP: 80.0.0.159:8000.

Running kubectl get svc I can see my public-facing LoadBalancer, and my private LoadBalancer which is the service connected to my pod.

my-private-nginx-controller   LoadBalancer   100.0.152.101   80.0.0.53     80:30141/TCP
my-public-nginx-controller    LoadBalancer   100.0.161.224   20.87.69.12   80:31763/TCP,443:30324/TCP

After running kubectl describe ingress my-ingress-rule I noticed that the IP address column has a different IP.

Name:             my-ingress-rule
Namespace:        my-namespace
Address:          41.122.112.11
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  my-tls-secret terminates mysub.mydomain.com
Rules:
  Host                Path  Backends
  ----                ----  --------
  mysub.mydomain.com
                      /(.*)   my-private-nginx-controller:80 (80.0.0.159:8000)

To test the certificates I ran curl -v -k --resolve mysub.mydomain.com:443:20.87.69.12 https://mysub.mydomain.com The response is:

* Added mysub.mydomain.com:443:20.87.69.12 to DNS cache
* Rebuilt URL to: https://mysub.mydomain.com/
* Hostname mysub.mydomain.com was found in DNS cache
*   Trying 20.87.69.12...
* TCP_NODELAY set
* Connected to mysub.mydomain.com (20.87.69.12) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Unknown (8):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Client hello (1):
* TLSv1.3 (OUT), TLS Unknown, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=*.mydomain.com
*  start date: Aug 24 13:16:25 2020 GMT
*  expire date: Aug 25 13:16:25 2022 GMT
*  issuer: C=***; O=**** nv-sa; CN=**** CA - SHA256 - G2
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* Using Stream ID: 1 (easy handle 0x55edd4f00600)
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
> GET / HTTP/2
> Host: mysub.mydomain.com
> User-Agent: curl/7.58.0
> Accept: */*
>
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS Unknown, Unknown (23):
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (IN), TLS Unknown, Unknown (23):

and it keeps pending...

Any ideas what causes the problem?

UPDATE: If I remove the TLS section from my Ingress rule, everything works as expected.
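A note on the 308 itself: that part is expected behaviour. When an Ingress has a tls: section, ingress-nginx enables its HTTP-to-HTTPS redirect by default, so a plain-HTTP curl will always get a 308 pointing at the https:// URL; the real problem is on the HTTPS path, where the request hangs after the handshake. To serve plain HTTP while debugging, the redirect can be switched off per Ingress with the standard ingress-nginx annotation (sketch):

```
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
```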

netsystemsresearch.com on my internal network

Posted: 08 May 2021 06:41 PM PDT

It first started happening with the network-enabled printer on the LAN. It printed out a message saying that netsystemsresearch.com was doing a search of all public networks. I stopped that by disabling outside connections to the printer.

Yesterday I had an Express.js server running locally on my machine (on port 3000), and I got a ping from netsystemsresearch.com again with the same message.

Has anybody experienced something like this? I tried looking up netsystemsresearch.com but didn't find anything useful.

Office 365 In-Place Hold preventing me from deleting user

Posted: 08 May 2021 06:06 PM PDT

I am using AD Connect to synchronize my on-premises Active Directory to our Office 365 tenant. I tried disabling a user on-premises and then synchronizing to O365. However, it seems to have broken everything: nothing is synchronizing any longer and the user still exists in O365.

When I open the user properties in O365 admin, I see the following error:

Exchange: An unknown error has occurred. Refer to correlation ID: 769ccf2f-bd09-4651-801e-983aaeaace7f;

If I try to run Get-MsolUser I get the following error:

Exchange can't disable the mailbox "ZZZZ.PROD.OUTLOOK.COM/Microsoft Exchange Hosted Organizations/domain.onmicrosoft.com/UserName" because it is on In-Place Hold.

I can't find any In-Place Hold policy active and I can't seem to be able to delete this user. I even tried running Remove-MailUser and got the error:

The operation couldn't be performed because object 'user@domain.com' couldn't be found on 'YYYY.ZZZZ.PROD.OUTLOOK.COM'.

Linux SSSD with two AD Domains

Posted: 08 May 2021 03:06 PM PDT

I joined my CentOS box to a Windows Active Directory domain with

realm join --user=DomUser dom2.local  

without any problems. The domain has a one-way trust relationship with Dom1. Our Windows users can:

  • Log-In with Dom1/User to Dom1/Host
  • Log-In with Dom1/User to Dom2/Host
  • Log-In with Dom2/User to Dom2/Host

On our Linux boxes (in Dom2), only Dom2 users can log in. I found some evidence online that sssd can be configured with two domains, so I added a block to the sssd config:

# cat /etc/sssd/sssd.conf
[sssd]
domains = dom1.local, dom2.local
config_file_version = 2
services = nss, pam

[domain/dom1.local]
ad_domain = dom1.local
krb5_realm = DOM1.LOCAL
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad
enumerate = True

[domain/dom2.local]
ad_domain = dom2.local
krb5_realm = DOM2.LOCAL
realmd_tags = manages-system joined-with-samba
#cache_credentials = True
cache_credentials = False
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad
enumerate = True

Now if I try to log in with a Dom2 user I get the following:

pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=host.dom1.local user=user@dom2.local
pam_sss(sshd:auth): received for user user@dom2.local: 6 (Permission denied)
Failed password for user@dom2.local from 10.10.0.10 port 34442 ssh2

Has someone successfully configured two AD domains with sssd? Or any idea how to do that?

Edit1:

With getent passwd I can see all users from both domains, and both:

id user1@dom1.local
id user2@dom2.local

work as well.

Forwarding between interfaces on MikroTik

Posted: 08 May 2021 07:02 PM PDT

I'm having some trouble. I have a MikroTik router with two interfaces up (let's call them lan1 and lan2). lan1 has the IP 192.168.100.1, lan2 has 192.168.0.32.

On the other side of lan1's wire is a Cisco whose IP is 192.168.100.20, and beyond that Cisco is another network, 10.94.0.0/16. Testing from the MikroTik with Winbox, I can reach the Cisco AND the network behind it.

Now, my network uses the range 192.168.0.0/16. I can ping the MikroTik's lan2, but I can't reach lan1, the Cisco, or the 10.94 network.

Could anyone help me with which filter rules and NAT rules I should create to forward requests from 192.168.0.0/23 so they reach 10.94.0.0/16? Or to forward all traffic coming in on lan2 out through lan1?
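Under the stated assumptions (interfaces actually named lan1 and lan2, the Cisco at 192.168.100.20 being the gateway to 10.94.0.0/16), a minimal RouterOS sketch would be a route to the far network, forward accept rules, and source NAT so the Cisco side can answer hosts it has no route back to:

```
/ip route add dst-address=10.94.0.0/16 gateway=192.168.100.20
/ip firewall filter add chain=forward in-interface=lan2 out-interface=lan1 action=accept
/ip firewall filter add chain=forward connection-state=established,related action=accept
/ip firewall nat add chain=srcnat src-address=192.168.0.0/23 out-interface=lan1 action=masquerade
```

The masquerade rule can be dropped if the Cisco (and the 10.94 network) have a return route for 192.168.0.0/23.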

xl2tp + strongswan ipsec -- xl2tp timeout

Posted: 08 May 2021 03:06 PM PDT

I'm trying to connect to an IPsec/L2TP VPN from a private network behind a NAT router. It works from various Windows clients, but from my Linux machine (openSUSE 12.3, strongSwan 5.1.3, xl2tpd 1.3.0) I can't manage to connect. The first problem was that the server seems to handle only the IKEv1 protocol; "keyexchange = ikev1" in ipsec.conf solved that issue. Now "ipsec statusall" shows:

Status of IKE charon daemon (strongSwan 5.1.3, Linux 3.16.7-53-desktop, x86_64):
  uptime: 6 minutes, since Dec 20 01:08:01 2016
  malloc: sbrk 2838528, mmap 0, used 652816, free 2185712
  worker threads: 10 of 16 idle, 6/0/0/0 working, job queue: 0/0/0/0, scheduled: 3
  loaded plugins: charon curl soup ldap pkcs11 aes des blowfish rc2 sha1 sha2 md4 md5 random nonce x509 revocation constraints pubkey pkcs1 pkcs7 pkcs8 pkcs12 pgp dnskey sshkey pem openssl gcrypt af-alg fips-prf gmp agent xcbc cmac hmac ctr ccm gcm attr kernel-netlink resolve socket-default farp stroke smp updown eap-identity eap-sim eap-sim-pcsc eap-aka eap-aka-3gpp2 eap-simaka-pseudonym eap-simaka-reauth eap-md5 eap-gtc eap-mschapv2 eap-dynamic eap-radius eap-tls eap-ttls eap-peap eap-tnc xauth-generic xauth-eap xauth-pam tnc-imc tnc-imv tnc-tnccs tnccs-20 tnccs-11 tnccs-dynamic dhcp certexpire led duplicheck radattr addrblock unity
Listening IP addresses:
  client_ip
Connections:
    L2TP-PSK:  %any...server_ip  IKEv1
    L2TP-PSK:   local:  [client_ip] uses pre-shared key authentication
    L2TP-PSK:   remote: [server_ip] uses pre-shared key authentication
    L2TP-PSK:   child:  dynamic[udp] === dynamic[udp/l2f] TRANSPORT
Security Associations (1 up, 0 connecting):
    L2TP-PSK[1]: ESTABLISHED 6 minutes ago, client_ip[client_ip]...server_ip[server_ip]
    L2TP-PSK[1]: IKEv1 SPIs: a505b49c4edac068_i* 829bf572900386be_r, pre-shared key reauthentication in 7 hours
    L2TP-PSK[1]: IKE proposal: AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_2048

So everything seems fine on the IPsec side. When starting the L2TP protocol with "echo "c connection_name" > /var/run/xl2tpd/l2tp-control", I just see some timeouts in the system log:

xl2tpd[16779]: get_call: allocating new tunnel for host server_ip, port 1701.
xl2tpd[16779]: Connecting to host server_ip, port 1701
xl2tpd[16779]: control_finish: message type is (null)(0).  Tunnel is 0, call is 0.
xl2tpd[16779]: control_finish: sending SCCRQ
xl2tpd[16779]: network_thread: select timeout
... (5x)
Maximum retries exceeded for tunnel 55245.  Closing.
network_thread: select timeout
... (5x)
Unable to deliver closing message for tunnel 55245. Destroying anyway.

Watching the traffic with

tcpdump host server_ip and port l2tp

shows only the following:

12:58:39.221494 IP client_ip.l2f > server_ip.l2f:  l2tp:[TLS](0/0)Ns=0,Nr=0 *MSGTYPE(SCCRQ) *PROTO_VER(1.0) *FRAMING_CAP(AS) *BEARER_CAP() *FIRM_VER(1680) *HOST_NAME(my_site) *VENDOR_NAME(xelerance.com) *ASSND_TUN_ID(49091) *RECV_WIN_SIZE(4)  

repeated 5 times, and later 3 times:

12:58:44.226892 IP client_ip.l2f > server_ip.l2f:  l2tp:[TLS](0/0)Ns=1,Nr=0 *MSGTYPE(StopCCN) *ASSND_TUN_ID(49091) *RESULT_CODE(1/0 Timeout)  

Obviously there is no answer from the server to the L2TP packets. But as said before, it works with several Windows clients. What could be wrong?

What can I do, to get more information about the l2tp connection?

I already switched on all debugging options in xl2tpd.conf. Here are my conf files:

ipsec.conf

conn L2TP-PSK
        keyexchange = ikev1
        authby=secret
        auto=start
        keying=1
        rekey=yes
        ikelifetime=8h
        keylife=1h
        type=transport
        left=%any
        leftprotoport=udp/%any
        right=server_ip
        rightprotoport=udp/l2tp

xl2tp.conf

[global]
access control = yes
auth file = /etc/xl2tpd/l2tp-secrets
debug avp = yes
debug network = yes
debug state = yes
debug tunnel = yes

[lac connection_name]
lns = server-ip
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd.connection_name
length bit = yes
require authentication = yes
require chap = yes
refuse pap = yes
name = my_loginname

Windows 10 Pro: RDP disconnecting every 10 - 30 seconds

Posted: 08 May 2021 09:05 PM PDT

Just looking for some brainstorming help.

I have a (fully updated) Windows 10 Pro desktop which I regularly connect to using RDP from a Mac running Microsoft Remote Desktop (latest version).

The Windows 10 Pro machine is using a static IP on 192.168.1.0/24 network.

When the Mac is on 192.168.1.0/24 as well, I can stay connected to the Windows 10 Pro machine for hours with no problem.

Sometimes I work from another site on 192.168.2.0/24 network. There is a wireless link between both sites. The network path is something like this:

Internet <- NAT <- Site1: 192.168.1.0/24 -> NAT -> 192.168.3.0/29 <- NAT <- Site2: 192.168.2.0/24

Whenever I try to connect to the Win10 PC at Site1 from the Mac at Site2, I can easily and quickly establish an RDP connection, and I can even use it just fine for anywhere from 10 to 60 seconds; then the screen freezes and I get disconnected from the Win10 PC.

You might say, well maybe I have a problem with my wireless link, but a continuous ping from Site2 to Site1 shows no problems with the connection. Even more telling, I have another RDP server running on a Win10 Pro machine, but it is completely offsite and I access it through the Internet at Site1. In other words, from Site2 through Site1 and then out the Internet, I am accessing another RDP server also running Win10, and I can stay connected to that machine for hours on end.

So what is changing from Site1 to Site2 that causes me to lose the RDP connection every time? Is it a NAT problem? The weird thing I really don't understand: if I had some critical configuration or network problem, I shouldn't be able to connect to RDP at all. Why does it let me connect without problems, function without problems for about 30 seconds, and then suddenly disconnect me seemingly without reason? It doesn't make sense.
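One avenue worth ruling out, given the "connects fine, dies under load" pattern over a wireless hop: a path MTU / fragmentation problem. A continuous ping uses small packets and would look clean, while RDP fills frames to the MTU. A hedged test (Linux ping syntax shown; on macOS the equivalent is ping -D -s 1472, and the target address is a placeholder):

```
# 1472 bytes of payload + 28 bytes of ICMP/IP headers = a full 1500-byte frame;
# if this fails while smaller sizes succeed, the path MTU is the culprit.
ping -c 3 -M do -s 1472 192.168.1.10
```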

Disable VM Autostart - XenServer

Posted: 08 May 2021 02:00 PM PDT

After running updates in XenServer 6.5 I noticed that some virtual machines that were recently turned off started turning on again automatically after the server I applied updates to restarted.

When I go to apply updates through XenCenter I get the following notification and have to disable it before I can proceed: [screenshot: notification about VM autostart]

I'd like to disable this altogether. I've also disabled high availability temporarily hoping this would do the trick but it has not.

Any suggestions/assistance would be greatly appreciated.
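For reference, a hedged sketch of how autostart is usually controlled from the XenServer 6.x CLI: the flag lives in `other-config:auto_poweron` at both the pool and the VM level (the UUIDs below are placeholders to be filled in from `xe` output):

```shell
# Disable autostart for the whole pool:
xe pool-param-set uuid=<pool-uuid> other-config:auto_poweron=false

# List VMs, then clear the flag on any machine that still has it set:
xe vm-list params=uuid,name-label
xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=false
```

If only the pool-level flag is cleared, individual VMs with the per-VM flag may still start, so checking both levels seems worthwhile.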

How to reset ufw without disabling it?

Posted: 08 May 2021 04:01 PM PDT

I'd like to reset the ufw settings back to the defaults, apply new settings, and only then reload the firewall. While I'm making the changes I'd like the firewall to keep running with its old settings.

man ufw states:

   reset  Disables and resets firewall to installation defaults.  Can also give the
          --force option to perform the reset without confirmation.

So it appears that ufw reset is not the solution, because it disables the firewall in addition to resetting to installation defaults.

I know that I can muck around with the ufw config files directly and then ufw reload. Is that the solution or is there a more idiomatic way of using ufw in this case?
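A hedged sketch of that config-file route: `/etc/ufw/user.rules` and `/etc/ufw/user6.rules` hold the rules added via the `ufw` command, and `ufw reload` applies changes without a disable window:

```shell
sudo cp -a /etc/ufw /root/ufw.bak    # rollback copy of the whole config
sudoedit /etc/ufw/user.rules         # swap the old IPv4 rules for the new set
sudoedit /etc/ufw/user6.rules        # IPv6 counterpart
sudo ufw reload                      # applies the new rules; firewall stays enabled
```

The firewall keeps running with its old rules until the `reload`, which is the behaviour asked for; whether this counts as idiomatic or as "mucking around" is a judgment call.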

ERR_CONNECTION_TIMED_OUT (unless I'm using a proxy)

Posted: 08 May 2021 06:06 PM PDT

I run my own online business as well as managing over a dozen self-hosted sites for other people using the wordpress.org platform. They're all hosted by a small company in the UK, and if I do experience any problems the company are usually quick to sort them out. However...

Right now, using Chrome or Safari (on an iMac and on a PC) I'm getting the message ERR_CONNECTION_TIMED_OUT when attempting to login to the wp-admin; or even if I just want to view the sites. It's not the first time this has happened, and I've done all the usual things - cleared the browser cache, double checked the wi-fi connection, used a 'is it down or is it just me' site etc. etc. Btw, the sites are accessible from elsewhere (but this doesn't help me, I live and work out in the sticks.) I've done pings and traceroutes and copied my hosting provider into these (no reply, yet.)

I can access the sites using a proxy (e.g. anonymouse) but can't edit them in this way, of course. Anyway, this wouldn't be a great solution; I want to be able to use Chrome or Safari. Does anyone have any ideas?
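When it next happens, a hedged way to narrow down where the TCP connection actually dies (the hostname below is a placeholder for one of the affected sites):

```shell
# Does DNS resolve, does TCP connect, and where does the time go?
curl -v --connect-timeout 15 -o /dev/null https://example-site.co.uk/

# Trace using TCP to port 443 rather than ICMP/UDP, since plain pings
# may succeed on a path that is dropping or filtering HTTP traffic:
sudo traceroute -T -p 443 example-site.co.uk
```

If `curl` hangs at "Trying <ip>..." while the TCP traceroute stops at a particular hop, that points at a block or rate-limit at that hop (often the host's firewall banning the home IP) rather than a browser problem.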

Windows Service "System error 5 has occurred. starting service"

Posted: 08 May 2021 05:01 PM PDT

I have a Windows 2012 R2 server which has been happily running a windows service for roughly 3-4 months with various build revisions going into the software.

The server configuration hasn't changed at all, however I have just started seeing the following error when manually trying to start the windows service and doing it from our build system.

System error 5 has occurred. starting service  

The event log is pretty fruitless too:

The <service name> service terminated with the following error: Access is denied.  

As mentioned above the accounts used for this have not changed. I have checked that the service folder has full permissions on it and have even tried running the service under a local account and administrator account. Both of these produce exactly the same error.

Is there any way for me to obtain more information about the problem? Nothing else on the server seems affected.
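A couple of built-in commands can surface more detail than the event log; "MyService" and the path below are placeholders for the real service name and install folder:

```bat
REM Which account does the service start under, and what is the binary path?
sc qc "MyService"
REM Dump the service's security descriptor (SDDL) to look for deny ACEs:
sc sdshow "MyService"
REM Check NTFS permissions on the service binary's folder:
icacls "C:\Path\To\Service"
```

If those look clean, running Sysinternals Process Monitor filtered to `services.exe` and the service binary during a start attempt will usually show exactly which object returns ACCESS DENIED.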

Why do my Snort logs appear to be empty?

Posted: 08 May 2021 09:05 PM PDT

So I was following this guide on how to install Snort, Barnyard 2 and the like.

I've set up Snort so it would run automatically, by editing the rc.local file:

ifconfig eth1 up

/usr/local/snort/bin/snort -D -u snort -g snort \
    -c /usr/local/snort/etc/snort.conf -i eth1
/usr/local/bin/barnyard2 -c /usr/local/snort/etc/barnyard2.conf \
    -d /var/log/snort \
    -f snort.u2 \
    -w /var/log/snort/barnyard2.waldo \
    -D

And I then restarted the computer. Snort was able to run and detect the attack, but the log files (including barnyard2.waldo) remained blank, even though a new log entry was created for each attack.

I'm not sure what went wrong here, since it's supposed to log any attacks and store it in the log directory, right?

Then, I tried changing the parameter to:

    /usr/local/snort/bin/snort -D -b -u snort -g snort \
        -c /usr/local/snort/etc/snort.conf -i eth1

And when I checked the log directory, there are two log files, one in u2 and another in tcpdump format, but they're both blank, approximately 0 bytes each.

So I thought I'd run it from the console to see if it would work from there, using this command:

/usr/local/snort/bin/snort -A full -u snort -g snort \
    -c /usr/local/snort/etc/snort.conf -i eth1

and I then checked the log file to see if it would log the attack, and it still doesn't.
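Before daemonizing, it may help to confirm that the config parses and that alerts fire at all; a hedged sketch using Snort's self-test and console-alert modes:

```shell
# -T parses the config, reports any errors, and exits without sniffing:
/usr/local/snort/bin/snort -T -c /usr/local/snort/etc/snort.conf -i eth1

# Run in the foreground with alerts printed straight to the console.
# If alerts appear here but never reach /var/log/snort, the problem is
# the output configuration or permissions, not detection:
/usr/local/snort/bin/snort -A console -u snort -g snort \
    -c /usr/local/snort/etc/snort.conf -i eth1
```

One common culprit with `-u snort -g snort` is the log directory being owned by root: `chown -R snort:snort /var/log/snort` (assuming that is the configured log dir) lets the dropped-privilege process actually write its unified2 files.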

Apache2 reverse proxy connections staying persistent, filling ssh channels

Posted: 08 May 2021 05:01 PM PDT

I have a webserver (Amazon Linux EC2 instance running Apache2), let's call it "server A", on which I have set up reverse proxy using:

# (All the appropriate modules are loaded higher up in the conf file)
# ...
ProxyRequests off
ProxyPass /booth5/ http://localhost:8005/
ProxyHTMLURLMap http://localhost:8005 /booth5

<Location /booth5/>
    ProxyPassReverse /
    SetOutputFilter  proxy-html
    ProxyHTMLURLMap  /        /booth5/
    ProxyHTMLURLMap  /booth5  /booth5
    RequestHeader    unset  Accept-Encoding
</Location>

Where localhost:8005 is a forwarded port over an ssh connection to a server sitting behind a firewall.

This setup works well and runs for a while, but after some time server A doesn't send any new requests to the proxied server.

The server connections to the proxied server are staying up:

# netstat -napt | grep 8005
tcp        0      0 127.0.0.1:8005     0.0.0.0:*          LISTEN      22675/sshd
tcp        1      0 127.0.0.1:38860    127.0.0.1:8005     CLOSE_WAIT  28910/httpd
tcp        1      0 127.0.0.1:39453    127.0.0.1:8005     CLOSE_WAIT  28548/httpd
tcp        1      0 127.0.0.1:44596    127.0.0.1:8005     CLOSE_WAIT  28542/httpd
tcp        1      0 127.0.0.1:38774    127.0.0.1:8005     CLOSE_WAIT  28549/httpd
tcp        1      0 127.0.0.1:39997    127.0.0.1:8005     CLOSE_WAIT  29889/httpd
tcp        1      0 127.0.0.1:39135    127.0.0.1:8005     CLOSE_WAIT  28544/httpd
tcp        0      0 ::1:8005           :::*               LISTEN      22675/sshd

I believe this is "using up" all the channels on the ssh tunnel and I want server A to behave in a way that it sends http requests to the proxied server as necessary, but then clears the connections.

Initially I suspected this was due to Apache on the proxied server doing persistent connections, so I updated the config there to include:

    # Timeout: The number of seconds before receives and sends time out.
    # Timeout 300
    Timeout 30

    # KeepAlive: Whether or not to allow persistent connections (more than
    # one request per connection). Set to "Off" to deactivate.
    KeepAlive On

    # MaxKeepAliveRequests: The maximum number of requests to allow
    # during a persistent connection. Set to 0 to allow an unlimited amount.
    # We recommend you leave this number high, for maximum performance.
    #MaxKeepAliveRequests 100
    MaxKeepAliveRequests 6

    # KeepAliveTimeout: Number of seconds to wait for the next request from the
    # same client on the same connection.
    KeepAliveTimeout 5

I haven't tried setting KeepAlive Off yet. I was trying to get some benefit from short/persistent connections, but they're not closing.

Is Apache config the correct place to solve this? Is it instead part of the ssh config for the tunnel? (config for that can be provided if needed).
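Apache's side of this can be addressed in the proxy config itself: mod_proxy pools backend connections for reuse by default, and (assuming Apache 2.2 or later on server A) the `ProxyPass` worker options control that. A hedged sketch:

```apache
# disablereuse=On makes mod_proxy close each backend connection after
# the request completes instead of keeping it pooled for reuse:
ProxyPass /booth5/ http://localhost:8005/ disablereuse=On

# Alternatively, this environment flag disables HTTP keep-alive toward
# the proxied backend:
SetEnv proxy-nokeepalive 1
```

The CLOSE_WAIT states in the netstat output mean the far end already closed and httpd is holding the socket open, which is consistent with pooled-but-dead backend connections; forcing mod_proxy to close after each request should keep the tunnel's channels from filling.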

Samba group doesn't appear on Network Neighborhood

Posted: 08 May 2021 08:02 PM PDT

I have a Samba server (Samba version 3.6.9-151.el6). The server has multiple IP addresses and uses a DNS proxy for name resolution.

I have two problems:

  1. The Samba share works with the IP but not with the hostname from Windows XP.
  2. Samba group doesn't appear on Network Neighborhood

My dns works and I'm able to make name resolution on all my ip address.

Only PCs on the 192.168.1.0/24 network see the Samba shared folder; the PCs on the 192.168.168.0 and 172.16.0.0 networks don't see it.

Below is the relevant part of my smb.conf:

workgroup = SERVER
server string = ServerXXX Samba Server Version %v
hosts allow = 127. 192.168.1. 192.168.168. 172.16.0.
deadtime = 0
keepalive = 300
lanman auth = yes
client lanman auth = yes
local master = yes
preferred master = no

wins support = yes
dns proxy = yes
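One relevant detail: browse-list broadcasts do not cross subnets on their own, so the 192.168.168.0 and 172.16.0.0 clients must reach this box's WINS service explicitly. A hedged smb.conf sketch (the netmasks and broadcast addresses are assumptions about your networks):

```ini
[global]
    # Bind Samba to every subnet the clients live on:
    interfaces = lo 192.168.1.0/24 192.168.168.0/24 172.16.0.0/16
    bind interfaces only = yes
    # Already set above; clients on the remote subnets must then be
    # configured (in their TCP/IP settings) to use this server's IP
    # as their WINS server:
    wins support = yes
    # Push browse announcements to the remote subnets' broadcast addresses:
    remote announce = 192.168.168.255 172.16.255.255
```

Without WINS (or an lmhosts entry) on the XP clients, hostname access and Network Neighborhood browsing both fall back to local-subnet broadcasts, which would explain both symptoms.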

Automate mounting a persistent CIFS drive natively on Windows.

Posted: 08 May 2021 04:01 PM PDT

Trying to create a script to automate mounting CIFS shares as drives on Windows Server 2008/2012. The share requires a login (unfortunately, AD cannot be used) and needs to be mounted as a persistent drive that survives reboots.

Windows allows the following:

net use x: \\10.243.212.19\demo_nas_share /USER:username password /PERSISTENT:YES  

However, the above won't save the credential for the next boot. We need to use

net use x: \\10.243.212.19\demo_nas_share /SAVECRED /PERSISTENT:YES  

But this command only accepts the login details via a prompt, which is difficult to drive from a script. I'm not sure if a default Windows Server install has a native tool like 'Expect' to automate this. I'd like to avoid installing a third-party utility.

NOTE: You cannot combine /USER and /SAVECRED. This apparently was supported in some older versions of Windows, though.

The other commonly suggested solution is to put the command into the startup folder, but I don't want to expose the password in plain text.

Can anyone recommend a native solution?
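One native possibility (hedged, and untested here) is `cmdkey`, which is built into Windows and stores a credential in Credential Manager non-interactively, so `net use` no longer needs `/SAVECRED` or an inline password:

```bat
REM Store the credential once, keyed by target host. The password is
REM still plain text while this script runs, but afterwards it lives
REM in Credential Manager rather than in a startup file:
cmdkey /add:10.243.212.19 /user:username /pass:password

REM Mount without embedding credentials; Windows looks them up by target:
net use x: \\10.243.212.19\demo_nas_share /PERSISTENT:YES
```

The `cmdkey` step only needs to run once per machine/user, so the recurring startup script (if any) never has to contain the password.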

MySQL Memory Limit Windows Server 2003

Posted: 08 May 2021 07:02 PM PDT

I am running MySQL 5.0.51a on Windows Server 2003 Standard Edition on an HP DL580 G4 with 3GB installed. One of my database tables has grown to 5.3 GB with an index file of 2.5 GB, which I believe is causing MySQL to be slow due to having to constantly load and unload the index file when updates are made to the table. The server itself seems to be performing OK because MySQL is only using about 500MB of memory (there are other apps running on the system, but MySQL uses the most memory).

The table is fairly active, with new records being added throughout the day but no deletes, ever. The MySQL server allows up to 600 connections, but only a small number (10 or 20) would actually be writing to this table. I increased the memory limits in MySQL, but since the max connections is so high I don't think I can give each connection 1GB without risking a problem. Is there some tuning that would let just certain connections get a lot of memory?

So I have started to look for alternatives to avert the crisis I know is coming soon. Some of the options I have:

  1. Upgrade to Server 2003 Enterprise to install 64GB of memory. Question: would 32-bit MySQL be able to access more than 2GB? Would that be 2GB per thread? That would still be smaller than the index file size, so it might not solve the problem completely, but it would be better than now.

  2. Upgrade to Server 200x 64 bit and MySQL 64 bit.

  3. Switch to a *nix 64 bit server.

If anybody has suggestions for things to do in the meantime, opinions on which way to go, or other things that I have overlooked I would appreciate the help.
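For the interim, one detail worth knowing: assuming the big table is MyISAM (it has a separate index file), `key_buffer_size` is a single cache shared by all connections, so raising it does not multiply by `max_connections` the way per-thread buffers do. A hedged my.ini sketch for a 32-bit build on a 3GB box:

```ini
[mysqld]
# Shared MyISAM index cache; allocated once, not per connection.
# Sized to hold a good fraction of the 2.5GB index file while staying
# inside the 32-bit process address-space limit:
key_buffer_size = 1024M
# Per-thread buffers: keep these modest with 600 connections allowed.
sort_buffer_size = 2M
read_buffer_size = 1M
```

This distinction (global vs. per-thread buffers) is exactly the lever for "let the index get a lot of memory without giving every connection a lot of memory".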

Thanks

How to create a very simple external FastCGI configuration in apache?

Posted: 08 May 2021 08:02 PM PDT

I have an externally started FastCGI application that listens on socket '/tmp/foo.sock' and a directory of static files in '/srv/static'. Apache has all needed permissions on the socket and the directories.

What I need : All requests starting with '/static' should be handled by apache using the contents of '/srv/static'. All other requests should be handled by the FastCGI application. Here is my current virtual host configuration:

<VirtualHost *:80>
    ServerAdmin foo@bar.com
    ServerName www.foo.com
    ServerAlias foo.com

    Alias /static /srv/static

    FastCgiExternalServer /* -socket /tmp/foo.sock

    ErrorLog /var/log/apache2/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog /var/log/apache2/access.log combined
</VirtualHost>

Even though this seems simple, it's giving me quite the headache. According to http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiExternalServer, the first parameter to 'FastCgiExternalServer' should be a 'filename' that, when matched, will cause Apache to delegate the request to the external FastCGI app. What am I missing here?
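For comparison, a hedged sketch of the pattern commonly used with mod_fastcgi: `FastCgiExternalServer` takes a filesystem path (which need not actually exist) rather than a wildcard, and a catch-all `Alias` maps everything else onto it; `/srv/app.fcgi` below is a hypothetical placeholder:

```apache
FastCgiExternalServer /srv/app.fcgi -socket /tmp/foo.sock

# mod_alias matches Alias directives in config order, first match wins,
# so the /static alias must come before the catch-all:
Alias /static /srv/static
Alias / /srv/app.fcgi/
```

With this layout, `/static/...` is served from `/srv/static` by Apache, and every other URL maps to the fake `/srv/app.fcgi` path, which mod_fastcgi recognizes and hands to the external application over the socket.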

How do I get the current Unix time in milliseconds in Bash?

Posted: 08 May 2021 04:24 PM PDT

How do I get the current Unix time in milliseconds (i.e. the number of milliseconds since the Unix epoch, January 1 1970)?
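With GNU coreutils `date` (standard on Linux), `%N` gives nanoseconds and `%3N` truncates that to milliseconds; BSD/macOS `date` lacks `%N`, so a hedged portable fallback only scales whole seconds:

```shell
# GNU date: epoch seconds + zero-padded milliseconds in one call.
ms=$(date +%s%3N)
echo "$ms"

# Portable fallback (second precision only, expressed in milliseconds):
ms_fallback=$(($(date +%s) * 1000))
echo "$ms_fallback"
```

Note that running `date +%s` and `date +%N` as two separate calls would sample the clock twice and can straddle a second boundary, which is why the single `+%s%3N` format is preferable where available.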

Remove 1 Disk From 4 Disk RAID 5 Array

Posted: 08 May 2021 08:24 PM PDT

I'm using a PERC 3/DC controller to run a RAID 5 array with 4 hard disks. I am hoping to change this to 3 disks in the array and 1 hot spare. Is it possible to remove 1 disk from the array, reconfigure it as a hot spare, then reconfigure the RAID 5 array to use 3 disks WITHOUT losing any data? I have backups, but I would rather just reconfigure it without going through the hassle of restoring data. Thanks!
