Thursday, March 25, 2021

Recent Questions - Server Fault



How to test the tags?

Posted: 25 Mar 2021 09:49 PM PDT

This is to check whether it works on our community platform and how it works. Please ignore this.

AIX Samba user access getpwuid failed

Posted: 25 Mar 2021 09:46 PM PDT

I have installed Samba 4.12.10 via yum in AIX 7.2. I have also installed kerberos package to authenticate samba with kerberos.

My objective is to allow users access to folders/files on AIX from their Windows machines.

# yum list installed | grep samba
samba.ppc                  4.12.10-2   @AIX_Toolbox_72
samba-client.ppc           4.12.10-2   @AIX_Toolbox_72
samba-common.ppc           4.12.10-2   @AIX_Toolbox_72
samba-devel.ppc            4.12.10-2   @AIX_Toolbox_72
samba-libs.ppc             4.12.10-2   @AIX_Toolbox_72
samba-winbind.ppc          4.12.10-2   @AIX_Toolbox_72
samba-winbind-clients.ppc  4.12.10-2   @AIX_Toolbox_72

# yum list installed | grep winbin
samba-winbind.ppc          4.12.10-2   @AIX_Toolbox_72
samba-winbind-clients.ppc  4.12.10-2   @AIX_Toolbox_72

# yum list installed | grep krb5
krb5-devel.ppc             1.18.3-1    @AIX_Toolbox
krb5-libs.ppc              1.18.3-1    @AIX_Toolbox
krb5-server.ppc            1.18.3-1    @AIX_Toolbox
krb5-server-ldap.ppc       1.18.3-1    @AIX_Toolbox
krb5-workstation.ppc       1.18.3-1    @AIX_Toolbox

However, when I try to access the AIX server in Windows File Explorer as \\pc96p9 (pc96p9 is my AIX machine name), it shows "access is denied" even though a correct domain username and password are provided.

Then I checked the Samba log /etc/samba/log.10.161.139.74 (10.161.139.74 is the Windows machine accessing AIX) and found the following error:

[2021/03/26 12:07:51.353238, 0] ../../source3/auth/token_util.c:567(add_local_groups)
  add_local_groups: SID S-1-5-21-2693943023-2014060074-1703039353-34220 -> getpwuid(100000) failed, is nsswitch configured?
[2021/03/26 12:07:51.353328, 3] ../../source3/auth/token_util.c:403(create_local_nt_token_from_info3)
  Failed to add local groups
[2021/03/26 12:07:51.353351, 1] ../../source3/auth/auth_generic.c:174(auth3_generate_session_info_pac)
  Failed to map kerberos pac to server info (NT_STATUS_NO_SUCH_USER)
[2021/03/26 12:07:51.353424, 3] ../../source3/smbd/smb2_server.c:3280(smbd_smb2_request_error_ex)
  smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_ACCESS_DENIED] || at ../../source3/smbd/smb2_sesssetup.c:146
[2021/03/26 12:07:51.354653, 3] ../../source3/smbd/server_exit.c:250(exit_server_common)
  Server exit (NT_STATUS_CONNECTION_RESET)

Here is my /etc/krb5.conf:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = MY-OA.MY.ORG.HK
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 MY-OA.MY.ORG.HK = {
  kdc = MYIFS28.MY-OA.MY.ORG.HK
  admin_server = MYIFS28.MY-OA.ORG.HK
 }

[domain_realm]
 .my.org.hk = MY.ORG.HK
 my.org.hk = MY.ORG.HK

Here is my /etc/samba/smb.conf:

[global]
        realm = my-oa.my.org.hk
        netbios name = pc96p9
        workgroup = MY-OA
        realm = MY-OA.MY.ORG.HK
        password server = 10.67.1.92
        server services = rpc, nbt, wrepl, ldap, cldap, kdc, drepl, winbind, ntp_signd, kcc, dnsupdate, dns, s3fs
        security = ads
        idmap uid = 100000-200000
        idmap gid = 100000-200000
        template homedir = /home/%U
        template shell = /usr/bin/bash
        winbind use default domain = yes
        winbind offline logon = false
        winbind enum users = yes
        winbind enum groups = yes
        domain master = no
        local master = no
        preferred master = no
        socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_SNDBUF=32768 SO_RCVBUF=32768
        os level = 0
        wins server = 10.67.1.92
        encrypt passwords = yes
        server signing = auto
        log file = /var/log/samba/log.%m
        log level = 3
        max log size = 50

[data]
        comment = Public Data Share
        path = /data1/winshare
        public = yes
        writable = yes
        inherit acls = yes
        inherit permissions = yes
        printable = no

And here is my /etc/nsswitch.conf:

passwd:     files winbind
shadow:     files winbind
group:      files winbind
hosts:      files dns wins

Actually, we have Samba 3.6 running fine in an AIX 7.1 production environment; the above three configuration files were copied directly from AIX 7.1 (Samba 3.6) to the new AIX 7.2 (Samba 4.12).

Can anyone please let me know if there is anything wrong in my samba configuration? Thanks in advance.
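One hedged pointer, not from the thread: the `idmap uid`/`idmap gid` syntax carried over from Samba 3.6 was replaced in Samba 4 by per-domain `idmap config` blocks, and `getpwuid()` failing for a uid inside the winbind range often means the mapping backend never initialised. A minimal sketch of the modern equivalent, reusing the domain name and range from the question (the backend choice is an assumption and needs checking against your environment):

```
[global]
    # Samba 4 replacement for "idmap uid/gid = 100000-200000"
    idmap config * : backend = tdb
    idmap config * : range = 100000-200000
    idmap config MY-OA : backend = rid
    idmap config MY-OA : range = 100000-200000
```

After changing the idmap configuration, `wbinfo -i <user>` and `getent passwd <user>` are the usual checks that winbind is actually resolving domain users through NSS.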

"Urgent!!! My website hosting in GCP is down! Cannot access to my website~ "

Posted: 25 Mar 2021 09:29 PM PDT

Can anyone help?

My website: www.fishersearch.com
Status: down and cannot be accessed

The site is hosted on Google Cloud Platform.

Rewriting based on GET request, taking part after last slash, and redirecting to a different page as GET parameter

Posted: 25 Mar 2021 09:13 PM PDT

I would like to rewrite all URLs whose query string contains a "route" parameter, capture the string after the last slash, and redirect to search results based on the captured string.

https://www.example.com/index.php?_route_=Specialty%20Diet/healthy-food-Vegan/Some-Vitamin-20mg-100-Vegetable-Capsules  

Looking for below output:

https://www.example.com/catalogsearch/result/?q=Some-Vitamin-20mg-100-Vegetable-Capsules  

What I have so far:

RewriteCond %{QUERY_STRING} _route_=(.*\/([^\/]+)\/?) [NC]
RewriteCond %{REQUEST_URI}  index\.php [NC]
RewriteRule .*  /catalogsearch/result/?q=%1 [R=301,L,NC,QSA]

Result I'm getting:

https://www.example.com/catalogsearch/result/?q=%1&_route_=Specialty%20Diet/healthy-food-Vegan/Some-Vitamin-20mg-100-Vegetable-Capsules  
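A hedged explanation for the literal %1: in mod_rewrite, %N refers to the capture groups of the last matched RewriteCond, and here the last condition (index\.php) has no groups. The stray &_route_=... comes from the QSA flag re-appending the original query string. Swapping the condition order, capturing only the final path segment, and dropping QSA might look like this (untested sketch):

```
RewriteCond %{REQUEST_URI}   index\.php [NC]
RewriteCond %{QUERY_STRING}  _route_=.*/([^/&]+)/?$ [NC]
RewriteRule .*  /catalogsearch/result/?q=%1 [R=301,L,NC]
```

Without QSA, the query string in the substitution replaces the original one entirely, which matches the desired output.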

Cloud run as a proxy

Posted: 25 Mar 2021 08:35 PM PDT

Hi, I am using Cloud Run as a proxy: an nginx image acting as a reverse proxy for my Kubernetes services. Previously I pointed it at my external load balancer IP. Now I want to point it at my internal load balancer IP instead. I have created an internal load balancer and set up my Serverless VPC connector, but when I run my Cloud Run service it gives me a 501 error. My configuration is below.

server {
    listen 80;
    location / {
        proxy_pass [InternalIp];
    }
}

That is my nginx.conf file. And here is my Dockerfile:

FROM nginx
COPY ./nginx.conf /etc/nginx/conf.d/default.conf

Any help is appreciated.

How do I clone database schemas into a new database in the same Google Cloud PostgreSQL instance?

Posted: 25 Mar 2021 08:22 PM PDT

I'm new to postgres.

But I have a project that's using it. It has tables, constraints, triggers, and functions. Let's call this one DB1 for simplicity's sake.

Now I want to create a new project based on the existing one. When it comes to the database, I need to create exactly the same database as the existing one, but with empty data, just like a fresh start of the project, so any incremental number value starts from 0. Let's call the new database DB2.

Both projects are using the same Google Cloud PostgreSQL instance. So I went to GC console and created DB2 in the same instance.

Then I read through the documentation here and here, but it covers exporting and importing the whole database (including the data). In my case, I want a new database with empty data, but with all tables, constraints, triggers, and functions in place. As I understand it, what I need to copy are the schemas from DB1 to DB2 (please correct me if I'm wrong, though).

Any help will be appreciated.
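One standard approach worth sketching: pg_dump with --schema-only emits only the DDL (tables, constraints, triggers, functions, sequences) and none of the data, and its output can be piped straight into the new database on the same instance. The host placeholder below is an assumption; it would be the Cloud SQL instance's address or a local proxy:

```shell
# Dump DDL only from DB1 and replay it into the empty DB2
pg_dump --host=<instance-ip> --username=postgres --schema-only DB1 \
  | psql --host=<instance-ip> --username=postgres DB2
```

Sequences are recreated at their initial values, so incremental counters restart, which matches the fresh-start requirement here.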

Is the ppa:ondrej/php repository safe?

Posted: 25 Mar 2021 09:11 PM PDT

I installed the ppa:ondrej/php repository for PHP and performed an apt-get upgrade. Little did I know it would install a lot of library packages I know nothing about, so I was wondering whether they are safe to keep on the system or not.

What method can be used to resolve NMI backtrace for CPU 15?

Posted: 25 Mar 2021 07:55 PM PDT

A Xen Server virtual machine stopped responding to ping. Viewing the machine's console from the Xen Server Console panel shows the following:

NMI backtrace for cpu 15

CPU: 15 PID: 114 Comm: kworker/15:1 Tainted: G 3.16.0-11-amd64 #1 Debian 3.16.84-1

Hardware name: Xen HVM domU, BIOS 4.7.6-6.4 03/01/2019

I had to restart the virtual machine to fix the problem. This virtual machine mainly runs Docker workloads.

What I want to ask is: what causes this problem? How can it be solved? And what can be done to prevent it from happening again?

Do glue records/child hosts override DNS wildcard entries or A records at the domain names DNS servers records?

Posted: 25 Mar 2021 07:38 PM PDT

Do glue records/child hosts override DNS wildcard entries or A records at the domain's own DNS servers?

Example: ns1.example.com = 1.1.1.1 as a glue record/child host at the registrar's DNS

ns1.example.com = 2.2.2.2 as a wildcard entry at example.com's DNS server

If I resolve ns1.example.com via 8.8.8.8 or any external DNS resolver on the Internet, will it go to 1.1.1.1 or 2.2.2.2?

If so, which RFC states this policy?
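A hedged way to see the two answers side by side (a.gtld-servers.net is one of the real .com parent servers; for a different TLD the parent server would differ): query the parent zone without recursion to see the glue, then query the authoritative server and a public resolver:

```shell
dig +norecurse @a.gtld-servers.net ns1.example.com A   # glue held at the parent (.com) servers
dig @ns1.example.com ns1.example.com A                 # the child zone's own authoritative answer
dig @8.8.8.8 ns1.example.com A                         # what recursive resolvers actually return
```

Comparing the three outputs shows which record resolvers end up caching in practice.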

Docker Service Postgres migrate 9.6-alpine data to 13.2-alpine

Posted: 25 Mar 2021 07:30 PM PDT

I upgraded a while back from the postgres 9.5 Docker service to 9.6-alpine, but cannot seem to get 9.6-alpine upgraded to 13.2-alpine.

Steps taken so far, without success:

#!/bin/bash
# Upgrade test script 9.6-alpine to 13.2-alpine
#
docker exec greenlight_db_1 /usr/local/bin/pg_dump -U postgres -Fc postgres -f /var/lib/postgresql/data/production.dump
docker-compose down

# Make backups
cp -v -a db db.bak
mv -v db/production/production.dump .
rm -v -r db/

# Upgrade to 13.2-alpine
sed -i 's+9.6-alpine+13.2-alpine+g' docker-compose.yml
docker-compose up -d
docker-compose down

# Fix security
sed -i 's+md5+trust+g' db/production/pg_hba.conf
docker-compose up -d

# Wait for image to completely load before drop
echo -e "\x1B[96m ## Loading image. You will be prompted when complete... ## \x1B[0m"
sleep 90s # Waits 90 seconds.
read -p $'\e[1;33m## Image Loaded. Press [Enter] to continue. ##\e[0m '
docker exec greenlight_db_1 /usr/local/bin/psql -U postgres -c "DROP DATABASE greenlight_production;"
mv production.dump db/production/
docker exec greenlight_db_1 /usr/local/bin/pg_restore -U postgres -l /var/lib/postgresql/data/production.dump

Current 9.6-alpine docker-compose.yml

version: '3'

services:
  app:
    entrypoint: [bin/start]
    image: redzed:release-v2.8.2
    container_name: bayden10-v2.8.2
    env_file: .env
    restart: unless-stopped
    ports:
      - 127.0.0.1:5000:80
# When using external logging
#    logging:
#      driver: $LOG_DRIVER
#      options:
#        syslog-address: $LOG_ADDRESS
#        tag: $LOG_TAG
    volumes:
      - ./log:/usr/src/app/log
      - ./storage:/usr/src/app/storage
#      - ./terms.md:/usr/src/app/config/terms.md
# When using sqlite3 as the database
#      - ./db/production:/usr/src/app/db/production
# When using postgresql as the database
    links:
      - db
  db:
    image: postgres:9.6-alpine
    restart: unless-stopped
    ports:
      - 127.0.0.1:5432:5432
    volumes:
      - ./db/production:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=[password]

Results

;
; Archive created at 2021-03-23 16:05:50 UTC
;     dbname: postgres
;     TOC Entries: 9
;     Compression: -1
;     Dump Version: 1.13-0
;     Format: CUSTOM
;     Integer: 4 bytes
;     Offset: 8 bytes
;     Dumped from database version: 9.6.21
;     Dumped by pg_dump version: 9.6.21
;
;
; Selected TOC Entries:
;
3; 2615 2200 SCHEMA - public postgres
2120; 0 0 COMMENT - SCHEMA public postgres
1; 3079 12390 EXTENSION - plpgsql
2121; 0 0 COMMENT - EXTENSION plpgsql

I'm not familiar enough with postgres or docker to start diving into different dump and upgrade methods. Any help would be appreciated.
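One observation on the script's last step: pg_restore -l only prints the archive's table of contents, which is exactly the listing shown under Results; it does not restore anything. Actually loading the dump into the new 13.2 cluster needs -d with a target database. A sketch reusing the container and paths from the script:

```shell
# Restore the custom-format dump into the "postgres" database
docker exec greenlight_db_1 /usr/local/bin/pg_restore -U postgres \
    -d postgres /var/lib/postgresql/data/production.dump
```

Run it after the new cluster is up; pg_restore will report any objects that fail to recreate across the version jump.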

~b10

Replicating Active Directory users between subdomains, Windows 2008 to Windows 2019

Posted: 25 Mar 2021 07:00 PM PDT

I have a question. I have one forest named xyz.co.id, and in this forest I have created a subdomain fa.xyz.co.id (Windows Server 2008 R2) and dc.xyz.co.id (Windows Server 2019). The Active Directory user objects are located in fa.xyz.co.id. My question is: how can I replicate my Active Directory users from fa.xyz.co.id to dc.xyz.co.id? Can anybody help me?

I tried to check in Active Directory Sites and Services on fa.xyz.co.id, but dc.xyz.co.id is not listed as a server.

Is it possible to generate a team meeting URL with a specific user?

Posted: 25 Mar 2021 06:47 PM PDT

Here is Patrick.

I would like to ask: is it possible to generate a Teams meeting URL for a specific user accordingly?

Let's say, a meeting for an hour.

Thanks, Patrick Fung

IOWAIT, High CPU in a debian 10 VM after reinstall

Posted: 25 Mar 2021 06:24 PM PDT

For some reason I had to reinstall Debian on my Linux VM, and that is when the problems began. Before the reinstall there weren't any problems like this.

I'm running a Webserver (Apache/Nginx), VoIP (TeamSpeak) on this VM (VMware).

The problem: when I run a basic install command, move some files, or clients download something from the webserver, CPU usage and HDD usage jump a lot and the HDD LED is constantly lit. As you can see in the screenshot, there is about 6 MB/s write speed with high CPU and 100% HDD utilization, and the process responsible is jbd2/sda1-8. The main reason I want to fix this is that when it happens, VoIP users experience lagging voice, and it slows every service on this server. These problems occur only in the VM; on the host there is no problem even when the HDD is doing about 200 MB/s of I/O.

Here is a screenshot when i'm used apt install nodejs npm: Click here to view the screenshot

Server Host Specification:

  • i5-4590 - 4 Core
  • 16 GB RAM
  • 128 GB SSD
  • 1TB HDD
  • Windows Server 2019

VM Specification:

  • 3 Core of i5-4590
  • 6 GB RAM
  • 116GB SSD (/dev/sda1)
  • 180GB HDD (/dev/sdb1)
  • Debian 10

What I tried (but did not work):

  • Installed open-vm-tools package.
  • Moved the VoIP server files to the SSD to avoid lagging.

FreeRADIUS cannot bind to FreeIPA

Posted: 25 Mar 2021 06:07 PM PDT

I have installed FreeRADIUS and FreeIPA on the same machine running Fedora 33. IPA is working as expected, and clients can join and authenticate. The LDAP command-line tools (ldapsearch, ldapmodify) can successfully bind to the server, both locally and over the network, using the same credentials, but when I try to start radiusd (either in debug mode or as a daemon) I get the error "Server is busy". CPU and RAM usage during testing are less than 10%, so I don't think the server is overloaded. Below is the LDAP instantiation log. I have tried to find a log of attempted LDAP binds, but have been unsuccessful.

Any advice would be greatly appreciated!

> # Instantiating module "ldap" from file /etc/raddb/mods-enabled/ldap
>   rlm_ldap: libldap vendor: OpenLDAP, version: 20450
>   accounting {
>     reference = "%{tolower:type.%{Acct-Status-Type}}"
>   }
>   post-auth {
>     reference = "."
>   }
>   rlm_ldap (ldap): Initialising connection pool
>   pool {
>     start = 5
>     min = 3
>     max = 32
>     spare = 10
>     uses = 0
>     lifetime = 0
>     cleanup_interval = 30
>     idle_timeout = 60
>     retry_delay = 30
>     spread = no
>   }
>   rlm_ldap (ldap): Opening additional connection (0), 1 of 32 pending slots used
>   rlm_ldap (ldap): Connecting to ldap://[servername hidden]:389
>   rlm_ldap (ldap): Waiting for bind result...
>   rlm_ldap (ldap): Bind with [credentials hidden] to ldap://[servername hidden]:389 failed: Server is busy
>   rlm_ldap (ldap): Opening connection failed (0)
>   rlm_ldap (ldap): Removing connection pool
>   /etc/raddb/mods-enabled/ldap[8]: Instantiation failed for module "ldap"

EBS target response time increasing before CPU utilization

Posted: 25 Mar 2021 05:59 PM PDT

I'm at a company where we have an Elastic Beanstalk configuration, and it works fine with our CI/CD. The only issue is that earlier today I ran a stress test (basically just disabled AWS Shield and went hammer on the DDoS). Regardless of what we did, we couldn't get our medium-sized server to crash; this was from a machine with an i9 CPU (so just a gaming PC) running a multithreaded Python script sending GET requests. We then downgraded to t2.small, because regardless of what we put the servers through, the target response time rose to almost 8 seconds before CPU utilization got over 50%, every time. The autoscaling works as it's supposed to, but even after the scale-up (2 servers per trigger) the response time was still around 4-6 seconds. The trigger we're using right now creates 2 new instances if the response time exceeds 1.5 seconds, then cools down for 360 seconds once it drops below 1 second.

The system is running PHP with apache2.4. There hasn't really been made the biggest configs other than virtualhost configs.

It can't be the DB, since its reads and writes are both below 0.1 seconds. I can't seem to figure out how to get the response time down.

My Virtual Machine is not accessible today via my domain (server down?)

Posted: 25 Mar 2021 05:36 PM PDT

I wonder if someone knows how to restart the server to see if the problem can be fixed. Thank you in advance for your help; I am a beginner on this cloud platform.

How to repair 'net start task scheduler' is invalid?

Posted: 25 Mar 2021 06:32 PM PDT

I am using Windows Task Scheduler to automate my R script to convert PDF to Excel and it does not work.

I realized that net start task scheduler has a problem, since it reports the service name is invalid, but I do not know how to fix it.

What should I do when net start task scheduler says the name is invalid, while net start lanmanserver reports the service has already been started?

C:\WINDOWS\system32>net start task scheduler
The service name is invalid.

More help is available by typing NET HELPMSG 2185.

C:\WINDOWS\system32>net start lanmanserver
The requested service has already been started.

More help is available by typing NET HELPMSG 2182.
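For what it's worth, net start takes a service's internal name, not its display name, and the Task Scheduler service's internal name is Schedule:

```shell
net start schedule
```

"task scheduler" fails because that is (part of) the display name; sc query schedule shows the mapping between the two.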

iptables blocks iptables conflict

Posted: 25 Mar 2021 09:16 PM PDT

I have iptables running on my CentOS 7 server, and I'm looking to block bots. I use this command:

iptables -A INPUT -s 70.42.131.0/24 -j DROP;  

This should normally block the range 70.42.131.0/24. However, when I try to reach a website from my IP I cannot access it, and when I add an ACCEPT tcp from anywhere rule, the blocked IP can reach the website. Now I'm confused: if I block a range, should I remove the ACCEPT-from-anywhere rule, or does iptables depend on the rule's line number in the table to take effect?

Chain INPUT (policy DROP)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:websm
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https
ACCEPT     gre  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     icmp --  anywhere             anywhere
DROP       tcp  --  anywhere             static.76.1.16.24.clients.your-server.de  tcp dpt:http
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
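Rule order is indeed what matters here: iptables evaluates a chain top-down and stops at the first matching terminating target, so a DROP appended with -A lands after the broad ACCEPT rules and never fires for traffic those rules already accept. Inserting the DROP at the head of the chain avoids that:

```shell
# -I inserts at position 1, ahead of the ACCEPT rules; -A appends at the end
iptables -I INPUT 1 -s 70.42.131.0/24 -j DROP
```

There is no need to remove the ACCEPT rules; they simply must come after the DROP for the blocked range.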

How to query Oracle VM REST API with curl?

Posted: 25 Mar 2021 08:54 PM PDT

I've searched the Oracle VM documentation, but I wasn't able to find any example of how to query the Oracle VM REST API with curl or any similar tool: https://hostname:port/ovm/core/wsapi/rest/
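A hedged first probe, based on the base path in the question: the OVM Manager REST API generally accepts HTTP basic authentication, so a curl call against one of its collections should return JSON. The port, credentials, and the Vm collection path below are assumptions to verify against your installation's docs:

```shell
# -k skips certificate verification for the manager's self-signed cert
curl -k -u admin:password \
  -H 'Accept: application/json' \
  'https://hostname:7002/ovm/core/wsapi/rest/Vm'
```

If authentication succeeds, the same pattern works for other collections under /rest/.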

How to access internet from VM instance in Openstack?

Posted: 25 Mar 2021 05:54 PM PDT

I've been scratching my head for the last 5 days, almost went to the end of the Internet, read a lot of tutorials, and did many reinstallations and reconfigurations of OpenStack, but at the end of the day I could not resolve this problem... So I think you guys (and girls) are my last hope.

Okay to the point.

  • LAN - 192.168.0.0/24
  • Router IP (Gateway) - 192.168.0.1
  • MacOS Laptop - 192.168.0.192 / Has internet access
  • CentOS 7 running on VirtualBox with promiscuous mode set for all and with ip 192.168.0.22 / Has internet access

The problem is that I cannot ping my physical router (192.168.0.1) or access the Internet from any of my VM instances, not even from the external router (Linux namespace). What is weird is that I can ping the VM instances (which have floating IPs) from my MacOS laptop and the CentOS 7 VM. There is also no problem SSHing to them in both cases. In addition, firewalld is disabled on my host machine, ipv4 port forwarding is set to 1, and port security settings are configured to allow traffic on ports 80 and 20 plus ICMP, both ingress and egress.

Hope this set of configurations will help you find the bug. Thanks in advance !

My ip a command on CentOS 7: https://textuploader.com/16d1u

My packstack answer file: https://textuploader.com/16d1g

My /etc/sysconfig/network-scripts/ifcfg-br-ex configuration: https://textuploader.com/16d1p

My /etc/sysconfig/network-scripts/ifcfg-enp0s3 configuration: https://textuploader.com/16d1z

brctl show and ovs-vsctl show: https://textuploader.com/16d1i

neutron net-list: https://textuploader.com/16dta

Network topology: https://ibb.co/drJS3Bf

UPDATE 19.01.20

I've created a new external network in the OpenStack environment, but this time as the gateway I used 192.168.0.22, which is my CentOS host IP (before, I used 192.168.0.1). After this, I can ping my physical router (192.168.0.1) from anywhere (VM instances, router namespace), but still cannot ping 8.8.8.8... When trying to, I get a "Redirect Host" message. -> https://pastebin.com/bSQhbkBc

AWS ECS: Unable to place task

Posted: 25 Mar 2021 07:00 PM PDT

I am trying to set up an AWS service with autoscaling. I have created a cluster with an application load balancer and created a task using a Docker image that should listen on port 8080. I have created a service based on that task, for which I have set the minimum and desired number of instances to 1 and the maximum to 10, and created rules for scaling up and down. However, no new instances are created, and all I get in the list of events at regular intervals is:

service microrecieverservice was unable to place a task because no container instance met all of its requirements. The closest matching container-instance 97d97ce9-967d-49ad-83ad-f4f904aae1f6 is already using a port required by your task. For more information, see the Troubleshooting section.

I have not been able to find anything relevant in the troubleshooting section. I was able to manually add another instance to the cluster, but with no change in the events. I could SSH into this instance, and there were two Docker images: one was amazon-ecs-agent:latest and the other was my task definition. At this point I tried sending a REST request to the server to see if it would go through, but got Connection Refused. At about the same time, the Docker image restarted.

The container is not running anything else that would use port 8080, and when I run netstat -lntp the process using port 8080 is that of my Docker image.
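A hedged reading of the event message: if the task definition pins a fixed hostPort of 8080, only one copy of the task can run per container instance, and the scheduler reports exactly this "already using a port" event when it tries to place a second copy. With the EC2 launch type behind an application load balancer, setting hostPort to 0 lets Docker pick an ephemeral host port per task, which the target group tracks automatically. A task-definition fragment (the container name is a placeholder):

```json
"containerDefinitions": [{
  "name": "microreciever",
  "portMappings": [
    { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
  ]
}]
```

The container still listens on 8080 internally; only the host-side port becomes dynamic.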

How to detect if a process was killed by cgroup due to exceeding a limit?

Posted: 25 Mar 2021 06:05 PM PDT

I have a global cgroup defined in /etc/cgconfig.conf that limits the amount of memory. Every time a user runs a command, I prepend cgexec to add the process and its children to the controlled group. Every now and then the limit kicks in and kills the user's process.

If the exit code is not 0, how do I know if the process just failed because of some internal logic, or if it has been killed by the cgroup mechanism?

It's running in user space, so I'd like to avoid parsing /var/log/syslog.
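A hedged sketch of one user-space approach: a process killed by the memory controller dies from SIGKILL, so the shell reports exit status 137 (128 + 9), and under cgroup v1 on reasonably recent kernels the group's memory.oom_control file exposes an oom_kill counter. The group name, path, and command below are placeholders:

```shell
grp=/sys/fs/cgroup/memory/mygroup
before=$(awk '/^oom_kill / {print $2}' "$grp/memory.oom_control")
cgexec -g memory:mygroup ./user_command
status=$?
after=$(awk '/^oom_kill / {print $2}' "$grp/memory.oom_control")
# Exit 137 alone could be any SIGKILL; combined with an incremented
# oom_kill counter it points at the cgroup limit.
if [ "$status" -eq 137 ] && [ "$after" -gt "$before" ]; then
    echo "killed by the cgroup memory limit"
fi
```

This stays entirely in user space and avoids parsing /var/log/syslog, as requested.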

rsync maintaining ownership and permissions between linux and windows ssh

Posted: 25 Mar 2021 09:03 PM PDT

I know this is an extensively documented subject, and despite the pages and pages of information I've found online, I still can't find a definitive answer to this question, so please bear with me.

I have a Linux server, and I'm trying to back it up onto a Windows machine through SSH. I'm using rsync, running it from the Windows box with Cygwin. The problem is that the ownership and permissions on the files pulled from the Linux server are being changed to those of the Windows user pulling the files with Cygwin. For example, a Linux test file is owned by user dog. User cat on the Windows machine initiates the rsync with Cygwin. When I look at the copied test file, it is no longer owned by user dog; it is now owned by user cat, and the group just says "none". I hope this is all clear enough. The permissions are not copying correctly either: they stay 644 on the Windows machine no matter what the original permissions were.

Obviously, if I need to restore the backup in the future, I'm going to want to copy everything back with the same ownership and permissions that were originally there, or a lot of things will break. I've seen people suggest scripts that chmod and chown the files as they are copied, but my server has several different users and a lot of different permissions, as I'm sure most of us have, so I'm not trying to micromanage the backup process that much. The command I'm typing is:

rsync -av -e "ssh -p 44" root@blahblah:/home/dog/backup /backup/

I'll explain this a little. Yes, I'm running it as root so that I can copy even root's files. It is through a private key with a forced command (or whatever it's called) that only allows specific commands, to mitigate the potential security holes this causes. (I'm aware you can run rsync via sudo, but I haven't had much luck with it, and I don't really understand how it's any better if you can restrict the only thing the root user is allowed to do.) The "noacl" option has already been added to Cygwin's fstab file per (https://cygwin.com/ml/cygwin/2011-02/msg00116.html). From what I understand, that should keep Windows from trying to make sense of Linux's user:group/permissions model and changing things amok. Unfortunately this has had no effect, and I am only 80% sure I've done it correctly. Also, the Windows user initiating the backup is the administrator. The only thing I can guess at is that I've read a lot about both sides having to be root for everything to copy correctly. From what I've seen, Cygwin can't be run as root or even sudo, and I'm guessing being the administrator and running Cygwin as administrator would be the equivalent. Maybe not.

I really don't understand what I'm doing wrong or why it's not working. If rsync just can't copy Linux files without changing things, then why is it a viable option for backing your server up onto your desktop? Am I missing something, or am I just asking too much from this program? Any help or enlightenment you can give me would be amazing. This is driving me crazy. (Sorry this is so long; I'm trying to be as thorough as possible with my question to make it easier for everyone involved.)
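One rsync feature that may fit this exact situation: --fake-super, which makes the receiving side record ownership, group, and full permission bits in extended attributes instead of trying to apply them as the local (Windows/Cygwin) user, so a later restore with the same flag can replay them. A hedged variant of the command from the question:

```shell
# --fake-super stores uid/gid/mode in xattrs on the receiver
# instead of applying them to the Windows filesystem
rsync -av --fake-super -e "ssh -p 44" root@blahblah:/home/dog/backup /backup/
```

On restore, the side holding the stored attributes needs --fake-super again so they are translated back into real ownership and permissions; this does depend on the receiving filesystem supporting extended attributes under Cygwin.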

Rsyslog doesn't create log files on CentOS7

Posted: 25 Mar 2021 07:00 PM PDT

I have the following configuration file in "/etc/rsyslog.d/10-my.conf"

# This file is managed by Puppet, changes may be overwritten
if $programname == 'hello' then -/var/log/test/test.log
& ~

On CentOS 6.5 (rsyslog 5.8.10), this creates an empty file at /var/log/test/test.log. The same configuration file on CentOS 7 (rsyslog 7.4.7) does not create the file.
Can anyone tell me why that is? Did this behavior change in 7.4? Or is it something on my CentOS 7 instance?
Running rsyslogd -f /etc/rsyslog.d/10-my.conf -N3
on both CentOS 6 and CentOS 7 returns a whole bunch of warnings, but nothing serious.
On CentOS 7, SELinux is set to Permissive mode.
CentOS 7 (/etc/rsyslog.conf):

# file is managed by puppet
#################
#### MODULES ####
#################

$ModLoad imuxsock # provides support for local system logging
$ModLoad imjournal # provides access to the systemd journal

###########################
#### GLOBAL DIRECTIVES ####
###########################
$MaxMessageSize 2k

#
# Set the default permissions for all log files.
#
$FileOwner root
$FileGroup root
$FileCreateMode 0600
$DirOwner root
$DirGroup root
$DirCreateMode 0750
$PrivDropToUser root
$PrivDropToGroup root
$WorkDirectory /var/lib/rsyslog
$Umask 0000

# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on

$IncludeConfig /etc/rsyslog.d/*.conf

#
# Emergencies are sent to everybody logged in.
#
*.emerg :omusrmsg:*

How to log the IP that connects from outside of company to terminal server?

Posted: 25 Mar 2021 05:51 PM PDT

We have users who RDP to the company's terminal servers. Is there a way to track the external IP addresses users connect from, outside the company network?

I know there are logs available under Terminal Services in the event log, but I don't see any public IP addresses there for remote connections from outside the company.

Any idea?
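One place the source address does show up: the TerminalServices-RemoteConnectionManager operational log, where Event ID 1149 records a "Source Network Address" field for each successful RDP connection. A hedged query from an elevated command prompt (the count of 20 is arbitrary):

```shell
wevtutil qe Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational ^
  /q:"*[System[EventID=1149]]" /c:20 /f:text /rd:true
```

/rd:true lists newest events first; the same log is browsable in Event Viewer under Applications and Services Logs.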

fail2ban wont ban ssh from local hosts

Posted: 25 Mar 2021 06:05 PM PDT

I'm trying to configure fail2ban to block SSH from local hosts. Fail2ban is installed on CentOS 7 with firewalld (Linux 3.10.0-229.4.2.el7.x86_64). I have copied jail.conf to jail.local, and I have changed the following parameters in jail.local:

banaction = firewallcmd-new

[sshd]
enabled = true
maxretry = 5
port = ssh
logpath = /var/log/secure
action = firewallcmd-ipset

And I get no results. Any ideas?

Some log info:

Jun 23 07:21:33 localhost.localdomain fail2ban-client[2486]: 2015-06-23 07:21:33,351 fail2ban.server [2487]: INFO Starting Fail2ban v0.9.1
Jun 23 07:21:33 localhost.localdomain fail2ban-client[2486]: 2015-06-23 07:21:33,351 fail2ban.server [2487]: INFO Starting in daemon mode
Jun 23 07:21:33 localhost.localdomain systemd[1]: Started Fail2Ban Service.

2015-06-23 07:14:27,571 fail2ban.server   [1926]: INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.9.1
2015-06-23 07:14:27,710 fail2ban.database [1926]: INFO Connected to fail2ban persistent database '/var/lib/fail2ban/fail2ban.sqlite3'
2015-06-23 07:14:27,788 fail2ban.jail     [1926]: INFO Creating new jail 'sshd'
2015-06-23 07:14:27,923 fail2ban.jail     [1926]: INFO Jail 'sshd' uses poller
2015-06-23 07:14:27,985 fail2ban.filter   [1926]: INFO Set jail log file encoding to UTF-8
2015-06-23 07:14:27,985 fail2ban.jail     [1926]: INFO Initiated 'polling' backend
2015-06-23 07:14:28,063 fail2ban.filter   [1926]: INFO Added logfile = /var/log/secure
2015-06-23 07:14:28,064 fail2ban.filter   [1926]: INFO Set maxRetry = 2
2015-06-23 07:14:28,066 fail2ban.filter   [1926]: INFO Set jail log file encoding to UTF-8
2015-06-23 07:14:28,066 fail2ban.actions  [1926]: INFO Set banTime = 86400
2015-06-23 07:14:28,067 fail2ban.filter   [1926]: INFO Set findtime = 600
2015-06-23 07:14:28,068 fail2ban.filter   [1926]: INFO Set maxlines = 10
2015-06-23 07:14:28,158 fail2ban.server   [1926]: INFO Jail sshd is not a JournalFilter instance
2015-06-23 07:14:28,459 fail2ban.jail     [1926]: INFO Jail 'sshd' started
2015-06-23 07:21:32,667 fail2ban.server   [1926]: INFO Stopping all jails
2015-06-23 07:21:33,181 fail2ban.jail     [1926]: INFO Jail 'sshd' stopped
2015-06-23 07:21:33,188 fail2ban.server   [1926]: INFO Exiting Fail2ban
2015-06-23 07:21:33,404 fail2ban.server   [2489]: INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.9.1
2015-06-23 07:21:33,406 fail2ban.database [2489]: INFO Connected to fail2ban persistent database '/var/lib/fail2ban/fail2ban.sqlite3'
2015-06-23 07:21:33,409 fail2ban.jail     [2489]: INFO Creating new jail 'sshd'
2015-06-23 07:21:33,413 fail2ban.jail     [2489]: INFO Jail 'sshd' uses poller
2015-06-23 07:21:33,433 fail2ban.filter   [2489]: INFO Set jail log file encoding to UTF-8
2015-06-23 07:21:33,433 fail2ban.jail     [2489]: INFO Initiated 'polling' backend
2015-06-23 07:21:33,438 fail2ban.filter   [2489]: INFO Added logfile = /var/log/secure
2015-06-23 07:21:33,439 fail2ban.filter   [2489]: INFO Set maxRetry = 3
2015-06-23 07:21:33,440 fail2ban.filter   [2489]: INFO Set jail log file encoding to UTF-8
2015-06-23 07:21:33,441 fail2ban.actions  [2489]: INFO Set banTime = 86400
2015-06-23 07:21:33,442 fail2ban.filter   [2489]: INFO Set findtime = 600
2015-06-23 07:21:33,442 fail2ban.filter   [2489]: INFO Set maxlines = 10
2015-06-23 07:21:33,501 fail2ban.server   [2489]: INFO Jail sshd is not a JournalFilter instance
2015-06-23 07:21:33,599 fail2ban.jail     [2489]: INFO Jail 'sshd' started

And SELinux is disabled.

Multicast works only in promiscuous mode

Posted: 25 Mar 2021 08:07 PM PDT

I'm trying to receive MPEG-TS over UDP multicast transport in Arch Linux.

When I run ffprobe -i udp://@224.1.1.240:6000, it hangs forever with no result. If I then run tcpdump, it shows no multicast traffic from that address.

But if tcpdump -i eth0 -n net 224.0.0.0/4 is running in the background while ffprobe runs, it works! tcpdump shows packets and ffprobe correctly detects the stream.

As one may notice, the problem apparently disappears while the NIC is in promiscuous mode.

Can someone help with it? What's wrong with my config?

  • Everything in iptables is ACCEPTed.
  • cat /proc/sys/net/ipv4/conf/*/rp_filter prints 0 for every interface.
  • Output of ip r:

    default dev ppp0 scope link
    83.221.214.192 dev ppp0 proto kernel scope link src 10.7.248.143
    192.168.168.192/28 dev enp3s0 proto kernel scope link src 192.168.168.193
    224.0.0.0/4 dev enp3s0 scope link

The network is connected to the ISP through a D-Link DGS-1005A switch.

PS: Everything works perfectly in Windows 7 on the same PC.
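This symptom is consistent with the receiver never issuing an IGMP join: in promiscuous mode the NIC accepts the group's frames regardless, which is why running tcpdump "fixes" it. As a sketch (group and port taken from the question; the helper names are mine, not from any tool mentioned above), an explicit join can be tested independently of ffprobe with a few lines of Python:

```python
import socket
import struct

def membership_request(group, iface_ip="0.0.0.0"):
    """Pack an ip_mreq struct: multicast group address + local interface address."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface_ip))

def open_multicast_socket(group, port, iface_ip="0.0.0.0"):
    """Bind a UDP socket and join the group, so the kernel issues an IGMP join."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Without IP_ADD_MEMBERSHIP the kernel filters out the group's frames
    # unless the NIC is in promiscuous mode -- exactly what tcpdump enables.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(group, iface_ip))
    return sock

# Usage (blocks until a datagram from the stream arrives):
# sock = open_multicast_socket("224.1.1.240", 6000)
# data, addr = sock.recvfrom(4096)
```

If packets arrive with this but not with ffprobe, the join itself is the missing piece; if neither works, checking ip maddr show for the group membership would be the next step.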

Is it safe to symlink complete /var directory

Posted: 25 Mar 2021 06:50 PM PDT

Is it safe to move the complete /var directory to a separate partition and create a symlink to it?

mv /var /mnt/storage
ln -s /mnt/storage /var

Distribution is Google Cloud CentOS Image

I do not want to use a bind mount for the following reasons:

https://unix.stackexchange.com/questions/49623/are-there-any-drawbacks-from-using-mount-bind-as-a-substitute-for-symbolic-lin
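For what it's worth, the copy-then-symlink mechanics can be rehearsed on scratch directories before touching the real /var. A sketch (all paths below are stand-ins; the real operation must run from rescue or single-user mode so no daemon holds files open under /var):

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
var = os.path.join(base, "var")          # stand-in for /var
storage = os.path.join(base, "storage")  # stand-in for /mnt/storage

# Populate the stand-in /var with a file to carry across.
os.makedirs(os.path.join(var, "log"))
with open(os.path.join(var, "log", "test.log"), "w") as f:
    f.write("hello\n")

# Copy first, keep the original as a fallback, then drop in the symlink.
# copytree preserves permissions and timestamps; unlike cp -a run as root,
# it does NOT preserve ownership, which matters for the real /var.
shutil.copytree(var, storage, symlinks=True)
os.rename(var, var + ".old")   # keep until the new location is verified
os.symlink(storage, var)       # equivalent of: ln -s /mnt/storage /var
```

After verifying that services come up cleanly, the .old copy can be deleted.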

fastcgi cache: how to cache for logged-in users and make it custom for each user

Posted: 25 Mar 2021 08:07 PM PDT

Currently I'm caching with fastcgi_cache for non-logged-in users, and using (if + fastcgi_no_cache + fastcgi_cache_bypass) to pass logged-in users directly to the backend, which is PHP-FPM.

This works well enough, but when PHP-FPM starts hitting 500+ req/s, the slowdowns and load spikes begin.

So what I'm thinking about is creating a cache for logged-in users where each user has their own cached files. Is that possible? If yes, can you please give me some tips on it? I've googled a lot but found nothing helpful.

The site runs a custom PHP CMS with MySQL, memcached, and APC.

cat /etc/nginx/nginx.conf

user  username username;

worker_processes     8;
worker_rlimit_nofile 20480;

pid /var/run/nginx.pid;

events {
    worker_connections 10240;
    use epoll;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
            '"$request" $status  $body_bytes_sent "$http_referer" '
            '"$http_user_agent" "$http_x_forwarded_for"';
    access_log      off;
    error_log   /var/log/nginx/error.log    warn;
    log_not_found       off;
    log_subrequest      off;

    server_tokens       off;
    sendfile        on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   3;
    keepalive_requests  50;
    send_timeout        120;

    connection_pool_size    256;
    chunked_transfer_encoding on;
    ignore_invalid_headers   on;
    client_header_timeout   60;
    large_client_header_buffers 4 128k;
    client_body_in_file_only off;
    client_body_buffer_size 512K;
    client_max_body_size    4M;
    client_body_timeout 60;
    request_pool_size   32k;
    reset_timedout_connection on;
    server_name_in_redirect off;
    server_names_hash_max_size 4096;
    server_names_hash_bucket_size 256;
    underscores_in_headers  off;
    variables_hash_max_size 4096;
    variables_hash_bucket_size 256;

    gzip            on;
    gzip_buffers        4 32k;
    gzip_comp_level     1;
    gzip_disable            "MSIE [1-6]\.";
    gzip_min_length     0;
    gzip_proxied        any;
    gzip_types      text/plain text/css application/x-javascript text/javascript text/xml application/xml application/xml+rss application/atom+xml;

    open_file_cache     max=3000 inactive=20s;
    open_file_cache_min_uses 1;
    open_file_cache_valid   20s;
    open_file_cache_errors  off;

    fastcgi_buffer_size     8k;
    fastcgi_buffers         512 8k;
    fastcgi_busy_buffers_size   16k;
    fastcgi_cache_methods   GET HEAD;
    fastcgi_cache_min_uses  1;
    fastcgi_cache_path /dev/shm/nginx levels=1:2 keys_zone=website:2000m inactive=1d max_size=2000m;
    fastcgi_connect_timeout 60;
    fastcgi_intercept_errors on;
    fastcgi_pass_request_body on;
    fastcgi_pass_request_headers on;
    fastcgi_read_timeout    120;
    fastcgi_send_timeout    120;
    proxy_temp_file_write_size 16k;

    fastcgi_max_temp_file_size  1024m;

    include /etc/nginx/vhosts/*.conf;
}

vhost settings:

server {

    listen 80;
    server_name domain.com;

    access_log  off;
    error_log  /var/log/nginx/error.log warn;
    root /home/username/public_html;

    location ~ \.php$ {

        # bypass cache if logged in
        set $nocache "";
        if ($http_cookie ~ (MyCookieUser*|MyCookiePass*)) {
            set $nocache "Y";
        }
        fastcgi_no_cache $nocache;
        fastcgi_cache_bypass $nocache;
        fastcgi_cache       website;
        fastcgi_cache_key         $host$uri$is_args$args;
        fastcgi_cache_valid       200 301 302 304 40s;
        fastcgi_cache_valid       any 5s;
        fastcgi_cache_use_stale error timeout invalid_header updating http_500 http_503 http_404;
        fastcgi_ignore_headers  Set-Cookie;
        fastcgi_hide_header     Set-Cookie;
        fastcgi_ignore_headers  Cache-Control;
        fastcgi_hide_header     Cache-Control;
        fastcgi_ignore_headers  Expires;
        fastcgi_hide_header     Expires;
        fastcgi_no_cache $nocache;
        fastcgi_cache_bypass $nocache;
        fastcgi_index  index.php;
        fastcgi_pass 127.0.0.1:8081;
        fastcgi_param  SCRIPT_FILENAME  /home/username/public_html$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|ppt|txt|mid|swf|midi|wav|bmp|js)$ {
        root            /home/username/public_html;
        expires             max;
        add_header          Cache-Control   cache;
    }

}
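On the per-user idea itself: one common approach (a sketch, not taken from the question) is to stop bypassing logged-in users and instead fold a user-identifying cookie into the cache key, so each user gets separate cache entries. Using the cookie name from the vhost above and nginx's built-in $cookie_NAME variables:

```nginx
# Sketch only: per-user cache entries keyed on the session cookie.
# A short validity keeps personalized pages from going stale.
fastcgi_cache       website;
fastcgi_cache_key   "$host$uri$is_args$args$cookie_MyCookieUser";
fastcgi_cache_valid 200 10s;
```

The trade-off is a much larger cache (one copy of each page per logged-in user), so the max_size and keys_zone in fastcgi_cache_path would likely need raising.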

php-fpm config

emergency_restart_threshold = 10
emergency_restart_interval = 60s
process_control_timeout = 10s
rlimit_files = 102400
events.mechanism = epoll

[www]
user = username
group = username
listen = 127.0.0.1:8081
listen.backlog = 10000
pm = dynamic
pm.max_children = 2048
pm.start_servers = 64
pm.min_spare_servers = 20
pm.max_spare_servers = 128
pm.process_idle_timeout = 10s
pm.max_requests = 50000
request_slowlog_timeout = 40s
request_terminate_timeout = 60s

Server: 32 GB DDR3 RAM, dual E5620 processors, CentOS 6 64-bit.

Juniper Network Connect and Ubuntu

Posted: 25 Mar 2021 09:03 PM PDT

I'm trying to install the Juniper Network Connect client on Ubuntu -- I'm wondering if any seasoned network admins or otherwise knowledgeable individuals know if it's possible for me to download the client directly from a 3rd party source, or if it is mandatory that I download it from the given network's vpn website. If the latter, can you explain why?
