Sunday, December 5, 2021

Recent Questions - Server Fault



How to switch from docker swarm to kubernetes?

Posted: 05 Dec 2021 11:35 AM PST

We have a Docker Swarm setup on four AWS EC2 Ubuntu machines. Two of them act as managers.

Now, instead of managing it ourselves, we want to move to a managed Kubernetes service like AKS/EKS.

I have not been able to find a managed Swarm service so far, so I thought of migrating our workloads to Kubernetes.

As part of that, we run docker service commands for different databases: whenever a test database is required (postgres, oracle, mysql, etc.) we run a command in Docker Swarm to create a service, and we get a random published port which we connect to using the manager's public IP.

Example:

for postgres:

docker service create --name postgres_31 -p target=5432 -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=admin -e POSTGRES_DB=testdb -d postgres:9.6.18  

The above publishes the container's port 5432 on a randomly assigned port.

As shown below, it generated port 30048.

So the user connects with the public IP of the manager node and port 30048, which redirects to port 5432 of the database.

docker service ls
p45fazswicxc        postgres_31         replicated          1/1                 postgres:13                                  *:30048->5432/tcp

In this way, we create multiple Postgres (or other) databases on Swarm for the dev team's testing purposes.

How to replicate this environment on kubernetes?

In Kubernetes, it seems a separate public IP is generated for each Service of type LoadBalancer for each deployment, but we need a single one. I thought of keeping an Azure load balancer externally and redirecting everything through it, but as far as I can tell that configuration is HTTP-only and can only handle a single port, not multiple random ports; correct me if I am wrong.
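A Service of type NodePort is the closest Kubernetes equivalent to Swarm's randomly published ports: Kubernetes picks a port in the 30000-32767 range and exposes it on every node's IP, so a single public address (plus firewall/security-group rules opening that range) is enough. A minimal sketch, with placeholder names, of one throwaway Postgres instance:

# Minimal sketch (placeholder names): a Deployment plus a NodePort Service.
# Kubernetes assigns a random port in the 30000-32767 range, reachable on
# every node's IP, similar to Swarm's published port on the manager IP.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-31
spec:
  replicas: 1
  selector:
    matchLabels: { app: postgres-31 }
  template:
    metadata:
      labels: { app: postgres-31 }
    spec:
      containers:
      - name: postgres
        image: postgres:9.6.18
        env:
        - { name: POSTGRES_PASSWORD, value: mysecretpassword }
        - { name: POSTGRES_USER, value: admin }
        - { name: POSTGRES_DB, value: testdb }
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-31
spec:
  type: NodePort
  selector:
    app: postgres-31
  ports:
  - port: 5432
    targetPort: 5432   # nodePort is omitted so Kubernetes picks a random one
EOF

# Show the randomly assigned nodePort (appears as 5432:3xxxx/TCP)
kubectl get svc postgres-31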

Local DNS not resolving local hostname but DIG does

Posted: 05 Dec 2021 11:19 AM PST

I cannot figure out why this behavior occurs. I have two Pi-hole instances running in Docker containers on 10.0.2.205 and 10.0.2.206 (synced). Running resolvectl status on the local PC/server shows the correct DNS resolvers being used, including the tertiary IP 1.1.1.1, all handed out by my DHCP server. Both local DNS servers (Pi-hole) resolve as expected.

My problem: I run into this issue every time I spin up a new server, and I would like to be done with it once and for all. I don't think this should require overwriting the /etc/resolv.conf file every time I spin up a server.

The server can ping anything except hostnames that are set as local DNS entries in Pi-hole. However, I can run dig @10.0.2.205 <local_hostname> and get a result/IP. So the DNS resolver returns the IP address for the local DNS entry for the local hostname, as expected; I just can't ping that hostname. If, however, I overwrite /etc/resolv.conf (as mentioned before) with the IP address of one or both of the Pi-hole resolvers, then everything works as expected. So I don't believe this to be a DNS resolver issue but a local issue with systemd-resolved.service or similar. I've seen numerous discussions on this topic and have come to the conclusion that there's still much confusion and no clear, straightforward answers/fixes, none that I've found anyway. Any experience and info to resolve this would be most appreciated. Please advise. Thank you very much.

Server info:

noc@TestingServer:~$ hostnamectl
   Static hostname: TestingServer
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 5ab2c4f3f6d2413a9684ada5dc6e87af
           Boot ID: 1f73d83492724511821bdf91a6a8cdf1
    Virtualization: kvm
  Operating System: Ubuntu 20.04.3 LTS
            Kernel: Linux 5.4.0-91-generic
      Architecture: x86-64

/etc/hosts (default from initial installation)

noc@TestingServer:~$ cat /etc/hosts
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
#     /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.1.1 TestingServer TestingServer
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Command line

noc@TestingServer:~$ ping docker1
ping: docker1: Temporary failure in name resolution
noc@TestingServer:~$
noc@TestingServer:~$ ping docker2
ping: docker2: Temporary failure in name resolution
noc@TestingServer:~$
noc@TestingServer:~$ ping docker3
ping: docker3: Temporary failure in name resolution
noc@TestingServer:~$
noc@TestingServer:~$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad
noc@TestingServer:~$
noc@TestingServer:~$
noc@TestingServer:~$ resolvectl status | grep -A2 -E 'DNS Server'
  Current DNS Server: 10.0.2.205
         DNS Servers: 10.0.2.205
                      10.0.2.206
                      1.1.1.1
noc@TestingServer:~$
noc@TestingServer:~$
noc@TestingServer:~$ dig @10.0.2.205 docker1

; <<>> DiG 9.16.1-Ubuntu <<>> @10.0.2.205 docker1
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37841
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;docker1.           IN  A

;; ANSWER SECTION:
docker1.        0   IN  A   10.0.2.10

;; Query time: 0 msec
;; SERVER: 10.0.2.205#53(10.0.2.205)
;; WHEN: Sun Dec 05 09:29:04 MST 2021
;; MSG SIZE  rcvd: 52

noc@TestingServer:~$ ping google.com
PING google.com (172.217.11.174) 56(84) bytes of data.
64 bytes from lax28s15-in-f14.1e100.net (172.217.11.174): icmp_seq=1 ttl=116 time=46.1 ms
^C
--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 46.132/46.132/46.132/0.000 ms
noc@TestingServer:~$
noc@TestingServer:~$
noc@TestingServer:~$ sudo su
root@TestingServer:/home/noc# echo -e "nameserver 10.0.2.205\n" > /etc/resolv.conf
root@TestingServer:/home/noc# exit
exit
noc@TestingServer:~$
noc@TestingServer:~$ ping docker1
PING docker1 (10.0.2.10) 56(84) bytes of data.
64 bytes from docker1.local (10.0.2.10): icmp_seq=1 ttl=64 time=0.617 ms
^C
--- docker1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms
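One pattern that produces exactly this split (dig against the Pi-hole works, normal resolution fails, and writing the Pi-hole IP into /etc/resolv.conf fixes it) is that docker1 is a single-label name: systemd-resolved prefers LLMNR for those and may never send them to the configured DNS servers, while dig bypasses systemd-resolved entirely. A hedged sketch of one way to address it, assuming a hypothetical internal search domain lan that the Pi-holes can answer for (handing the search domain out via DHCP achieves the same without touching each server):

# Sketch only: "lan" is a placeholder search domain; adjust servers/domain.
sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/local-dns.conf <<'EOF'
[Resolve]
DNS=10.0.2.205 10.0.2.206
Domains=lan
EOF
sudo systemctl restart systemd-resolved

# Check what systemd-resolved actually does for the name now
resolvectl query docker1
resolvectl status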

Truenas Scale and VM Networking

Posted: 05 Dec 2021 10:32 AM PST

I've installed TrueNAS SCALE on a dedicated server from a hosting provider. The server has a direct internet connection on "eno1" with the public IP "141.x.x.x", received by DHCP.

I want to host virtual machines which can access the internet and run some services (HTTP, ...) that need to be reachable from the internet.

I've tried to :

  • create a bridge "br0", adding the IP "141.x.x.x" that was on "eno1"
  • remove DHCP on "eno1"

But when I test the change it doesn't work, so TrueNAS SCALE rolls back the configuration.

What is my mistake, or what configuration do I have to set up to host my VMs?

Thanks
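For reference, the target layout at the plain-Linux level is: eno1 carries no address, br0 owns the public IP and default route, and the VMs attach to br0. A rough sketch with ip commands is below; TrueNAS SCALE expects this to be configured through its own network UI/API (which is why it rolls back a change it cannot verify), so treat this only as an illustration of the end state, and beware that doing it over your only remote connection can lock you out:

# Illustration only: after this, eno1 has no IP, br0 carries the public IP
# and the default route; VM virtual NICs are then attached to br0.
ip link add name br0 type bridge
ip link set eno1 master br0
ip addr flush dev eno1              # remove the public IP from the physical NIC
ip addr add 141.x.x.x/24 dev br0    # placeholder prefix length, use your real one
ip link set br0 up
ip route add default via <gateway> dev br0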

Nginx downloads PHP files instead of executing them

Posted: 05 Dec 2021 10:30 AM PST

I am using Nginx for my web server, but when I go to a PHP page it downloads the file instead of executing it.
I realized that my Pterodactyl panel (which is in PHP) was still working, so I copied the fastcgi and other parts of its config, but this didn't change my problem.
I have the latest Nginx version and PHP 8.0 installed on a Debian 11 VPS.
The files have permission 775 and are owned by the group www-data. The logs aren't giving me any reason for this problem.

server {
    listen 80;
    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html/site;
    index index.html index.php index.htm index.nginx-debian.html;
    server_name mondomaine.eu www.mondomaine.eu;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php8.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param PHP_VALUE "upload_max_filesize = 100M \n post_max_size=100M";
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTP_PROXY "";
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
    }

    location ~ /\.ht {
        deny all;
    }
}

Can you help please? Thanks
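A few quick checks that usually narrow this down (a sketch; the socket path and domain are taken from the config above): confirm that this server block is the one actually answering the request (another default_server or an un-enabled site is a common culprit) and that PHP-FPM is listening on the socket.

# Is this config actually loaded, and which server block answers the request?
sudo nginx -t
sudo nginx -T | grep -nE 'server_name|fastcgi_pass'

# Is PHP-FPM up, and does the socket exist?
systemctl status php8.0-fpm
ls -l /run/php/php8.0-fpm.sock

# Does a .php request reach PHP? Content-Type should be text/html,
# not application/octet-stream, when FastCGI handles it.
curl -sI http://mondomaine.eu/index.php | head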

Why does Squid authentication fail?

Posted: 05 Dec 2021 10:12 AM PST

I'm trying to add authentication to Squid installed on Windows; here is the config:

auth_param basic program C:\Squid\lib\squid\basic_ncsa_auth.exe C:\Squid\etc\squid\.htpasswd
acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users

http_port 2001

http_access allow localnet
http_access deny all

In .htpasswd

admin:$apr1$kWA/DRFy$klaeXRe3S3jIPqc64HTMA0    

This corresponds to username admin and password 1234

But once I try to use the proxy, I get TCP_DENIED/407 in the logs and authentication fails.

Update:

I found the following error in the log:

C:/Squid/lib/squid/basic_ncsa_auth.exe: error while loading shared libraries: cygcrypt-2.dll: cannot open shared object file: No such file or directory

Allow an IP to use my own SSH alias?

Posted: 05 Dec 2021 10:46 AM PST

How do I allow an IP to use my own SSH alias, defined in my client (desktop) SSH config, to connect to my server?

I created a wheel user and disabled root login, and I have also set up a custom port for my SSH.

Now I want to enable a remote IP (support personnel) to use my own alias to connect to my server.

The reason is: to hide the wheel username and the port number from the support personnel.
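For context, an SSH alias of this kind lives entirely in the connecting client's own ~/.ssh/config; the server never sees it, so it cannot be "allowed" per source IP on the server side. What the support person would need is an equivalent entry on their machine, and that entry necessarily contains the user and port. A minimal sketch with hypothetical values:

# ~/.ssh/config on the connecting client (hypothetical values)
Host myserver
    HostName 203.0.113.10       # or the server's DNS name
    Port 2222                   # the custom SSH port
    User wheeluser              # the wheel account
    IdentityFile ~/.ssh/id_ed25519
# The support person then connects with:  ssh myserver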

Proper setup to backup VM servers

Posted: 05 Dec 2021 09:45 AM PST

I'm maintaining a number of small servers used internally in our company. That's like 1% of my job description, but nonetheless I want to do it properly. We use the free VMware license to run a couple of services on a Dell server with 32GB of available storage (in RAID10). Each service has a separate VM. All are Debian and 1 is Windows. The services are: 1 openvpn, 1 nginx, 1 cloud storage, 1 software licensing (Windows), 1 GitLab, etc.

We have a 32TB NAS which is unused, and I want to make a local (NAS) backup daily and an external backup weekly. My goal with the backup is that, in case of failure of the server, I can restore each machine by just copying the data back. A dd image of the drives would be best here, but that's not differential and it wouldn't go well with the underlying VM storage, which is dynamically allocated.

We're a small company so there's no way of paying for fancy VM management software.

My first thought was to use a tiny dedicated VM running the backup services. It would have the NAS mounted over NFS. The backup would go through rsync/rsnapshot (just Linux for now). Which would be the better (in terms of security) way of doing this?

A. Have cron on the backup server and ssh/rsync/rsnapshot to all the other servers. (Easier to maintain, but a breach of this server would give root access to all the other servers.)

B. Have each server ssh/rsync/rsnapshot to the backup server. (A bit messy, but a breach of any single machine would limit the damage to that machine; a rough sketch of this option follows after option C.)

C. Other?
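For what it's worth, a minimal sketch of option B, the push model, with hypothetical hostnames (backupserver being the VM that has the NAS mounted); the authorized_keys restriction is the usual way to limit what a key stolen from a breached source machine can do:

# On each source server: a dedicated key and a nightly push.
ssh-keygen -t ed25519 -f /root/.ssh/backup_key -N ''

# crontab entry on the source server (all one line):
#   30 2 * * * rsync -aHAX --delete -e 'ssh -i /root/.ssh/backup_key' /etc /srv backup@backupserver:/backups/myhost/

# On backupserver, lock the key down in ~backup/.ssh/authorized_keys so it can
# only write into that host's directory (rrsync is a restricted-rsync helper
# distributed with rsync; its install path varies by distro):
#   command="/usr/bin/rrsync /backups/myhost",no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAA... root@myhost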

And a few followup questions:

  1. What about the Windows machine? Any chance to make an rsync copy that would be complete and bootable?
  2. What do you think about encrypting the backups? I know this goes against my goal of "easy to restore", but let's say it's just for the remote backups. The secret key would be on the backup server (with a dozen safe copies, including on paper xD) and the files would be encrypted when sending. Is this viable? Can it work with rsync?

netplan apply/try/generate ends with ERROR

Posted: 05 Dec 2021 09:27 AM PST

We have cloud infrastructure based on VMware with Windows and Linux VMs. After the last reboot, 4 of the Ubuntu servers (3 Ubuntu 20.04 and one Ubuntu 16.04) did not bring up their network interface. With lshw -class network I see the correct network interface listed. There is no DHCP in the network; all servers use static IPs. After reboot, in networkctl the OPERATIONAL column for the specific interface is off. The only way to get the network working is the following ip command sequence, but after a reboot everything is gone:

$ ip link set <link_name> up
$ ip addr add <server-ip>/24 dev <link_name>
$ ip route add default via <gateway> dev <link_name>

It looks like the problem is with netplan. I have a netplan config that is deployed together with the server when it is created from a template, and it works great on all the other Ubuntu servers in this infrastructure except those 4. It also worked on those servers until this week's reboot (we update and reboot once a month, usually). The config looks like this:

network:
  version: 2
  renderer: networkd
  ethernets:
    <link_name>:
      dhcp4: no
      dhcp6: no
      addresses:
        - <server_ip>/24
      gateway4: <gateway>
      nameservers:
        search:
          - <domain>
        addresses:
          - <dns_1>
          - <dns_2>

But when I try netplan apply, netplan generate or netplan try, it returns a strange ERROR that I can't find anything about on the internet. (I substituted my gateway IP with <correct_gateway> and the other IP in this output with <some_random_ip> for security purposes.)

ERROR:src/parse.c:1120:handle_gateway4: assertion failed (scalar(node) == cur_netdef->gateway4): ("<correct_gateway>" == "<some_random_ip>")
Bail out! ERROR:src/parse.c:1120:handle_gateway4: assertion failed (scalar(node) == cur_netdef->gateway4): ("<correct_gateway>" == "<some_random_ip>")

If I add an indentation mistake to the *.yaml config file, it returns a normal error message that points to that mistake.

I tried reinstalling netplan.io without any luck and don't have an idea what to try next.
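The assertion is comparing two different gateway4 values for the same interface, which is what you would expect if more than one netplan YAML defines that interface with different gateways; netplan merges everything under /etc/netplan, /run/netplan and /lib/netplan. That is only an assumption, but it is cheap to check:

# List every netplan file that will be merged, and any gateway4 lines in them
ls -l /etc/netplan/ /run/netplan/ /lib/netplan/ 2>/dev/null
grep -rn 'gateway4' /etc/netplan /run/netplan /lib/netplan 2>/dev/null

# If a stale or generated file (e.g. a cloud-init 50-cloud-init.yaml) defines
# the same interface with another gateway, remove or correct one of them, then:
sudo netplan generate && sudo netplan apply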

Iterations of JavaScript Eclipse IDE

Posted: 05 Dec 2021 09:56 AM PST

Hello! I want to make it so that when the program has done 5 iterations, a year has passed. How can I do it? Thanks.

Proposed statement: each time we do five iterations of the program, a year passes.

• Each year the church will receive the amount of 2500 * institutions, which will be added to its fortune. This money will come from the village, so we will subtract 2 units from the village's morale.

• The population will increase by 1,000 inhabitants.

• The army will increase by 500 inhabitants.

I just want to know how to make it so that after 5 iterations, 1 year has passed.

PS: I know very little about programming; I'm new to this and this help would come in handy to move forward on this issue.

Thanks again.
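The usual trick is to keep an iteration counter and advance the year whenever the counter is a multiple of 5 (counter % 5 == 0). A tiny sketch of that arithmetic follows, written here as a shell loop for illustration; the same modulo test works the same way in JavaScript:

year=0
for (( iteration = 1; iteration <= 15; iteration++ )); do
    # ... run one iteration of the simulation here ...
    if (( iteration % 5 == 0 )); then
        (( year++ ))                 # every 5th iteration ends a year
        echo "iteration $iteration -> year $year"
        # yearly effects would go here: church fortune += 2500 * institutions,
        # village morale -= 2, population += 1000, army += 500
    fi
done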

Reach https service locally and from internet

Posted: 05 Dec 2021 08:42 AM PST

I've got a server on my LAN that connects to a rented server on the internet via VPN, which publishes the service via HTTPS on a specific subdomain.

I want machines connected to the LAN to use the LAN to reach the service, using the same subdomain as those elsewhere that go through the internet server. I can add an entry on the LAN's DNS to "shortcut" the public subdomain to the local IP, but how do I get HTTPS working?
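The usual approach for this split-horizon setup is to give the LAN-side server a certificate for the very same subdomain, so it can terminate HTTPS itself when the internal DNS points clients at it. One hedged sketch is a DNS-01 issuance, which works even though the internal host is not reachable from the internet (sub.example.net stands in for the real subdomain):

# Issue a certificate for the same name on the internal server via DNS-01.
# --manual prompts for a TXT record to create; certbot DNS plugins can
# automate this if your DNS provider is supported.
sudo certbot certonly --manual --preferred-challenges dns -d sub.example.net

# Then point the internal web server / reverse proxy at
# /etc/letsencrypt/live/sub.example.net/fullchain.pem and privkey.pem,
# and have the LAN DNS answer sub.example.net with the LAN IP.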

Command to get SFTP client to execute sudo at every file save

Posted: 05 Dec 2021 08:31 AM PST

Will this command get my Ubuntu SFTP client to execute sudo at every file save for the specified domain?

sftp -s "sudo /usr/lib/openssh/sftp-server" targethost.fqdn  

That's what I would like, but I am not sure that this is the right command.

This would be saving as a wheel user, which needs to be able to modify and save files owned by root.

So should it be

sftp -s "sudo /usr/lib/openssh/sftp-server" myhostname.fqdn  

And shouldn't I use my IP address instead of my hostname?
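Worth noting (an assumption about how this is meant to run): with -s pointing at a remote command like "sudo /usr/lib/openssh/sftp-server", the sudo call has no terminal to ask for a password, so the wheel user typically needs a passwordless sudo rule for exactly that binary. A minimal sketch of such a rule (wheeluser is a placeholder; edit with visudo):

# Create with: sudo visudo -f /etc/sudoers.d/sftp-server
# Allow the wheel account to start the SFTP server as root without a password.
wheeluser ALL=(root) NOPASSWD: /usr/lib/openssh/sftp-server
# Only needed if requiretty is enabled in your sudoers:
Defaults!/usr/lib/openssh/sftp-server !requiretty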

ZFS send/recv full snapshot

Posted: 05 Dec 2021 08:19 AM PST

I have been backing up my ZFS pool on Server A to Server B (backup server) via zfs send/recv, using daily incremental snapshots.

Due to hardware issues, the ZFS pool in Server A is now gone - and I want to restore/recover it asap.

Currently the snapshot list on my Server B is as follows:

zfs49/tank@2021Nov301705   368G      -     3.52T  -
zfs49/tank@2021Dec011705  65.2G      -     3.52T  -
zfs49/tank@2021Dec021705  66.4G      -     3.52T  -
zfs49/tank@2021Dec031705     0B      -     3.52T  -

where zfs49/tank@2021Dec031705 is the latest.

I would like to send back the whole pool (including the snapshots) back to Server A, but I'm unsure of the exact command to run.

Question: On Server B, will running zfs send zfs49/tank@2021Dec031705 | ssh <Server A host ip> zfs recv tank be sufficient to receive the full ZFS pool plus all the snapshots (so I can continue incremental send/recv backups) on Server A?
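For comparison: a plain zfs send of a single snapshot transfers only that snapshot's data, not the snapshot history, whereas a replication stream (-R) carries the snapshots along. A hedged sketch; double-check the flags against your ZFS version before relying on it:

# -R builds a replication stream that includes the dataset's snapshots;
# recv -u avoids auto-mounting on arrival, -F allows the target to be
# rolled back/overwritten. Consider piping through pv to watch progress.
zfs send -R zfs49/tank@2021Dec031705 | ssh <Server A host ip> zfs recv -uF tank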

Google Cloud SQL - Database instance storage size increased dramatically everyday

Posted: 05 Dec 2021 08:38 AM PST

I have a database instance (MySQL 8) on Google Cloud, and for the last 20 days the instance's storage usage has just kept increasing (approx 2GB every single day!), but I can't find out why.

What I have done:

  1. Took a look at the "Point-in-time recovery" option; it's already disabled.
  2. Binary logging is not enabled.
  3. Checked the actual database size: my database is only 10GB in size.
  4. There is no innodb_per_table flag set, so it must be "false" by default.

The actual database size is 10GB, but the storage usage is now up to 220GB! That's a lot of money!

I couldn't resolve this issue; please give me some ideas or tips. Thank you!
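A few server-side checks can show where the space is going (a sketch; host and credentials are placeholders, run from any client that can reach the instance): per-schema size as MySQL sees it versus the 220GB the instance reports, whether binary logs exist despite the setting, and how big the InnoDB temporary/undo files are.

# Per-schema size as MySQL sees it (compare with the 220GB the instance reports)
mysql -h <instance-ip> -u root -p -e "
  SELECT table_schema,
         ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS size_gb
  FROM information_schema.tables
  GROUP BY table_schema
  ORDER BY size_gb DESC;"

# Are binary logs actually present? (an error here just means binlog is off)
mysql -h <instance-ip> -u root -p -e "SHOW BINARY LOGS;"

# Rough check for bloated InnoDB temp/undo tablespaces
mysql -h <instance-ip> -u root -p -e "
  SELECT file_name,
         ROUND(total_extents * extent_size/1024/1024/1024, 2) AS size_gb
  FROM information_schema.files
  ORDER BY size_gb DESC LIMIT 10;"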

Windows Server DNS Server Failure

Posted: 05 Dec 2021 11:27 AM PST

I am having some issues with Windows Server 2022's DNS resolution and was hoping to get some insights. I have included some screenshots throughout the post

The server in question is running on Hyper-V and is set up as an Active Directory Domain Controller with the DNS and DHCP roles installed. I have set up my DNS forwarders as shown in this screenshot.

I've noticed an event showing up in the event logs a fair amount saying The DNS server encountered an invalid domain name in a packet from 1.1.1.1. The packet will be rejected. The event data contains the DNS packet. Trying to run an nslookup on the domains shown in the packet will result in a failure, however pointing nslookup at my forwarders directly will resolve correctly.

It seems that there are some domains that fail more than others, however after a while the failed domains may begin to resolve correctly until I clear the DNS cache. The domain name I've been testing with is token.safebrowsing.apple as I've found it fails the most reliably, however I have seen this happening with all manner of domains, including www.icann.org. Just browsing the internet, I've found websites will fail to resolve maybe 5% of the time?

This is the error nslookup returns, however after a while it'll stop even trying to query the forwarders and simply return this error. As mentioned above, pointing nslookup directly at 1.1.1.1 will work correctly

I have run WireShark to try and get to the bottom of this, and you can see as the DNS Service tries to query each forwarder, getting a server failure from each one before returning to the client (in this case, 10.10.0.55) with servfail. The DC/DNS Server is configured on 10.100.0.30. Here's a screenshot of the packet view (packet 83 from previous screenshot)

So far I've tried using different forwarders (I've found that removing 1.1.1.1 and leaving just 8.8.8.8 stopped the error in the event viewer but not the actual resolution error). I've also tried playing with the DNSSEC settings and trust points, removing all forwarders and just using root hints, disabling the server from listening on its IPv6 address, and enabling/disabling various options under advanced properties, to no avail. I have also tried increasing the timeout values, but still nothing.

Been scratching my head for a little while so any advice would be amazing! Please let me know if any more info is required.

Thanks!

django rest nginx server static in port 8000

Posted: 05 Dec 2021 08:36 AM PST

I am having trouble with showing uploaded media.

When I hit this URL: http://localhost:8001/media/avatars/Max.jpeg the image is found; it works great.

But in Django REST framework the serialized image URL is http://localhost/media/avatars/Max.jpeg, which is wrong, because my server is running on a different port. Also, I don't want port 80 to serve my images.


This is nginx config

server {
    listen 8001;

    location /static {
        alias /backend/staticfiles;
    }
    location /media {
        alias /backend/media;
    }

    location / {
        proxy_pass http://backend:8000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}

My question is: what's wrong with Django REST framework? Why is it serializing URLs with port 80 instead of 8001, when my backend is reached on port 8001?
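One likely factor worth checking (an assumption, not a confirmed diagnosis): Django and DRF build absolute URLs from the incoming Host header, and proxy_set_header Host $host; passes the hostname without the port, whereas nginx's $http_host keeps the :8001. A hedged sketch of the two usual ways to let the backend see the port, plus the reload step:

# In the "location /" block above, either keep the port in the Host header:
#     proxy_set_header Host $http_host;    # $http_host preserves ":8001"
#
# or keep $host and forward the port separately (requires Django to honour it,
# e.g. USE_X_FORWARDED_PORT = True in settings.py):
#     proxy_set_header Host $host;
#     proxy_set_header X-Forwarded-Port $server_port;
#
# Then test and reload nginx:
nginx -t && nginx -s reload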

How do I fix issue with renewing my certbot certificates on ubuntu

Posted: 05 Dec 2021 09:03 AM PST

I am trying to renew my certbot certificates by running the command certbot renew, and I get this error:

2021-12-02 10:46:30,686:INFO:certbot.plugins.selection:Plugins selected: Authenticator nginx, Installer nginx
2021-12-02 10:46:30,779:DEBUG:acme.client:Sending GET request to https://acme-staging-v02.api.letsencrypt.org/directory.
2021-12-02 10:46:30,783:INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): acme-staging-v02.api.letsencrypt.org
2021-12-02 10:46:30,960:WARNING:certbot.renewal:Attempting to renew cert (ventureserp.com) from /etc/letsencrypt/renewal/ventureserp.com.conf produced an unexpected error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645). Skipping.
2021-12-02 10:46:30,975:DEBUG:certbot.renewal:Traceback was:
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/requests/packages/urllib3/connectionpool.py", line 595, in urlopen
    chunked=chunked)
  File "/usr/local/lib/python3.5/dist-packages/requests/packages/urllib3/connectionpool.py", line 352, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.5/dist-packages/requests/packages/urllib3/connectionpool.py", line 831, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.5/dist-packages/requests/packages/urllib3/connection.py", line 289, in connect
    ssl_version=resolved_ssl_version)
  File "/usr/local/lib/python3.5/dist-packages/requests/packages/urllib3/util/ssl_.py", line 308, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.5/ssl.py", line 377, in wrap_socket
    _context=self)
  File "/usr/lib/python3.5/ssl.py", line 752, in __init__
    self.do_handshake()
  File "/usr/lib/python3.5/ssl.py", line 988, in do_handshake
    self._sslobj.do_handshake()
  File "/usr/lib/python3.5/ssl.py", line 633, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)

When I run certbot --version it gives me 0.31.0, which seems to be the latest version of certbot, so I am not really sure why this is happening. I have gone through numerous articles online and they didn't help my cause. Please, anyone who can kindly help, as this is urgent.
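The failure happens before any renewal logic runs: the local Python/OpenSSL stack cannot validate the ACME endpoint's TLS certificate, which on older systems often comes down to an outdated CA bundle (the DST Root CA X3 expiry in September 2021 is a common trigger) rather than certbot itself. A quick way to check, using the hostname from the log above:

# Does the OS trust store validate the ACME endpoint's certificate chain?
openssl s_client -connect acme-staging-v02.api.letsencrypt.org:443 \
    -servername acme-staging-v02.api.letsencrypt.org </dev/null 2>/dev/null \
    | grep -E 'Verify return code|issuer'

# Make sure the distro CA bundle is current, then retry the renewal
sudo apt-get update && sudo apt-get install --only-upgrade ca-certificates
sudo update-ca-certificates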

PCI NVMe mdadm RAID1 performance too slow

Posted: 05 Dec 2021 10:30 AM PST

I know this has been discussed multiple times, but I have not found any solution so far that worked, so I'm posting here hoping there are some solutions in Dec 2021…

I have a Dell R640 server with dual Xeon Gold processors and 384GB RAM. The chassis only takes SATA/SAS drives (it does not support U.2) and I don't have the budget for a new server that supports U.2.

Note: my use case is to provide storage for VMs to take advantage of NVMe speeds.

So we opted for a PCI card: Dell SSD NVMe M.2 PCI-e 2x Solid State Storage Adapter Card 23PX6 NTRCY. It supports 2 NVMe drives and connects to both via bifurcation, each with an x4 PCIe lane.

I have two Kingston 2TB NVMe drives and I created an mdadm-based RAID1.

The write performance of a single NVMe SSD is 1800MBps, but the RAID1 has a write speed of 500MBps.

I found that the internal bitmap (Bitmap: Internal) was a possible problem, and I applied

mdadm <dev> --grow --bitmap=none  

Even after this the performance is nearly the same.

Any suggestions on what else I can try?
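When comparing numbers like this, it helps to benchmark the raw drives and the md device with the same tool and parameters, on the host rather than inside a VM, so the virtualization layer is taken out of the picture. A hedged fio sketch (the device reads are read-only; the write test goes to a scratch file on the mounted array so nothing is destroyed, adjust the mount path):

# Sequential read straight from the devices (safe, read-only)
fio --name=nvme-read --filename=/dev/nvme0n1 --rw=read --bs=1M --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=30 --time_based --readonly
fio --name=md-read   --filename=/dev/md0     --rw=read --bs=1M --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=30 --time_based --readonly

# Sequential write to a scratch file on the mounted array (adjust the path)
fio --name=md-write --directory=/mnt/md0 --size=10G --rw=write --bs=1M \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --time_based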


Update: I am not sure what happened, but today when I ran the speed test again, the speed is within expectations: a read of 1039MBps and a write of 1352MBps (using CrystalDiskMark in a VM on this host).

mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Nov 28 19:08:22 2021
        Raid Level : raid1
        Array Size : 1953381440 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953381440 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Dec  2 10:33:50 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : server1:0  (local to host server1)
              UUID : 69bab65f:9daa6546:687fc567:bd50164a
            Events : 26478

    Number   Major   Minor   RaidDevice State
       0     259        2        0      active sync   /dev/nvme0n1p1
       1     259        3        1      active sync   /dev/nvme1n1p1

Apache: restrict serving images to authenticated users

Posted: 05 Dec 2021 12:03 PM PST

I'm trying to figure out a way to restrict access to a media folder in my Apache config. The folder takes uploads from a Django site, and image/PDF uploads are displayed in the site to authenticated users. The problem is that any unauthenticated schmo can navigate to mysite.com/media/images/pic1.jpg. This shouldn't be possible. I've tried a few things to restrict this behavior, but I think I need a pointer or two.

first try : XSendfile

XSendfile seemed to work, but (as the name suggests) it sends the file for download, and then my page that's supposed to display images doesn't load. So it seems this isn't what I need for my use case.

second try : rewrite rule

I added some rewrite rules to the apache config:

RewriteCond "%{HTTP_REFERER}" "!^$"  RewriteCond "%{HTTP_REFERER}" "!mysite.com/priv/" [NC]  RewriteRule "\.(gif|jpg|png|pdf)$"    "-"   [F,NC]  

All the parts of the site that require authentication are behind the /priv/ path, so my idea was that if this worked, navigating to /media/images/pic1.jpg would be rewritten. But this didn't work either: mysite.com/media/images/pic1.jpg still shows the image.

third try : environment

I tried something similar with an environment variable inside the virtual host:

<VirtualHost *:80>
    ...
    SetEnvIf Referer "mysite\.com\/priv" localreferer
    SetEnvIf Referer ^$ localreferer
    <FilesMatch "\.(jpg|png|gif|pdf)$">
        Require env localreferer
    </FilesMatch>
    ...
</VirtualHost>

But this also didn't work; I can still navigate directly to the image.
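A side note on the second and third attempts: the Referer header is supplied by the client and trivially forged, so even when such rules match they only deter casual hotlinking rather than enforce authentication. This is easy to confirm with curl (URL taken from the example above):

# Without a Referer: should be blocked by the rules above
curl -I https://mysite.com/media/images/pic1.jpg

# With a forged Referer: sails straight past Referer-based restrictions
curl -I -H 'Referer: https://mysite.com/priv/' https://mysite.com/media/images/pic1.jpg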

fourth try : Require valid-user

I added Require valid-user to the v-host, but I can't figure out how to check it against the Django user model. After this change, I get a prompt to log in each time I load a page which displays images (but without htaccess etc. there's nothing to auth against, and no images are displayed on the site).

I then tried to implement what is described here (https://docs.djangoproject.com/en/3.2/howto/deployment/wsgi/apache-auth/), but my Django project doesn't like WSGIHandler (as opposed to the default get_wsgi_application()): I get a raise AppRegistryNotReady("Apps aren't loaded yet.") error. It seems like this might be the most reasonable approach, but I don't know how to get the WSGIHandler working, or the approach working with get_wsgi_application().

I'm aware that I could give the files hard-to-guess uuid-like names, but this seems like a half-assed solution. So, what's my best strategy to restrict access to the media folder so that these images are only accessible within the part of the site where users are authenticated?

Ubuntu 20.04, Apache 2.4

| Edit, following some advice |

auth.py

def check_password(environ, username, password):
    print("---->>>---->>>---->>>---->>>---->>> check_password() has been called  <<<----<<<----<<<----<<<----<<<----")

    return True

#from django.contrib.auth.handlers.modwsgi import check_password

vhost:

<VirtualHost *:80>
    ServerName mysite.com
    ServerAlias mysite.com
    DocumentRoot /path/to/docroot/

    Alias /static/ /path/to/docroot/static/

    # Not sure if I need this
    Alias /media/ /path/to/docroot/media/

    <Directory /path/to/docroot/static/>
        Require all granted
    </Directory>

    <Directory /path/to/docroot/media/>
        Require all granted
    </Directory>

    # this is my restricted access directory
    <Directory /path/to/docroot/media/priv/>
        AuthType Basic
        AuthName "Top Secret"
        AuthBasicProvider wsgi
        WSGIAuthUserScript /path/to/docroot/mysite/auth.py
        Require valid-user
    </Directory>

    <Directory /path/to/docroot/mysite/>
        <Files "wsgi.py">
            Require all granted
        </Files>
    </Directory>

    WSGIDaemonProcess open-ancestry-web python-home=/path/to/ENV/ python-path=/path/to/docroot/ processes=10 threads=10
    WSGIProcessGroup mysite-pgroup
    WSGIScriptAlias / /path/to/docroot/mysite/wsgi.py

    LogLevel trace8
    ErrorLog "|/bin/rotatelogs -l /path/to/logs/%Y%m%d-%H%M%S_443errors.log 30"
    CustomLog "|/bin/rotatelogs -l /path/to/logs/%Y%m%d-%H%M%S_443access.log 30" combined
</VirtualHost>

What's the best means to clone the root linux hard drive to a new drive?

Posted: 05 Dec 2021 09:05 AM PST

I bought a new enterprise grade SSD to replace my Samsung SSD. It was $500, not cheap. Anyway, the purpose is to have a more reliable and longer-lasting root hard drive, since a non-enterprise drive is effectively a ticking time bomb.

My system is CentOS 7 x64. My server is a dedicated server (not a virtual machine or VPS).

My server is live. It would take more time than I have to set up a new drive from scratch and copy everything over; the last time it took me weeks, and I didn't have a full-time job then like I do now. So that isn't feasible.

Instead I want to clone the drive to the new one, which is larger than the old one.

Although my server is live, I am willing to take it offline for a couple hours if need be to do the clone. But the less time it's offline the better, of course. I have a lot of sites on there.

How can I clone the old drive to the new drive? What do I use to do the clone?

I cannot go in person, it's way too far away. I also do not have access to a windows system.

I could dd the whole live drive to the new one, but I read online that there could be errors if I tried that. They said it's best to take it offline, mount it read-only first, and then clone the drive to the new one.

So, how exactly do I do this? Is dd the right solution? Do I mount both drives in a second server so I can SSH into it and do the copy there? Or what is the best way in this scenario?
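For reference, the usual offline approach is to boot the machine into a rescue/live environment (most providers offer one), so the root filesystem is not mounted read-write during the copy, and then clone block for block. A hedged sketch, assuming the old disk appears as /dev/sda and the new one as /dev/sdb; always confirm with lsblk first, since dd to the wrong target is unrecoverable:

# In the rescue system: identify the disks first
lsblk -o NAME,SIZE,MODEL,SERIAL

# Block-level clone, old -> new (destroys everything on /dev/sdb)
dd if=/dev/sda of=/dev/sdb bs=1M status=progress conv=noerror,sync

# Afterwards: since the new disk is larger, grow the last partition and the
# filesystem on it (e.g. with growpart/parted plus resize2fs or xfs_growfs),
# then reboot from the new disk.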

GitLab Runner failing to register after migration to new cluster

Posted: 05 Dec 2021 08:27 AM PST

I have GitLab installed in Kubernetes with their Helm chart.

I migrated my old Gitlab deployment from one cluster to another with the following steps:

  • Scale down all pods in old cluster
  • Apply values.yml with helm to new cluster (to create PVCs)
  • Scale down all pods in new cluster
  • Change DNS records, HAProxy, etc
  • Manually rsync data from old PVCs to new PVCs (minio, gitaly, redis, postgres, prometheus)
  • Run helm upgrade to bring deployments back online in new cluster

After all that, the deployment for the most part works fine; I'm able to log in and use Git.

But the runner is failing to register, so I can't run any CI. Looking at the gitlab-gitlab-runner pod, I see the message below repeated over and over:

Registration attempt 30 of 30
Runtime platform                                    arch=amd64 os=linux pid=691 revision=3b6f852e version=14.0.0
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...

ERROR: Registering runner... failed                 runner=y6ixJoR1 status=500 Internal Server Error
PANIC: Failed to register the runner. You may be having network problems.

As you can see, it's failing to register the runner. Trying to go to /admin/runners gives me a 500 error.

Where can I see more information as to why I am getting this 500 error?
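The 500 is raised server-side, so the Rails application log is the place to look; with the Helm chart that means the webservice pods. A generic sketch follows (deployment, container and secret names vary with the release name, so adjust to what kubectl get pods/secrets actually shows). One common culprit after a migration like this, offered only as an assumption to verify, is that the Rails secrets, which encrypt values such as runner registration tokens in the database, no longer match the restored database:

# Find the webservice pods and tail their logs while reproducing the 500
kubectl -n <gitlab-namespace> get pods
kubectl -n <gitlab-namespace> logs deploy/<release>-webservice-default -c webservice --tail=200

# Compare the Rails secrets in the new cluster with the ones from the old one
kubectl -n <gitlab-namespace> get secret <release>-rails-secret -o yaml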

How to set up an XPRA proxy server for multiple users, reverse proxied with Apache

Posted: 05 Dec 2021 08:36 AM PST

On an Ubuntu 20.04 LTS system I would like to configure the "XPRA proxy server" in the following manner:

  1. XPRA should be accessed via its HTML5 client so that users don't need XPRA on their machines.
  2. Users shall be able to connect to XPRA via the URL https://xpra.example.net, instead of http://example.net:14500. I.e. the XPRA HTML service should be reverse-proxied by the (Apache) webserver so that HTTPS requests to the xpra.example.net virtual host get forwarded to localhost:14500. I could not find any descriptions of how to do this: most likely one must use websockets.
  3. Each user shall get access to his/her graphical desktop. I figured out that if I manually start XPRA on the server as xpra start-desktop --bind-tcp=0.0.0.0:14500 --start-child=startlxde (for the LXDE desktop), then I can indeed connect via the HTML5 client using http://example.net:14500, but this quickly gets clumsy as one needs to SSH into the server first to run the xpra start-desktop command there.
  4. The proxy server shall be started/stopped as a system service. There is indeed a /lib/systemd/system/xpra.service service file but I am not sure whether it is correctly configured for these requirements above.

I have tried my best to figure all this out from the XPRA documentation but failed. Has anyone succeeded in setting up XPRA this way? If yes, any help would be much appreciated.
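Regarding point 2: the HTML5 client fetches the page over plain HTTP(S) and then upgrades the same connection to a WebSocket, so the reverse proxy has to pass both. A hedged sketch of the kind of Apache vhost commonly used for that (certificate paths are placeholders, and whether xpra needs further headers or paths should be checked against its documentation):

# Enable the proxy modules Apache needs for HTTP + WebSocket forwarding
sudo a2enmod proxy proxy_http proxy_wstunnel rewrite ssl

sudo tee /etc/apache2/sites-available/xpra.example.net.conf <<'EOF'
<VirtualHost *:443>
    ServerName xpra.example.net
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/xpra.example.net/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/xpra.example.net/privkey.pem

    # Upgrade requests that ask for a WebSocket, plain HTTP otherwise
    RewriteEngine on
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule ^/(.*) ws://127.0.0.1:14500/$1 [P,L]

    ProxyPass        / http://127.0.0.1:14500/
    ProxyPassReverse / http://127.0.0.1:14500/
</VirtualHost>
EOF

sudo a2ensite xpra.example.net && sudo systemctl reload apache2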

PS I would have liked to tag this question with "xpra" but don't have enough reputation to do so.

SELINUX sysadm_u and SSH - Unable to get valid context for username

Posted: 05 Dec 2021 11:04 AM PST

Once I set the user username to sysadm_u, they are no longer able to log in via SSH and receive the error: Unable to get valid context for username

Commands

semanage login -m -s sysadm_u username
semanage login -a -s sysadm_u username
restorecon -RF /home/username

When I run ausearch -m AVC -m USER_AVC -m SELINUX_ERR I get the following:

type=AVC msg=audit(1590015667.658:4996): avc:  denied  { noatsecure } for  pid=7424 comm="bash" scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:unconfined_t:s0-s0:c0.c1023 tclass=process permissive=0
type=AVC msg=audit(1590015667.658:4996): avc:  denied  { siginh } for  pid=7424 comm="bash" scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:unconfined_t:s0-s0:c0.c1023 tclass=process permissive=0

From here I am lost as to how to fix this.
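One thing worth checking, offered as an assumption rather than a confirmed fix: SSH logins that map to sysadm_u are gated by the SELinux boolean ssh_sysadm_login, which is off by default, and with it off sshd cannot derive a valid sysadm context for the session.

# Check and (persistently) enable sysadm_u logins over SSH
getsebool ssh_sysadm_login
sudo setsebool -P ssh_sysadm_login on

# Confirm what login mapping the user actually has now
sudo semanage login -l | grep username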

Command line with php-fpm

Posted: 05 Dec 2021 11:04 AM PST

How do I run a command-line script with php-fpm? I can run:

php /home/some_script.php  

But it runs the latest version of PHP. I have several versions installed and configured, one for each virtual host. I need crontab command lines to run PHP scripts, but not all with the same PHP version... Is there a command line like:

php5.6-fpm /home/some-script.php  

?
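For what it's worth, php-fpm is the FastCGI process manager and is not meant to run scripts from a terminal; when several PHP versions are installed side by side, each typically ships its own CLI binary, and cron can call that binary by its full path. A hedged sketch (the exact paths depend on how the extra versions were installed, e.g. distribution packages, deb.sury.org or Remi SCL):

# See which per-version CLI binaries exist on this machine
ls /usr/bin/php* 2>/dev/null
ls /opt/remi/php*/root/usr/bin/php 2>/dev/null

# crontab entry pinned to a specific version (use whichever path exists here)
# 0 3 * * * /usr/bin/php5.6 /home/some_script.php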

The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB

Posted: 05 Dec 2021 11:32 AM PST

My environment:

# uname -a
Linux app11 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04) x86_64 GNU/Linux
#
# cat /etc/*release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
#

While trying to run apt-get update, I get a bunch of errors:

# apt-get update
Ign:1 http://deb.debian.org/debian stretch InRelease
Hit:2 http://security.debian.org stretch/updates InRelease
Hit:3 http://deb.debian.org/debian stretch-updates InRelease
Hit:4 http://deb.debian.org/debian stretch-backports InRelease
Hit:5 http://deb.debian.org/debian stretch Release
Get:6 http://packages.cloud.google.com/apt cloud-sdk-stretch InRelease [6,377 B]
Ign:7 https://artifacts.elastic.co/packages/6.x/apt stable InRelease
Hit:8 https://artifacts.elastic.co/packages/6.x/apt stable Release
Get:9 http://packages.cloud.google.com/apt google-compute-engine-stretch-stable InRelease [3,843 B]
Get:10 http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-stretch InRelease [3,876 B]
Hit:11 https://download.docker.com/linux/debian stretch InRelease
Err:6 http://packages.cloud.google.com/apt cloud-sdk-stretch InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Err:9 http://packages.cloud.google.com/apt google-compute-engine-stretch-stable InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Err:10 http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-stretch InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Fetched 6,377 B in 0s (7,132 B/s)
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.cloud.google.com/apt cloud-sdk-stretch InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.cloud.google.com/apt google-compute-engine-stretch-stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-stretch InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: Failed to fetch http://packages.cloud.google.com/apt/dists/cloud-sdk-stretch/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: Failed to fetch http://packages.cloud.google.com/apt/dists/google-compute-engine-stretch-stable/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: Failed to fetch http://packages.cloud.google.com/apt/dists/google-cloud-packages-archive-keyring-stretch/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: Some index files failed to download. They have been ignored, or old ones used instead.
#

Please advise.
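All three failing entries are packages.cloud.google.com repositories, so the missing key is Google's apt signing key, which Google rotates and documents re-fetching. A hedged sketch of the usual fix:

# Re-import Google's current apt signing key (documented by Google Cloud)
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Alternatively, fetch the specific missing key id from a keyserver:
# apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6A030B21BA07F4FB

apt-get update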

mdadm RAID6, recover 2 disk failure during reshape

Posted: 05 Dec 2021 10:06 AM PST

I was reshaping my array from 10 disks to 11 into a degraded state (the drive I want to add already has data on it, but there is nowhere else to put that data). Two drives disconnected while the reshape was running (power issue).

Is it still possible to recovery this array?

After power cycling them I was unable to add them to the array again:

mdadm: /dev/md0 has failed so using --add cannot work and might destroy
mdadm: data on /dev/sdX1.  You should stop the array and re-assemble it.

Since rebooting, I've tried:

--assemble, fails due to "faulty" disks

--assemble --force, fails:

md: sdl1 does not have a valid v1.2 superblock, not importing!
md: sdk1 does not have a valid v1.2 superblock, not importing!
md/raid:md0: not enough operational devices (3/11 failed)
md/raid:md0: failed to run raid set.

I've been reading the RAID Recovery article, but so far not been successful.

mdadm --create --chunk=64 --size=1953512448 --assume-clean --level=6 --raid-devices=11 /dev/md0 /dev/sd{f,h,e,g,m,i,k,l,n,d}1 missing, fails:

mdadm: /dev/sdf1 is smaller than given size. 1953512256K < 1953512448K + metadata  

for all drives. My argument to --size is "Used Dev Size / 2" from mdadm --examine /dev/sdf1. I've downgraded mdadm through each version down to v3.1.2 (when the default metadata was changed to 1.2; I know I never specified it manually).

Removing --size, I can create the array, but not mount it:

XFS (md0): Mounting V4 Filesystem
XFS (md0): Log inconsistent (didn't find previous header)
XFS (md0): failed to find log head
XFS (md0): log mount/recovery failed: error -5
XFS (md0): log mount failed

Info

My mdadm --detail before reshape:

/dev/md0:
        Version : 1.2
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
     Array Size : 15628099584 (14904.12 GiB 16003.17 GB)
  Used Dev Size : 1953512448 (1863.01 GiB 2000.40 GB)
   Raid Devices : 10
  Total Devices : 10
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Jun 17 14:16:09 2015
          State : clean
 Active Devices : 10
Working Devices : 10
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : ubuntu:0
           UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
         Events : 6037532

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       9       8      177        1      active sync   /dev/sdl1
      12       8       65        2      active sync   /dev/sde1
       3       8       97        3      active sync   /dev/sdg1
       4       8      145        4      active sync   /dev/sdj1
       6       8      193        5      active sync   /dev/sdm1
       7       8      113        6      active sync   /dev/sdh1
       8       8      129        7      active sync   /dev/sdi1
      10       8      161        8      active sync   /dev/sdk1
      11       8       49        9      active sync   /dev/sdd1

And mdadm --examine after failure and reboot with all disks visible again:

Device paths have changed as there was a hotswap disk added before reshape started

/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 329fc32d:e9cf2ff4:3aa6c9a0:500aa445

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3196923264 (3048.82 GiB 3273.65 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:46:34 2015
       Checksum : 904d0c9c - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 9
    Array State : A.AAA...AA. ('A' == active, '.' == missing)

/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : e59303ea:e613013e:ef8af657:1fc6ccab

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3196923264 (3048.82 GiB 3273.65 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:46:34 2015
       Checksum : b3b3f659 - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 2
    Array State : A.AAA...AA. ('A' == active, '.' == missing)

/dev/sdf1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 6aa0f9d8:e7b0cc66:d2f2a600:ef305279

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3148373376 (3002.52 GiB 3223.93 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:46:34 2015
       Checksum : 3beac20c - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 0
    Array State : AAAAAAAAAA. ('A' == active, '.' == missing)

/dev/sdg1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 4b1d87a9:16027400:df71810f:3ce53c50

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3196923264 (3048.82 GiB 3273.65 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:46:34 2015
       Checksum : 91a563ea - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 3
    Array State : A.AAA...AA. ('A' == active, '.' == missing)

/dev/sdh1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
  Used Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 27c8fefa:8b2b74a2:9a456d34:d1a60c20

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3196923264 (3048.82 GiB 3273.65 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:29:09 2015
       Checksum : ee4ae103 - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 1
    Array State : AAAAAA..AA. ('A' == active, '.' == missing)

/dev/sdi1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : bebc3764:9e582fe8:01de9766:2d8c452b

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3196923264 (3048.82 GiB 3273.65 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:29:09 2015
       Checksum : 6632686d - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 5
    Array State : AAAAAA..AA. ('A' == active, '.' == missing)

/dev/sdk1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
  Used Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 986d9f31:3a74b90d:7800779e:31607539

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3148373376 (3002.52 GiB 3223.93 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:24:09 2015
       Checksum : de0a23b - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 6
    Array State : AAAAAAAAAA. ('A' == active, '.' == missing)

/dev/sdl1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
  Used Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : a5f4ac69:f6bbac94:60c1b790:db2c223e

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3196923264 (3048.82 GiB 3273.65 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:28:58 2015
       Checksum : c9909fb9 - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 7
    Array State : AAAAAA.AAA. ('A' == active, '.' == missing)

/dev/sdm1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 938d9190:582eecf8:b9157fce:38705df2

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3196923264 (3048.82 GiB 3273.65 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:46:34 2015
       Checksum : d2462ecd - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 4
    Array State : A.AAA...AA. ('A' == active, '.' == missing)

/dev/sdn1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : 70485ad1:0f5f2362:e8f5489a:577ac908
           Name : ubuntu:0
  Creation Time : Fri Jan 27 19:20:36 2012
     Raid Level : raid6
   Raid Devices : 11

 Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
     Array Size : 17581612032 (16767.13 GiB 18003.57 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 289f68f6:f43d8a40:2203e21c:e6cff371

Internal Bitmap : 2 sectors from superblock
  Reshape pos'n : 3196923264 (3048.82 GiB 3273.65 GB)
  Delta Devices : 1 (10->11)

    Update Time : Wed Jun 17 19:46:34 2015
       Checksum : 4db49d1a - correct
         Events : 6039833

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 8
    Array State : A.AAA...AA. ('A' == active, '.' == missing)
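One detail visible in the output above matters for the --create attempts: the members do not share a single data offset (2048 sectors on most, 262144 sectors on sdh1/sdk1/sdl1), and a newer mdadm will pick yet another default, which is exactly the kind of mismatch that yields the "smaller than given size" error and an unmountable result; the superblocks also record an interrupted reshape that a fresh --create cannot reproduce. Before experimenting further (ideally only against copy-on-write overlays, never the raw disks), it is worth tabulating the recorded geometry:

# Summarise the key geometry of every member without writing anything
for d in /dev/sd{d,e,f,g,h,i,k,l,m,n}1; do
    echo "== $d"
    mdadm --examine "$d" | grep -E 'Data Offset|Avail Dev Size|Used Dev Size|Device Role|Reshape'
done
# Newer mdadm versions accept --data-offset= on --create, which may be needed
# to match the recorded offsets; treat any re-create as a last resort.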

Delegate session management on RemoteApp 2012

Posted: 05 Dec 2021 10:06 AM PST

We've built a rather large RemoteApp environment on 2012 R2, fully patched. Everything is working fine, so now it's time to offshore and delegate tasks to the first-line team.

We would like our first-line guys to be able to manage the sessions. If, for example, a session hangs (lost connection to the profile drive), they should be able to log off that session.

I've tried setting permissions like this on all servers:

wmic /namespace:\\root\CIMV2\TerminalServices PATH Win32_TSPermissionsSetting WHERE (TerminalName="RDP-Tcp") CALL AddAccount "ADMIN\MyGroupWithPeopleManagingTheTS",2  

But to no avail, they can't open Server Manager > Remote Desktop Services, because they can't connect to the RD Connection Brokers.

If they open up task manager and try logging off users there, they don't have the appropriate rights. This option is also not the best because it would require them to go and look on each server if the user is logged on there (auto load balanced across multiple servers and regions).

So, basically: How can members of a certain group log users off, without giving them admin permissions on the machine?

This is how I would do it on 2008, but the tools are no longer available: https://technet.microsoft.com/en-us/library/cc753032.aspx

gnupg 'libgpg-error.so.0 no version information available'

Posted: 05 Dec 2021 09:48 AM PST

I'm trying to compile gnupg-2.1.0 for Debian Wheezy. I've downloaded and compiled the required libraries (libgpg-error-1.17, libgcrypt-1.6.2, libksba-1.3.2, libassuan-2.1.3, and pth-2.0.7, in that order) via ./configure, make, make install. I then added /usr/local/lib to /etc/ld.so.conf and ran ldconfig so that gnupg could find the libraries.

GnuPG compiles fine, but upon attempting to run either ./agent/gpg-agent or ./g10/g I am greeted with this error:

alpha@virtual:~/gnupg-2.1.0$ ./agent/gpg-agent --version
./agent/gpg-agent: /lib/i386-linux-gnu/libgpg-error.so.0: no version information available (required by ./agent/gpg-agent)
./agent/gpg-agent: /lib/i386-linux-gnu/libgpg-error.so.0: no version information available (required by /usr/local/lib/libgcrypt.so.20)
./agent/gpg-agent: relocation error: ./agent/gpg-agent: symbol gpgrt_set_alloc_func, version GPG_ERROR_1.0 not defined in file libgpg-error.so.0 with link time reference

ldd ./agent/gpg-agent produces

root@virtual:/home/alpha/gnupg-2.1.0# ldd ./agent/gpg-agent
./agent/gpg-agent: /lib/i386-linux-gnu/libgpg-error.so.0: no version information available (required by ./agent/gpg-agent)
./agent/gpg-agent: /lib/i386-linux-gnu/libgpg-error.so.0: no version information available (required by /usr/local/lib/libgcrypt.so.20)
        linux-gate.so.1 =>  (0xb7750000)
        libgcrypt.so.20 => /usr/local/lib/libgcrypt.so.20 (0xb7698000)
        libgpg-error.so.0 => /lib/i386-linux-gnu/libgpg-error.so.0 (0xb7694000)
        libassuan.so.0 => /usr/lib/i386-linux-gnu/libassuan.so.0 (0xb7681000)
        libnpth.so.0 => /usr/local/lib/libnpth.so.0 (0xb767d000)
        libpthread.so.0 => /lib/i386-linux-gnu/i686/cmov/libpthread.so.0 (0xb7664000)
        libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xb74ff000)
        librt.so.1 => /lib/i386-linux-gnu/i686/cmov/librt.so.1 (0xb74f6000)
        /lib/ld-linux.so.2 (0xb7751000)

Why is this error occurring and how can I fix it?

Resolution

Resolved by adding /usr/local/lib to LD_LIBRARY_PATH via export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH} and echo /usr/local/lib > /etc/ld.so.conf.d/local.conf

Mysterious login attempts to windows server

Posted: 05 Dec 2021 09:03 AM PST

I have a Windows 2008R2 server that is reporting failed login attempts from a number of workstations on our network. Some event log details:

Event ID 4625, Status: 0xc000006d, Sub Status: 0xc0000064
Security ID: NULL SID, Account Name: joedoe, Account Domain: Acme
Workstation Name: WINXP1, Source Network Address: 192.168.1.23, Source Port: 1904
Logon Process: NtLmSsp, Authentication Package: NTLM, Logon Type: 3 (network)

I believe this is coming from some NetBIOS service or similar (maybe File Explorer) keeping an inventory of its network neighborhood and also trying to authenticate.

Is there a way to turn this off without having to turn off file sharing all together? In other words, clients authenticating against file servers that they use is of course no problem, but I want to eliminate clients trying to authenticate to servers that they are not using and have no business with. The above example is only one of thousands of log alerts for similar failed network authentications.

What can I do to clean this up / handle this?

Working around the stale pidfile problem after hard restart kills my daemon

Posted: 05 Dec 2021 09:56 AM PST

I'm using Red Hat Linux (RHEL5) on a (VMWare) VM. I've written a daemon which should stay running all the time and automatically run on boot.

Last night the VM host had an unrecoverable hardware problem and the VM abruptly halted. When it came back, my daemon didn't start because the pidfile still existed.

Apparently this is called the "stale pidfile syndrome", but I'm not sure what the best long-term approach for mitigating it is. I'm thinking that the startup script in /etc/rc.d* should delete the pidfile before starting the daemon, but the service management script in /etc/init.d should remain the same so things like service mydaemon start don't clobber the pidfile.

/etc/rc.d/rc6.d just has a symlink to the script in /etc/init.d/, so how should I change how it behaves only on boot? I can make an additional script with higher precedence in the rc.d dirs, but it seems hacky. Someone also suggested adding logic like "if uptime is less than 1 minute, delete the pidfile" but that seems hacky too.

Any thoughts or solutions or best practices?
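A common pattern that avoids special-casing boot at all is to have the start action itself treat the pidfile as stale whenever the recorded PID no longer maps to a live process (kill -0 merely tests for existence, so there is a small chance of colliding with a recycled PID). A minimal sketch of that check in an init-style start function, with placeholder paths:

PIDFILE=/var/run/mydaemon.pid

start() {
    if [ -f "$PIDFILE" ]; then
        if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
            echo "mydaemon already running (pid $(cat "$PIDFILE"))"
            return 0
        fi
        echo "removing stale pidfile $PIDFILE"
        rm -f "$PIDFILE"
    fi
    /usr/sbin/mydaemon   # the daemon re-creates the pidfile itself
}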

SCO UNIX problem: "Cannot create /var/adm/utmp or /var/adm/utmpx"

Posted: 05 Dec 2021 09:03 AM PST

Hey everyone, I have an old server that doesn't boot. I don't know the version of Unix installed, but I see SCO UNIX. It stops with this error:

UX:init: ERROR: Cannot create /var/adm/utmp or /var/adm/utmpx
UX:init: ERROR: failed write of utmpx entry: "   "
UX:init: ERROR: failed write of utmpx entry: "   "
UX:init: INFO: SINGLE USER MODE

After that message, it just stops. I cannot type or press anything; even CTRL + ALT + DEL does not work.

I cannot get into the system. I have tried booting with a DamnSmallLinux LiveCD but it does not recognize the file system on HDA.

Is there a way to either log in as root or bypass this error?

Thanks.
