Saturday, April 10, 2021

Recent Questions - Server Fault



How can I set-up a domain in Azure and use Exchange Server services there?

Posted: 10 Apr 2021 06:49 PM PDT

I'm currently building a startup company that uses an ISP's services. All we have at this time is IMAP for e-mail, and we're using JIRA/Confluence.

I want to migrate us to a comprehensive infrastructure, using domain accounts for our Windows 10x64 machines, e-mails, shared calendars, shared contacts (i.e. Exchange Server features), Azure DevOps, MS Teams, SharePoint etc.

I did some research, but I couldn't find any information explaining how to do that, particularly regarding using Outlook 2019 with Azure. On the subject of Azure AD, I only find information on how to synchronize an on-premises domain to Azure, but nothing on how to simply use Azure as a replacement for a domain controller.

As a start, I need to know how to create a domain on Azure (i.e. use Azure as a domain controller), migrate our domain name to Azure AD, and get Exchange Server features running in Azure, so we can use Outlook 2019 on our machines.

Kubernetes: Service connection timeout

Posted: 10 Apr 2021 05:58 PM PDT

I'm setting up a lab cluster with 3 nodes (1 master, 2 workers) in 3 different networks, connected by VPN. I used Flannel for the Pod network.

NAME          STATUS   ROLES                  AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
vinet         Ready    <none>                 8m44s   v1.21.0   10.200.0.48    <none>        Ubuntu 18.04.4 LTS   4.15.0-140-generic   docker://20.10.5
vm-150        Ready    control-plane,master   24m     v1.21.0   10.200.0.150   <none>        Ubuntu 18.04.5 LTS   4.15.0-128-generic   docker://20.10.5
vultr.guest   Ready    <none>                 8m47s   v1.21.0   10.200.0.124   <none>        Ubuntu 18.04.5 LTS   4.15.0-132-generic   docker://20.10.5

My config includes a helloworld app (targetPort=8080, replicas=10) and an associated service (nodePort=30001). Everything was fine while the pods were distributed on only 1 node: I could reach the API endpoint by issuing curl localhost:30001, and it load-balanced as expected.

But when the pods spread across the 2 worker nodes, requests forwarded to pods on the other node timed out. For example, on Node 1, curl localhost:30001 gave the following:

root@mysamplehost:~# curl localhost:30001
You've hit hello-deploy-6575485494-snb5k
root@mysamplehost:~# curl localhost:30001
You've hit hello-deploy-6575485494-pqbwd
root@mysamplehost:~# curl localhost:30001
You've hit hello-deploy-6575485494-pjfl6
root@mysamplehost:~# curl localhost:30001
curl: (7) Failed to connect to localhost port 30001: Connection timed out

My sample deploy:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: ngocchien/chien_test:1.0.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
    protocol: TCP
  selector:
    app: hello-world
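Since pod-to-pod traffic between nodes rides Flannel's overlay (VXLAN on UDP port 8472 by default), nodes that sit in different networks joined by a VPN often fail in exactly this way: Flannel auto-detects the node's default interface rather than the VPN one, so inter-node pod traffic never arrives. A sketch of the usual workaround, assuming the VPN interface is named tun0 (a placeholder; substitute your actual interface name), applied to the kube-flannel DaemonSet:

```yaml
# kube-flannel.yml, container args of the flannel DaemonSet - sketch only;
# "tun0" is an assumed VPN interface name, not taken from the question.
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=tun0   # force flannel to use the VPN interface for inter-node traffic
```

It is also worth verifying that UDP 8472 is reachable between all nodes over the VPN before blaming anything else.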

Postfix: limit sending rate per MX domain?

Posted: 10 Apr 2021 04:48 PM PDT

I know Postfix can limit concurrent connections per recipient domain, but I am sending emails to addresses at many different domains that may use the same email service (Google, Outlook, etc.) for their MX records. Is there a way to limit the sending rate per MX domain, as opposed to per domain of the recipient address?
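Postfix has no built-in notion of grouping by MX host, but a common approximation is to define one transport per mail provider, map the recipient domains known to use that provider onto it, and set per-transport limits. A sketch (the transport name gmail, the limits, and the domains are illustrative, not from the question):

```
# master.cf: a clone of the smtp transport with its own tunable limits
gmail     unix  -       -       n       -       -       smtp

# main.cf: per-transport overrides for the "gmail" transport
transport_maps = hash:/etc/postfix/transport
gmail_destination_concurrency_limit = 2
gmail_destination_rate_delay = 1s

# /etc/postfix/transport: recipient domains known to use Google MX hosts
example.com    gmail:
another.org    gmail:
```

The mapping file still has to be maintained by hand (or generated from MX lookups); Postfix itself only sees the recipient domain.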

Docker Swarm, how to access private services on overlay network from an external client

Posted: 10 Apr 2021 04:34 PM PDT

I'm building a micro-service system based on Docker Swarm. Some of these services must be accessible for the company's internal use only: administrative dashboards, DBs, etc. Services are interconnected by Docker's overlay networks, and only public ports are published to the Internet.

What I want is to create a simple admin backend overlay network that every administrative service is connected to, and to add an ingress VPN gateway to it, so that any authorized client on the Internet can connect and access the private internal services as if it were directly connected to the same overlay network.

My idea was to run an OpenVPN server in a container on that network. I've read around a bit, but I wasn't able to find clear information on this approach. I've found people mentioning issues with configuring the routing tables, and with pointing DNS at Docker's resolver so that service names (rather than machine IPs) can be resolved.

I'm still learning Docker, and I'm asking what the best pattern is in these cases. How can I restrict access to an overlay network to only authorized external clients, and let them access services on that network?
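One common pattern for the setup described above, sketched with placeholder image and network names (none of these are from the question): create the admin overlay as attachable, attach every administrative service to it with no published ports, and run the VPN gateway as a service on the same network so VPN clients exit inside the overlay.

```yaml
# First: docker network create --driver overlay --attachable admin-net
#
# Stack file - sketch only; image names are placeholders.
services:
  vpn-gw:
    image: your-openvpn-image        # placeholder VPN server image
    ports:
      - "1194:1194/udp"              # only the VPN port is published
    cap_add:
      - NET_ADMIN                    # needed for the tun device; swarm support
                                     # for cap_add depends on Docker version
    networks:
      - admin-net
  admin-dashboard:
    image: your-dashboard-image      # placeholder admin service
    networks:
      - admin-net                    # no published ports: reachable only via VPN
networks:
  admin-net:
    external: true
```

The VPN server still has to push a route for the overlay subnet and (if you want name resolution) a DNS server to clients, which is where the routing/DNS issues you read about come in.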

Persistent storage in captive portals, or keeping mobile internet during a WiFi connection without internet

Posted: 10 Apr 2021 04:17 PM PDT

I have a little problem here:
I want to create a WiFi network which does not provide internet access. The WiFi network should only serve a single website to the user, like in a museum.
That's pretty simple, right? But here comes the hard part:
I need to store persistent cookies (to save the user's answers and display them back to the user later) AND the mobile internet connection should NOT be interrupted (because the WiFi does not offer an internet connection). Why is this a problem?
If you connect to a WiFi network with a mobile device, the mobile data connection is cut, regardless of whether the WiFi has internet access. There is only one exception: when the internet is blocked (or at least seems blocked) by a captive portal. But in the captive portal "browser" you cannot store persistent cookies; once that "browser" is closed, everything is lost.
So how can I solve that? Sure, the user could use the "normal" browser, but as far as I know this is not possible on iOS: if you open Safari, the "captive portal browser" is raised over Safari...

It would be great if there were a way to do that.

Implicitly allow requests in IIS from valid hostname

Posted: 10 Apr 2021 05:21 PM PDT

I have a few publicly accessible IIS servers and sites (personal and corporate). These hosts have their own domains/subdomains, and all legitimate access to these HTTPS sites happens through those domains.

Almost all HTTP vulnerability scans from bots and rooted servers hit the servers by IP, without a valid hostname; when there is a hostname, it is the default reverse-DNS host, not the actual domain of the site.

Is there a way in IIS to allow only requests with a proper hostname? The site's root app only has bindings for the hostname, but IIS still accepts requests and responds with 404. Ideally, the request would time out, in similar fashion to a host that doesn't have HTTP open at all.

I understand, of course, that this guarantees nothing security-wise: the scanner can still figure out the proper hostname in many ways. But it would still filter out 90% of dumb scans.

An IPS in the firewall can probably do some of this, but in some cases I don't have that luxury. Is there a way in IIS? Redirect the HTTP request to oblivion? (That would probably just change the error to a proxy/gateway HTTP error?)
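One approach within IIS itself, assuming the URL Rewrite module is installed: put a rule on the default (catch-all) site that aborts any request whose Host header is not an expected name. The AbortRequest action drops the connection rather than answering with a 404, which is close to the timeout behaviour described. A sketch (the hostname is a placeholder):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="DropUnknownHost" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <!-- www.example.com is a placeholder for the real site hostname -->
            <add input="{HTTP_HOST}" pattern="^www\.example\.com$" negate="true" />
          </conditions>
          <action type="AbortRequest" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

Alternatively, leaving only hostname bindings on the real sites and binding a contentless "dead" default site to the bare IP achieves a similar effect without the module.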

Route IPv4 to IPv6 as a mechanism to overcome not owning an IPv4 block, for load balancing on an on-premises (non-AWS/GCP) k8s cluster

Posted: 10 Apr 2021 09:12 PM PDT

This is not a question about tunnelling, although that may be part of a solution.

With public cloud providers it's trivial to request a load balancer, because the providers own large public IPv4 blocks. Whilst it's easy to own an IPv6 block, it's non-trivial to hand out load balancer addresses from it, because you can't assume incoming traffic supports IPv6. How can this gap be bridged?

What I'm trying to achieve: given limited public IPv4 addresses (4 of them), generate layer-7 HTTP load balancer A records which map to IPv4 addresses; these IPv4 addresses then route to in-cluster IPv6 addresses. Perhaps SNI is needed here?

Constraints: I can't assume that ingress traffic supports IPv6, so (if possible) SNAT is needed to rewrite IPv4 -> IPv6 and back again (is this possible?), with iptables and conntrack for connection tracking?

E.g. ingress:

  Load balancer A records     Public IPv4 address     <mapping (not tunnelling)>    Public IPv6 address range
  lb[1-n].example.com  ---->  192.0.2.0/24      ------------------------------->    2001:DB8::/32

E.g. egress:

  Public IPv6 address range     Public IPv4 address
  2001:DB8::/32  ------------>  192.0.2.0/24  ------>  source IP (IPv4 or IPv6)
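At layer 7 the ingress half of this needs no NAT or tunnelling at all: an ordinary reverse proxy can listen on one of the four public IPv4 addresses, pick the backend by Host header (or SNI on 443), and open its upstream connection over IPv6. A minimal nginx sketch, using the question's documentation address ranges (the specific backend address and port are placeholders):

```nginx
server {
    listen 192.0.2.1:80;                       # one of the public IPv4 addresses
    server_name lb1.example.com;               # routed by Host header / SNI
    location / {
        proxy_pass http://[2001:db8::10]:8080; # IPv6-only backend inside the cluster
    }
}
```

The proxy host is the only machine that needs both address families; conntrack and SNAT only come into play if you want this at layer 3/4 instead.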

References:

  • https://sookocheff.com/post/kubernetes/understanding-kubernetes-networking-model/
  • https://kubernetes.io/docs/concepts/services-networking/dual-stack/
  • netfilter
  • https://metallb.universe.tf/
  • https://linux.die.net/man/8/ip6tables
  • https://community.hetzner.com/tutorials/install-kubernetes-cluster

Backup drive spindown: is it harmful?

Posted: 10 Apr 2021 05:25 PM PDT

I plan to build a small Raspberry Pi 4-based backup server for my house using USB external drives. My plan is to have each PC (4 of them) back up to the server automatically every month. The rest of the time the drives would not be accessed or written to. Since I want the server to be as efficient as possible (in terms of power consumption), and as quiet as possible, I was considering spinning down the RAID 1 array while it's not in use: the drives would basically spin up once a month for the backup, then back down.
I've been searching the internet for a while now, and I think I've found as many "it's fine" answers as "it'll kill your drives prematurely" ones...

What's your take on this?
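For what it's worth, once-a-month spin-up/spin-down cycles are a tiny fraction of the start/stop and load-cycle counts drives are typically rated for, so at this frequency the mechanical-wear concern is weak; the more common practical snag is whether the USB-SATA bridge passes the standby commands through at all. On Debian-family systems (including Raspberry Pi OS) the idle standby timer can be set declaratively (the device path is a placeholder for the backup drive):

```
# /etc/hdparm.conf - sketch; /dev/sda is a placeholder device path
/dev/sda {
    # -S 242: spin down after 1 hour idle (values 241-251 count 30-minute steps)
    spindown_time = 242
}
```

If the drive never actually spins down, testing with hdparm interactively first will show whether the enclosure supports it.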

Password security of encrypted SSH private keys: how to read the round number / cost factor of bcrypt

Posted: 10 Apr 2021 09:01 PM PDT

Here https://security.stackexchange.com/a/52564 you can read that newer OpenSSH versions use bcrypt to protect the key file. The security of bcrypt depends on the cost factor; see https://security.stackexchange.com/questions/139721/estimate-the-time-to-crack-passwords-using-bcrypt/201965#201965

According to https://crypto.stackexchange.com/questions/58536/how-does-openssh-use-bcrypt-to-set-ivs/58543#58543 the default bcrypt round number is 16, which would be good security. But how do you get the round count / cost factor out of a key?

What I've done so far: the key looks like this (to keep it short, only a weak 1024-bit key):

-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAACmFlczI1Ni1jdHIAAAAGYmNyeXB0AAAAGAAAABBLF8sO2Q
hcLXI43z96e1hiAAAAEAAAAAEAAACXAAAAB3NzaC1yc2EAAAADAQABAAAAgQC0gBWeZpej
9ILT/59bEb0/lSvXx0WfZqP2lXRDbuY+gluuWyT+REQcVTR2BxSx9F/P20mLTnupzY+XE3
xEu+SIJlwKIAH3fed62+QBzDrPsl9kyfoIGIvi/28ZftqVN/kg0GSOaAqu4Px+vNVX1VKn
PNV5VVCZWL4ZPlGQZ48UJwAAAhCwDkueKT9oq8E0qtD92/4DSAD2eTI7bd6jBGUxugEw85
6xWbRYnFQZdwO2ZCNV0aTHViD1FRKlC9cBHDoSORKcM/9dY9Msy6lZj7Tp5s8r7x2pOrJi
TVRbv5/cI732I+l/vYvssJEhZpeSw4JKh9tyPpifVmzBxqtqwkBrTuLCMqkwLmrcxReFUq
aA/RIZy3L616CJsAvx2ezEc49D6SbJ9i9OlKuv73a1baS4RpMvFzWGLE2NBvvtQpEnJFoL
Kyjz+two4doT6SZ7UtiVGyCtO5WQEoeAgjhkbZzOPtM2AvoV+hNLRIX2/52jOB5A1bNQ0v
qW64aj2YNe8vWfj5xtA/8BlyEG7gwhu+0HgbgMDxw7o/0qVkHM/Hv3YgBTRygsH+8h4wsR
kxA292NOKKaD18tv1j3atR80q0XQVcQH20uX8tSqXtKfDtkUc/EPbFCNp3xQJG/F81USKh
YAmjxEeDkZZ/LkEOEJKvFRCL3gFlH4rqF5/pRk6HmB99xceD4irbazm+BWfPAf5Q0zdB5L
/yei3sqA4G48yRXIkaELtYNEeTYHMp3PGz1b3CP3l+ZGZp6XNaM+sMfdICbI3Zae5bnxKg
VXEE2UMdi7DEXbqEzSlfcIf5QzXHMQJm0ZL+iLoaEmakamAxCKk6jJ+QzHzGADZEIRXrj3
5Nhhd0jsToEMsXmmawt2qxy0cIHET1M=
-----END OPENSSH PRIVATE KEY-----

The passphrase is test.

Then let's decode the base64. First, the header and footer lines beginning with '-----' have to be removed:

cat key | tail -n +2 | head -n -1 | base64 -d > text.txt

Now open text.txt in a hex-capable editor, e.g. Notepad++. (The original post shows a screenshot of the hex dump here.)

But I have no idea how to read the round count from there. Can you assist?
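The round count is in fact readable directly from the decoded blob: the new OpenSSH key format (documented in OpenSSH's PROTOCOL.key file) starts with the magic string openssh-key-v1 plus a NUL byte, followed by length-prefixed strings for the cipher name, the KDF name, and the KDF options; for bcrypt, the options are a salt string plus a big-endian uint32 round count. A sketch that parses just the first two base64 lines of the key above (enough bytes to reach the KDF options):

```python
import base64
import struct

# First two base64 lines of the private key shown in the question.
KEY_B64 = (
    "b3BlbnNzaC1rZXktdjEAAAAACmFlczI1Ni1jdHIAAAAGYmNyeXB0AAAAGAAAABBLF8sO2Q"
    "hcLXI43z96e1hiAAAAEAAAAAEAAACXAAAAB3NzaC1yc2EAAAADAQABAAAAgQC0gBWeZpej"
)
blob = base64.b64decode(KEY_B64)

def read_string(buf, off):
    """Read one SSH wire-format string: 4-byte big-endian length + payload."""
    (n,) = struct.unpack_from(">I", buf, off)
    return buf[off + 4:off + 4 + n], off + 4 + n

MAGIC = b"openssh-key-v1\x00"
assert blob.startswith(MAGIC), "not a new-format OpenSSH key"
off = len(MAGIC)
cipher, off = read_string(blob, off)    # b"aes256-ctr"
kdfname, off = read_string(blob, off)   # b"bcrypt" for passphrase-protected keys
kdfopts, off = read_string(blob, off)   # salt string + uint32 round count
salt, salt_end = read_string(kdfopts, 0)
(rounds,) = struct.unpack_from(">I", kdfopts, salt_end)

print(f"cipher={cipher.decode()} kdf={kdfname.decode()} rounds={rounds}")
```

For the key above this prints rounds=16, matching the default mentioned in the linked crypto.SE answer.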

SQL Server linked server as a read-only security measure

Posted: 10 Apr 2021 09:06 PM PDT

I've got a configuration with two separate VLANs. In one of the VLANs, a Microsoft SQL Server is running and doing its thing. I'd like to read (and only read) data from this server from the second VLAN. My solution is to create a DMZ and run a new SQL Server instance with a linked server in it. Is it possible for this linked server to act as a read-only gateway between the two networks?

So for example:

  VLAN 1 (angry outside world)  |      VLAN 2 (DMZ)       |    VLAN 3 (secured zone)
  ----------------------------------------------------------------------------------
                                |   __________________    |    ______________________
  Office applications           |  | SQL Server with  |   |   |                      |
  and evil people    <--SQL---> |  | linked server    |<--SQL-->|    SQL Server      |
                                |  |__________________|   |   |______________________|
                                |                         |

This would allow the server in the DMZ to act as a gateway. Is this the way linked servers are intended to be used? And if so, are they intended to provide some layer of security? If not, what would be a better solution?

How to set up a secure system to allow only specific clients to access specific services on a server?

Posted: 10 Apr 2021 09:07 PM PDT

Preface

Please bear with me if I use incorrect terminology or don't express the problem too well since I'm not an expert on system administration/server maintenance. Let me know/correct me if I do so I can learn and clarify my points.

Problem

I have a Linux device, the server, that runs different services on different ports (an HTTP, SSH and FTP server currently, but possibly other servers for other protocols in the future). My friends and I have other devices, the clients, that are running Linux (incl. Android) or Windows, that we'd like to use to access the server's services, as long as both client and server are online on the internet, regardless of whether they're on the same local network.

The other caveat is that we'd like to do this securely such that the server can ensure that it's really only me or my friends accessing it, that we can be sure that the server we're connecting to really is the correct server (not some MITM spoofing their identity as the server), and without third parties being able to obtain (too much) meaningful information by sniffing the exchanged packets.

I'd also like to restrict my friends' access to only specific services (say, only HTTP and FTP for one friend, only SSH for another, etc.).

Possibly relevant information

I have admin access to the server (I can install packages and configure it with unrestricted access) and local network router. The server is running nftables.

Actions considered

I've thought of configuring nftables on the server to only allow inbound packets from specific IP addresses or devices with specific MAC addresses, but I don't think these are appropriate/adequate. First because of the constraint that we'd like to be able to connect from outside the local network, so the client devices' IP addresses can change. Second, because I know that MAC addresses can easily be spoofed so I can't use those to ensure that the client devices really are the allowed ones. Third, because these don't address the constraint that third parties shouldn't be able to obtain meaningful information by snooping on the packets (so ideally, the solution should employ some sort of cryptographic protocols to address this).

Other thoughts

I'm thinking that the solution would involve setting up some sort of accounts-based system where my friends and I each have our own accounts and the server only allows packets of specific protocols from certain accounts.

I'm also thinking I can maybe solve this by setting up a VPN server using a protocol like WireGuard or OpenVPN on the server which would only allow connections from authenticated clients in a whitelist, then route all other traffic like SSH and HTTP through the VPN tunnel, making them accessible only when connected via that tunnel. Would that work? But then I'm not sure how I'd configure that to restrict my friends' access to particular services only. Do these VPN protocols have some feature to restrict the types of traffic allowed per client?

In any case, I don't really have a clue what the optimal solution for this is and how I'd set it up, so I'd really appreciate any useful advice, suggestions and information to solve this.
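The WireGuard idea sketched above is workable: each peer is identified by its public key and pinned to a fixed tunnel address via AllowedIPs, so per-friend service restrictions can be enforced on the server by filtering on the tunnel source address. WireGuard itself does not filter by port; the firewall does. A sketch with placeholder keys, addresses, and port sets (none taken from the question):

```
# /etc/wireguard/wg0.conf on the server (keys and addresses are placeholders)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]   # friend A: HTTP + FTP only
PublicKey = <friend-a-public-key>
AllowedIPs = 10.8.0.2/32

[Peer]   # friend B: SSH only
PublicKey = <friend-b-public-key>
AllowedIPs = 10.8.0.3/32

# --- nftables rules enforcing the per-peer service restrictions ---
table inet vpnfilter {
    chain input {
        type filter hook input priority 0; policy drop;
        iifname != "wg0" accept comment "only filter tunnel traffic here"
        ip saddr 10.8.0.2 tcp dport { 80, 21 } accept
        ip saddr 10.8.0.3 tcp dport 22 accept
    }
}
```

Because a peer's traffic is only accepted if its inner source address matches its AllowedIPs entry, a peer cannot spoof another peer's tunnel address, which is what makes address-based filtering sound here.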

All tasks in Task scheduler are going to queued state when triggered

Posted: 10 Apr 2021 07:38 PM PDT

We recently hit a strange problem with scheduled tasks on Windows Server 2019 with the RDS role installed. 6 servers were restored from a 3-month-old backup, joined to the AD domain again, and are working correctly as session hosts, but none of the tasks in Task Scheduler (which ran previously, and which still run on the other session hosts that weren't restored) works any more.

When you run a task manually, everything works fine, but when you schedule it for some time, its state turns to Queued and it doesn't execute. We tried creating new tasks, and deleting all tasks and creating brand-new ones, but nothing helped. It's not a problem with the task settings, so please don't advise running a new instance in parallel or something similarly simple. The same settings work on the servers that weren't restored.

We looked in the registry: in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\State, ImageState has the value IMAGE_STATE_COMPLETE, and in HKEY_LOCAL_MACHINE\System\Setup\ChildCompletion, audit.exe has the value 0 and oobeldr.exe is set to 3.

The servers are configured and customers are working on them, so a reinstall is the last option. Would sysprep without generalize help here? Or something else? Thank you.

Security implications of directly connecting a Windows PC to ISP via Network Adapter with Ethernet cable bypassing the Router

Posted: 10 Apr 2021 09:24 PM PDT

When diagnosing Internet connection issues (slow speed for example), an ISP technician may ask a user to connect their ISP-provided Ethernet cable directly to a device (typically a Windows PC) to run speed tests in the browser or pings, etc. (to rule out the possibility of the Router being the culprit).

What are the likely (realistic) as well as theoretical security implications, as far as getting access to the device / retrieving information from it (accessing files, etc.), under the following assumptions:

  • This is done for a short period of time ~ 30 minutes
  • The new network is identified as Public (in Windows UI)
  • Remote assistance is enabled
  • Windows built-in firewall is OFF, but third-party application-level firewall is enabled (restricts Internet access to apps).

And does this compromise saved passwords of network-mapped drives and locations (which are normally only accessible within LAN via the Router)?

Ansible: constructing a variable name from another variable

Posted: 10 Apr 2021 03:08 PM PDT

I have an Ansible variable definition, and was wondering if I can get a variable's value based on a variable name defined at runtime.

vars:
  test:
    user: ""
    dirs:
      base: ""
      logs: ""
      libs: ""
    region:

- name: debug
  debug:
    msg: "{{ newvar }}"

ansible-playbook playbook.yml -e "newvar=test"

Executing the above should print all the values defined in the 'test' variable.
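As written, msg: "{{ newvar }}" will only print the literal string test, the value of newvar itself. To dereference the name stored in newvar, Jinja needs a second lookup step, for example via the vars lookup plugin (available since Ansible 2.5). A sketch of the task:

```yaml
- name: print the variable whose name is passed in at runtime
  debug:
    msg: "{{ lookup('vars', newvar) }}"
# invoked as:  ansible-playbook playbook.yml -e "newvar=test"
```

With -e "newvar=test" this prints the whole test dictionary (user, dirs, region).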

Finding the IP Address of a computer (through a firewall)

Posted: 10 Apr 2021 08:06 PM PDT

I am looking for a way to find the IP address of a computer connected to a network.

The scenario is the following:

  • The computer whose IP address I want to find is connected to the network.
  • This computer uses a firewall
  • The computer sets its IP dynamically.

I have read that you can use ping and nslookup for this. First, as a test, I tried ping and nslookup with a random hostname and got its IP address, as stated here.

Then (just to test) I tried this with a PC with a static IP. When I did

ping CompName  

I got the IP address

However, when I tried

nslookup CompName  

I got Can't find: Server failed

Even if I can do this with ping, what happens when the target computer is behind a firewall?
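One point worth separating out: the name-to-IP step is answered by the resolver (DNS, or NetBIOS/LLMNR on Windows networks), not by the target machine's TCP/IP stack, so a host firewall that drops ICMP pings does not by itself block name resolution. A minimal sketch of the resolver query (localhost stands in for a hostname; on a real network you would pass the target's name):

```python
import socket

def resolve(hostname: str) -> str:
    """Ask the system resolver for an IPv4 address.

    The target host's firewall is not involved in this step; only the
    name service that holds the record answers."""
    return socket.gethostbyname(hostname)

print(resolve("localhost"))
```

Whether the name service has a record for the target at all (which is what the nslookup "Server failed" error suggests it lacked) is a separate question from whether the target answers pings.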

Zabbix: Invalid JSON

Posted: 10 Apr 2021 06:07 PM PDT

I have a PowerShell script returning a file like:

{ "data":[ { "{#SHARENAME}":"Informatique", "{#SHARENAME}":"Marketing" } ] }

I've set a discovery rule (Zabbix agent) plus an item prototype (Zabbix trapper) with the key sharename[{#SHARENAME}].

But the discovery rule says :

Invalid discovery rule value: cannot parse as a valid JSON object: invalid object format, expected opening character '{' or '[' at: '/c zabbix_sender -c "C:\Program Files\Zabbix\Configurations\ZabbixAgentConf_x64_Cu stom.conf" -i C:\Temp\JSON.log sent: 0; skipped: 8; total: 8'

Why?!
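The quoted error shows the discovery item receiving the zabbix_sender command's console output instead of JSON, which is one problem. But independently of that, the sample payload itself is malformed for low-level discovery: it puts both shares into one object with a duplicated key, and JSON parsers silently keep only one of the duplicates. Each discovered entity needs its own object. A quick demonstration (standard library only):

```python
import json

# The discovery payload from the question: one object, duplicated macro key.
bad = '{ "data":[ { "{#SHARENAME}":"Informatique", "{#SHARENAME}":"Marketing" } ] }'
parsed = json.loads(bad)
print(parsed["data"])   # only one of the two shares survives the duplicate key

# What Zabbix low-level discovery expects: one object per discovered entity.
good = {"data": [{"{#SHARENAME}": "Informatique"},
                 {"{#SHARENAME}": "Marketing"}]}
print(json.dumps(good))
```

So the PowerShell script should emit one `{"{#SHARENAME}": ...}` object per share, and the agent item should return that JSON itself rather than the output of invoking zabbix_sender.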

How to redirect all Apache 2.4 websites to maintenance page while allowing access to specified IP addresses

Posted: 10 Apr 2021 08:06 PM PDT

I have two mirrored Apache 2.4 servers behind a load balancer, with about 50 websites hosted on each. I need to close them for maintenance while allowing access from several specified IP addresses. During the maintenance, a maintenance.html page should be presented to visitors. I can't do this on the load balancer (which I initially wanted), so I need to do it through the Apache configuration on both servers. Does anyone know the most effective and simplest method?

I've already read many similar posts but I could not find the right answer that actually works. Many thanks!
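One pattern that can be dropped into the global server config on both mirrors, so it covers all 50 sites at once, is a mod_rewrite rule returning 503 with maintenance.html as the error document for everyone except the allowed addresses. A sketch (the allowed IPs are placeholders; mod_rewrite is assumed to be enabled):

```apache
# 203.0.113.10 / .11 are placeholder admin IPs
ErrorDocument 503 /maintenance.html

RewriteEngine On
RewriteCond %{REMOTE_ADDR} !^203\.0\.113\.10$
RewriteCond %{REMOTE_ADDR} !^203\.0\.113\.11$
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^ - [R=503,L]
```

503 (rather than a redirect) also tells search engines the outage is temporary. One caveat behind a load balancer: REMOTE_ADDR may be the balancer's own address, so mod_remoteip (or matching the forwarded client address instead) may be needed for the IP exceptions to work.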

Setting a root password in an OVA and making it configurable

Posted: 10 Apr 2021 09:02 PM PDT

I have a VM that is always created with a default root password, say "RootPassword55". Now I'd like to configure it so that the user has to provide a new password on initial login.

I'm looking to configure this via OVF files but I couldn't get it to work.

I have this OVA. I extract it and get an OVF and a VMDK file; the OVF refers to the VMDK. I added the password-set properties (details below) to this OVF and imported it in VirtualBox, but it doesn't seem to work.

I tried setting a property under the ProductSection element in the OVF, but it doesn't seem to be picked up. I found quite a few links that say this is the right way to do it. Here's one of them - http://sflanders.net/2014/06/26/power-ovf-properties/ (scroll all the way down to the password part).

This is what I tried setting in the ProductSection.

<Property ovf:key="rootpw" ovf:password="TRUE" ovf:type="string" ovf:value="HelloUser" ovf:userConfigurable="TRUE">
  <Label>Root Password</Label>
  <Description>To set the root password</Description>
</Property>

What I understand is that this property defaults to the password HelloUser if the user doesn't set a password while deploying the VM, and that it should ask the user to set a root password. But when I boot the VM it still works with the previous default of RootPassword55 and seemingly ignores my custom OVF properties entirely. I'm not sure where "RootPassword55" is coming from; it's not in the OVF, so it's probably baked into the VMDK. What am I doing wrong and how can I fix it? Thanks.

MariaDB server won't start after server reboot

Posted: 10 Apr 2021 07:04 PM PDT

I recently installed a Koha library system on an Ubuntu 16.04 VServer with MariaDB 10.31. Everything ran smoothly until the root server was restarted. Now I get this software error:

DBIx::Class::Storage::DBI::catch {...} (): DBI Connection failed: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) at /usr/share/perl5/DBIx/Class/Storage/DBI.pm line 1492. at /usr/share/koha/lib/Koha/Database.pm line 100

when I try to connect to the site. I immediately checked whether MySQL was running, and it isn't. So I tried to restart it, but I get an error:

mysql status:
  mysql.service - LSB: Start and stop the mysql database server daemon
   Loaded: loaded (/etc/init.d/mysql; bad; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mi 2017-10-18 20:08:06 CEST; 1min 26s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 4640 ExecStart=/etc/init.d/mysql start (code=exited, status=1/FAILURE)

Okt 18 20:07:36 h273239.stratoserver.net mysqld[4815]: 171018 20:07:36 [Note] InnoDB: Shutdown completed; log sequence number 19026477
Okt 18 20:07:36 h273239.stratoserver.net mysqld[4815]: 171018 20:07:36 [Note] /usr/sbin/mysqld: Shutdown complete
Okt 18 20:07:36 h273239.stratoserver.net mysqld[4815]:
Okt 18 20:07:36 h273239.stratoserver.net mysqld_safe[4850]: mysqld from pid file /var/run/mysqld/mysqld.pid ended
Okt 18 20:08:06 h273239.stratoserver.net /etc/init.d/mysql[5123]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
Okt 18 20:08:06 h273239.stratoserver.net /etc/init.d/mysql[5123]: [61B blob data]
Okt 18 20:08:06 h273239.stratoserver.net /etc/init.d/mysql[5123]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111 "Connection refused")'
Okt 18 20:08:06 h273239.stratoserver.net /etc/init.d/mysql[5123]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
Okt 18 20:08:06 h273239.stratoserver.net /etc/init.d/mysql[5123]:
Okt 18 20:08:06 h273239.stratoserver.net mysql[4640]:    ...fail!

Because we are a small NGO we can't pay for professional help, so you guys are my last resort. Thanks in advance!

'Could not find filesystem /dev/root' after Clonezilla clone of Red Hat install to newer hardware

Posted: 10 Apr 2021 05:07 PM PDT

I am cloning what appears to be a Red Hat 4 (possibly 5?) server to fairly new hardware, as the original has a failing board. The DBA would rather not reconfigure a new installation, so they want me to clone it if possible. I used Clonezilla stable release 2.5.0-25 and chose the 2nd option, disk-to-remote-disk copy over the network via static IPs. I followed this tutorial: https://www.youtube.com/watch?v=8UGR_RLCptQ

Redhat version info:

[root@original_server ~]# cat /etc/redhat-release
redhat-4  #Enterprise Linux Enterprise Linux Server release 5 (Carthage)

Old hardware: Asus RS260/2x Xeon E5420/12gb DDR3 ECC FB RAM (24gb prior to hardware issues)/ICP ICP5085BL RAID controller/RAID 10 8 drives Optimal

New Hardware: Asus RS720/2X Xeon 2620/48gb DDR3 ECC FB RAM/Asus PIKE 2308 RAID Controller/RAID 10 8 drives Optimal

During the process I was not asked to clone the boot loader, though the sda1 partition mounted at /boot appeared to have been cloned afterward.

Long story short, the clone appears to have been successful and the old data is on the new server in the correct partitions, but when I try to boot I get Unable to access resume device (LABEL=SWAP-sda5) and mount: could not find filesystem '/dev/root', then a few more "no such file or directory" errors, then a kernel panic.

So far I've tried:

  • Rebuilding the initrd using a CentOS 5.11 64-bit DVD, following these instructions: https://wiki.centos.org/TipsAndTricks/CreateNewInitrd. When I used the $(uname -r) values as specified, the command returned "No modules available for kernel 2.6.18-398.el5". I reran the command with the kernel version that was on the existing initrd file (2.6.18-8.el5) and it worked. The file was exactly the same size.

  • Installing LSI Fusion-MPT SAS2 driver for el5_3 for RAID via RPM from Asus site.

  • Deleting original initrd and rebuilding after doing RAID controller install. initrd file was only very slightly smaller (one or two bytes).

  • Getting UUIDs from Gparted for sda1, sda2, sda3, sda6 and modifying /etc/fstab with them instead of the labels.

  • Uncommenting #boot=/dev/sda in grub.conf and modifying it to boot=/dev/sda1.

  • Modifying the kernel command in the boot sequence (changing ro to rw, changing root= to point to /dev/sda, /dev/sda3, and to UUID=<uuid of /dev/sda3>), none of which worked.

Things I haven't tried yet that I'm aware are options:

  • Reinstalling grub, but do I reinstall to /dev/sda1 (where it originally was) or /dev/sda? And how do I back up the original grub settings prior?

  • Installing the RAID controller driver from source (another thing I'm not very familiar with).

  • Running fsck: I'm not too familiar with it; I have run it with the -f -y options in the past, but apparently you want to run it interactively so as not to break the system.

I'm guessing it's a RAID driver issue, but I'm not sure how to get the driver included in the initrd. If there is a better option for Linux system cloning I am open to it (Partimage would not load when I tried it, but I can attempt it again). I've already spent three days on this, so hopefully I've done my due diligence before asking.

Original /etc/fstab:

[root@original_server ~]# cat /etc/fstab
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
LABEL=/main             /main                   ext3    defaults        1 2
LABEL=/opt              /opt                    ext3    defaults        1 2
proc                    /proc                   proc    defaults        0 0
sysfs                   /sys                    sysfs   defaults        0 0
LABEL=SWAP-sda5         swap                    swap    defaults        0 0

Original /boot/grub/grub.conf:

[root@original_server ~]# cat /boot/grub/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Enterprise Linux (2.6.18-8.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-8.el5 ro root=LABEL=/ rhgb quiet
    initrd /initrd-2.6.18-8.el5.img

TL;DR: Attempted clone of a Red Hat 4 machine to newer hardware over the network using Clonezilla and got "Could not find filesystem /dev/root". Made modifications to fstab and grub.conf, installed the RAID driver, modified boot options, and recreated the initrd; same result.

I can provide screenshots or more info if needed. Any help is appreciated, thank you.

How to build and update iptables latest version for CentOS 7

Posted: 10 Apr 2021 07:04 PM PDT

Due to a bug (similar to this one) I'm facing with iptables in CentOS 7, I'd like to update the version of iptables.

# yum update iptables
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.coreix.net
 * epel: mirror.de.leaseweb.net
 * extras: mirrors.coreix.net
 * updates: mirrors.coreix.net
No packages marked for update
# iptables -V
iptables v1.4.21

So I figured I'd update to either the latest (from their git) or to the tagged 1.6.0.

I managed to add the libraries needed to get ./autogen.sh to run, then managed to ./configure --disable-nftables, make, and make install.

Now I'm not sure how I can run this version to test it, or how to make it the default iptables if it works.

How do authentication servers handle thousands of CPU intensive logins?

Posted: 10 Apr 2021 08:20 PM PDT

Apologies if the answer is obvious, I'm just a little curious and couldn't nail down an answer elsewhere.

I'm used to seeing authentication servers use simple SHA-1 or SHA-256 to validate credentials, but best practice these days is normally to use bcrypt for credential hashing.

The problem is that bcrypt is designed to use significant amounts of CPU and/or memory to limit the efficacy of brute-forcing algorithms. Easy for a single logon, but when hundreds or thousands of logons are involved, do server admins just throw extreme amounts of hardware at the problem, or do they tweak the bcrypt parameters to ensure a reasonable logon time for users?
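To get a feel for the tradeoff: each unit increase in bcrypt's cost factor roughly doubles the work per hash. Apache's htpasswd utility (which supports bcrypt via -B, with -C setting the cost) makes this easy to observe on the command line; a rough sketch, with timings that will of course vary by CPU:

```shell
# bcrypt work factor vs. wall-clock time, one hash per run (illustrative;
# -B selects bcrypt, -C sets the cost factor, valid 4-17 for htpasswd):
time htpasswd -nbB -C 5  alice 'hunter2'   # fast
time htpasswd -nbB -C 12 alice 'hunter2'   # noticeably slower
time htpasswd -nbB -C 15 alice 'hunter2'   # likely too slow for a busy login endpoint
```

In practice, admins pick the highest cost that keeps a single login acceptably fast on their hardware, then scale horizontally if total login volume exceeds what one box can hash.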

keepalived master cannot reclaim virtual IP after recovered

Posted: 10 Apr 2021 09:02 PM PDT

Steps

  1. Start both master and slave
  2. Keep pinging virtual ip (i.e 192.168.10.100)
  3. Shutdown master
  4. Slave enters MASTER state
  5. Restart master
  6. Slave enters BACKUP state and Master enters MASTER state

Ping doesn't work after step 6. Neither server holds the virtual IP (I checked with ip addr show eth1).

The master cannot get the virtual IP back until I restart the keepalived service.

How can I make the master reclaim the virtual IP without restarting the service?

Keepalived configuration:

host1 (master)

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret
    }
    virtual_ipaddress {
        192.168.10.100
    }
}

host2 (slave)

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret
    }
    virtual_ipaddress {
        192.168.10.100
    }
}
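One thing worth checking with symptoms like this: if the recovered master believes it holds the VIP but the network never learns it, the gratuitous ARPs sent on the MASTER transition may have been lost. keepalived can delay and repeat them; a sketch of the relevant per-instance options (names per keepalived.conf(5), values illustrative, and whether this cures this particular symptom is an assumption):

```
vrrp_instance VI_1 {
    # ... existing settings as above ...

    # Delay the gratuitous ARP burst until the interface has settled,
    # then keep re-announcing the VIP periodically while MASTER:
    garp_master_delay 5       # seconds after transition to MASTER
    garp_master_refresh 10    # re-send gratuitous ARPs every 10 seconds
}
```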

Configuring IIS ARR for backend client certificate authentication

Posted: 10 Apr 2021 10:03 PM PDT

I have an IIS server configured with ARR to reverse proxy requests to a backend server. The backend server requires client certificate authentication, however, it only needs to authenticate the reverse proxy (not the end user).

The end user authentication is passed inside the content of the request and is not the problematic part.

End User -->-- IIS with ARR -->(mutual SSL)>-- Backend web server

How does one configure the client certificate in IIS or ARR?

When searching around, I often find questions and threads about forwarding the end user's client certificate to the backend server, which is not what I need and is not possible anyway. Further, these usually suggest turning off client certificate authentication on the backend server, but here it must remain on.

Here are some questions I found, but they all relate to the end-user client certificate:

Unstick a reboot when PsKill doesn't work

Posted: 10 Apr 2021 05:07 PM PDT

I tried to remote into a server today and got stuck during login. So I tried to reboot with:

shutdown -r -m \\computername -t 10 -f  

And nothing seemed to happen. So I tried it again and got:

computername: A system shutdown is in progress.(1115)  

So googling around for ways to unstick it, I came across this which suggested using PSKill. So I downloaded PSTools and tried:

PsKill \\computername winlogon  

But now that just sticks at:

Starting PsKill service on computername...  

Now what? Any suggestions from here?

Task Scheduler with Virtual Accounts, possible?

Posted: 10 Apr 2021 07:51 PM PDT

Currently I'm using LOCAL SERVICE as the user account for various regular tasks, and was wondering if it was possible to use a Virtual Account instead.

Task Scheduler seems to reject NT SERVICE\-style account names.

How to unblock service discovery for IPv4 via Avahi?

Posted: 10 Apr 2021 10:03 PM PDT

On a Debian 6.0.6 system (squeeze) I am having trouble resolving a host using Avahi and IPv4. Here is a sample output:

: nr@homedog 10102 ; avahi-browse -a
+   eth0 IPv6 yorkie [00:1f:3b:d8:67:1d]     Workstation          local
+   eth0 IPv6 homedog [bc:5f:f4:5a:b1:73]    Workstation          local
+   eth0 IPv4 homedog [bc:5f:f4:5a:b1:73]    Workstation          local

Notice that homedog, the local machine, is visible on both IPv6 and IPv4, but yorkie, the remote machine, is visible only on IPv6. And avahi-resolve-host-name -4 yorkie.local hangs with no result.

EDIT: The situation is symmetric: yorkie sees itself on IPv4 and IPv6, but it sees homedog on IPv6 only.

On yorkie, the output from iptables -vnL is

Chain INPUT (policy ACCEPT 109K packets, 98M bytes)
 pkts bytes target     prot opt in     out     source   destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source   destination

Chain OUTPUT (policy ACCEPT 108K packets, 94M bytes)
 pkts bytes target     prot opt in     out     source   destination

(To make the display fit StackExchange without wrapping, I have taken a couple of liberties with horizontal space.) The display on homedog is identical except for the numbers: for all three, it displays 0 packets and 0 bytes. (I have no clue how to interpret these outputs, but it may be informative that yorkie's current uptime is 41 days and homedog's current uptime is 6 hours.)

I found a closed ticket at http://avahi.org/ticket/297, which suggests that the problem is some sort of firewall configuration. I am a complete novice in this area, and through web searching I have been unable to learn how to use the iptables command to diagnose or repair the problem. I also found Debian bug 547974, but it was closed without an explanation of how to fix the problem.

The hypothesis is that somehow the service-discovery packet is being blocked—I don't know on which machine. Can anyone say how to discover which machine is blocking the packet and how to reconfigure it so Avahi discovers the IPv4 address?
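For background: mDNS, which Avahi uses for discovery, is UDP port 5353 multicast to 224.0.0.251 (RFC 6762). One way to find out which side loses the IPv4 packets is to watch that traffic on both hosts at once; a diagnostic sketch, with the interface name eth0 assumed from the output above:

```shell
# On each machine, watch IPv4 mDNS while running avahi-browse on the other:
tcpdump -ni eth0 'udp port 5353 and ip'

# If queries from the peer never appear, the loss is upstream of the host
# (e.g. a switch or AP filtering multicast); if they appear but draw no
# reply, look at the receiving host. With ACCEPT policies and no rules,
# iptables is not the blocker here, but on a restrictive firewall a rule
# like this would be needed:
# iptables -A INPUT -p udp -d 224.0.0.251 --dport 5353 -j ACCEPT

# Also confirm the interface has actually joined the mDNS multicast group:
ip maddr show dev eth0      # expect 224.0.0.251 in the list
```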

Run http server behind proxy

Posted: 10 Apr 2021 06:07 PM PDT

I've been trying to get lighttpd or apache2 (I prefer lighttpd) to work behind a proxy but no luck so far.

What I want is to run lighttpd (or port 80) behind a proxy, so that when someone goes to some.website.com, and the DNS for that domain is pointed to the IP address of the proxy server, they end up on my http server's page.

This would allow me to use the server's resources while keeping its IP address hidden.

Unfortunately, using the proxychains program did not work. For lighttpd it gave the error getaddrinfo failed: Unknown error ' ::', and proxychains apache2 start started just fine but didn't seem to do anything. I did verify that proxychains itself worked: using curl against a what-is-my-ip type of website, it went through the proxy fine.

In case you're wondering: I am temporarily using a home server, and I don't want to make my IP address public.

Any ideas? Both a HTTPS proxy (squid) or a SOCKS5 (dante) proxy will do just fine.

How do you configure QoS for Skype?

Posted: 10 Apr 2021 07:52 PM PDT

On our office network (26 people), some users have complained of poor Skype call quality, particularly in the upstream direction. I wanted to ask, how do I identify Skype traffic, considering that it uses a random port, in order that I might prioritise it at the router level?
