Wednesday, June 15, 2022

Recent Questions - Server Fault

Issues configuring SAML authentication in Apache Guacamole behind a HAProxy

Posted: 15 Jun 2022 08:50 PM PDT

I've deployed an Apache Guacamole server and am trying to configure SSO using SAML with a cloud IDaaS provider. HAProxy sits in front of the Guacamole server, providing SSL offloading. Apache Guacamole was configured following the tutorial on the Guacamole website.

When I attempt to authenticate using SAML, I am finding myself in a redirect loop. The following message is showing up in the Tomcat logs:

03:45:29.364 [http-nio-8080-exec-9] WARN  o.a.g.a.s.a.AssertionConsumerServiceResource - Authentication attempted with an invalid SAML response: SAML response did not pass validation: The response was received at http://my.personal.domain/guacamole/api/ext/saml/callback instead of https://my.personal.domain/guacamole/api/ext/saml/callback  

I've checked the settings in the IdP and confirmed that everything is indeed configured for HTTPS. I suspect the issue has something to do with traffic between HAProxy and Guacamole being plain HTTP, but I don't know what to change to fix that. I'm happy to use a self-signed certificate between HAProxy and Guacamole since they are both on a protected network.

Any ideas you could share would be much appreciated.
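
One avenue worth exploring (a sketch, not a verified fix, assuming HAProxy terminates TLS on 443 and forwards plain HTTP to Guacamole's Tomcat): pass the original scheme to Tomcat via X-Forwarded-Proto and have Tomcat trust that header with RemoteIpValve, so the SAML extension sees the request as HTTPS and builds an https:// callback URL. Frontend name, certificate path, and the internalProxies pattern below are placeholders:

# haproxy.cfg -- frontend that terminates TLS
frontend fe_https
    bind :443 ssl crt /etc/haproxy/certs/my.personal.domain.pem
    mode http
    http-request set-header X-Forwarded-Proto https
    default_backend bk_guacamole

<!-- Tomcat server.xml, inside the <Host> element: RemoteIpValve makes Tomcat
     report the forwarded scheme, provided the request arrives from an address
     matching internalProxies (set it to cover the HAProxy host) -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       internalProxies="10\.\d+\.\d+\.\d+|192\.168\.\d+\.\d+"
       protocolHeader="X-Forwarded-Proto" />

With that in place, traffic between HAProxy and Guacamole can stay plain HTTP on the protected network; a self-signed certificate between the two is not required just to correct the callback URL.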

DRBD does not sync files even though it is UpToDate

Posted: 15 Jun 2022 09:45 PM PDT

I recently set up DRBD replication on two nodes: I did the setup and mounted the DRBD disk after "create-md", "up", "connect", etc. It apparently succeeded, since the status shows "UpToDate". However, when I create any file in /var/www it does not replicate to the secondary. I've tried everything and can't find out what the problem is. P.S.: The initial sync has already completed, and it still doesn't work.

[Attached in the original post but not reproduced here: primary node status, secondary node status, and /etc/drbd.conf.]

SO_NAME="Fedora Linux"
VERSION="36 (Workstation Edition)"

Nginx Wordpress Subdir + wp admin

Posted: 15 Jun 2022 09:16 PM PDT

I know there are other threads similar to this one, but I have been trying for days and cannot get this working. I am currently using the nginx configuration below on Ubuntu 18.04:

server {
    listen 80;
    root /var/www/html/wordpress/public_html;
    index index.php index.html;
    server_name "example.com";
    access_log /var/log/nginx/SUBDOMAIN.access.log;
    error_log /var/log/nginx/SUBDOMAIN.error.log;

    location /blog {
        index index.php index.html index.htm;
        try_files $uri $uri/ /blog/index.php?q=$uri&$args;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        }
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}

Here is my current folder structure:

  • /var/www/html/wordpress/public_html -> contains the blog folder and a simple index.html (not serving WordPress)
  • /var/www/html/wordpress/public_html/blog -> is where my WordPress files live

However, there are two problems I still need to solve:

  1. "example.com/blog" displays fine, but when I click on any post or URL it shows "example.com/?p=1", which leads back to the plain index.html at location / ("example.com").
  2. I also cannot access wp-admin: if I open "example.com/blog/wp-admin", it redirects to "http://example.com/wp-login.php?redirect_to=http%3A%2F%2Fexample.com%2Fblog%2Fwp-admin%2F&reauth=1" and then returns a 404.

Thank you so much.

Update 2: I tried changing WP_SITEURL to example.com and WP_HOME to example.com/blog, and also modified these two records in the MySQL wp_options table. However, when I try to access /blog/wp-admin it redirects to http://example.com/blog/wp-admin/example.com/blog/wp-login.php?redirect_to=http%3A%2F%2Fexample.com%2Fblog%2Fwp-admin%2F&reauth=1 -> 404.
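
For reference, and as a sketch rather than a confirmed fix: for a WordPress install whose files live in public_html/blog and which should be browsed entirely under example.com/blog, the usual wp-config.php setup points both constants at the subdirectory (scheme and host adjusted to the real site):

define( 'WP_HOME',    'http://example.com/blog' );
define( 'WP_SITEURL', 'http://example.com/blog' );

The siteurl and home rows in wp_options should match these values; mixing example.com and example.com/blog between the two is a common cause of permalinks and wp-admin redirecting back to the site root.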

Extended Support for Windows Server 2008 on Azure

Posted: 15 Jun 2022 08:02 PM PDT

We have an on-premises SharePoint 2010 farm. Extended support for Windows Server 2008 ends in January 2020. The Microsoft documentation here (https://support.microsoft.com/en-in/help/4456235/end-of-support-for-windows-server-2008-and-windows-server-2008-r2) mentions that if Windows Server 2008 servers are migrated to Azure, customers get three additional years of Critical and Important security updates at no additional charge. We would like to know whether support for SharePoint 2010 and SQL Server 2008 R2 would also be extended. What are the Microsoft guidelines for SharePoint 2010 and SQL Server 2008?

Connection failure when trying to access RDP connection - HAproxy

Posted: 15 Jun 2022 09:38 PM PDT

I just deployed HAProxy on Ubuntu Server 18.04.2 LTS and configured remote desktop load balancing for two TS (terminal) servers. When I try to connect to the IP of my HAProxy server from a Windows 10 machine, it presents the following error:

The connection has been terminated because an unexpected server authentication certificate has been installed on the remote computer.

I tried connecting through Windows Server 2008 R2 and a computer with Windows Server 2012 R2 installed and did not have this problem.

Now any computer with Windows 10 displays this message when I try to connect.

Here are the contents of my haproxy.cfg file:

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    ssl-server-verify none

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend ft_rdp
    mode tcp
    bind ip_haproxy:3389 name rdp
    timeout client 1h
    log global
    option tcplog
    tcp-request inspect-delay 2s
    tcp-request content accept if RDP_COOKIE
    default_backend bk_rdp

backend bk_rdp
    mode tcp
    balance leastconn
    persist rdp-cookie
    timeout server 1h
    timeout connect 4s
    log global
    option tcplog
    option tcp-check
    #tcp-check connect port 3389 ssl
    default-server inter 3s rise 2 fall 3
    server srv**  ip_server:3389 weight 10 check
    server srv**2 ip_server:3389 weight 10 check
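
Not a verified fix, but one thing worth experimenting with: newer Windows 10 clients frequently do not include the mstshash cookie in their initial connection request, so rdp-cookie persistence may fall back to plain load balancing, and the client can land on a different backend (presenting a different certificate) on each attempt. A sketch of source-IP persistence as an alternative, using the same placeholder server names as the config above:

backend bk_rdp
    mode tcp
    balance leastconn
    # stick clients to a backend by source address instead of relying on the RDP cookie
    stick-table type ip size 200k expire 8h
    stick on src
    timeout server 1h
    timeout connect 4s
    default-server inter 3s rise 2 fall 3
    server srv1 ip_server:3389 weight 10 check
    server srv2 ip_server:3389 weight 10 check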

Chroot a user in Amazon EC2 instance

Posted: 15 Jun 2022 08:02 PM PDT

I've got an Amazon Linux AMI machine running the 2016.09 version. I've recently created a user and I'm able to connect using its credentials (private key).

This user is intended to be used by a third party and I want to restrict its access to its home directory (or subdirectory), so that it can't access any other folder (nor list any other folder/file).

I assume I need to configure some sort of chroot directory in my sshd_config file, located at /etc/ssh/sshd_config. I've modified its content so that it looks like the following:

#override default of no subsystems
#Subsystem       sftp    /usr/libexec/openssh/sftp-server
Subsystem      sftp    internal-sftp

Match User myuser
       PasswordAuthentication yes
       ChrootDirectory /home/myuser/ftp_folder
       AllowTCPForwarding no
       X11Forwarding no
       ForceCommand internal-sftp

Just after saving changes, I restart ssh service by typing sudo service sshd restart.

Unfortunately, with those changes in place I can't log in (either via ssh or sftp):

$ ssh -i 'G:\AWS\ec2_keys\myuser.pem' myuser@ec2-XXXXXX-XX.compute-1.amazonaws.com -vvv
Authenticated to ec2-XXXXXX-XX.compute-1.amazonaws.com ([YYY.YYY.YYY.YYY]:22).
debug1: channel 0: new [client-session]
debug3: ssh_session2_open: channel_new: 0
debug2: channel 0: send open
debug3: send packet: type 90
debug1: Requesting no-more-sessions@openssh.com
debug3: send packet: type 80
debug1: Entering interactive session.
debug1: pledge: network
debug3: send packet: type 1
debug1: channel 0: free: client-session, nchannels 1
debug3: channel 0: status: The following connections are open:
  #0 client-session (t3 r-1 i0/0 o0/0 fd 4/5 cc -1)

Connection to ec2-XXXXXX-XX.compute-1.amazonaws.com closed by remote host.
Connection to ec2-XXXXXX-XX.compute-1.amazonaws.com closed.
Transferred: sent 2328, received 1996 bytes, in 0.0 seconds
Bytes per second: sent 60664.3, received 52012.8
debug1: Exit status -1

What am I missing in the configuration? Thanks!!
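
For context: OpenSSH requires every component of the ChrootDirectory path to be owned by root and writable only by root, and ForceCommand internal-sftp means the account can only use SFTP (a plain ssh session is closed immediately, much like the log above). A sketch of the usual permission layout, using the paths from the question:

# run as root on the instance
chown root:root /home/myuser /home/myuser/ftp_folder
chmod 755       /home/myuser /home/myuser/ftp_folder

# give the user somewhere writable inside the chroot
mkdir -p /home/myuser/ftp_folder/uploads
chown myuser:myuser /home/myuser/ftp_folder/uploads

# sshd logs the exact reason for rejecting the chroot (Amazon Linux logs to /var/log/secure)
grep sshd /var/log/secure | tail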

How to identify libvirt_volume id in terraform for reuse in split configuration files

Posted: 15 Jun 2022 09:02 PM PDT

Setup

I am using remote state files stored on external cloud storage, and my Terraform resources are split into different modules. I want to download the base images only once, in an images module, and reference them from the definitions of each domain I want to create. Both the domain and the image file are defined by a third-party Terraform provider for libvirt. The state files are separated by component because, otherwise, Terraform would destroy resources when terraform apply-ing the desired state for individual components one at a time.

https://github.com/dmacvicar/terraform-provider-libvirt

Terraform Code

images/main.tf

resource "libvirt_volume" "centos-7" {     name = "centos-7.qcow2"     pool = "default"     source = "http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"     format = "qcow2"  }  

bastion-host/main.tf

# Create a volume to attach to the "mgmt_bastion" domain as main disk
resource "libvirt_volume" "mgmt_bastion_volume" {
   name = "mgmt_bastion.qcow2"
   base_volume_id = "${libvirt_volume.centos-7.id}"
}

terraform.state

libvirt_volume.centos-7:
  id = /var/lib/libvirt/images/centos-7.qcow2
  format = qcow2
  name = centos-7.qcow2
  pool = default
  size = 8589934592
  source = http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2

Issue

The resources work fine when I define them in the same Terraform configuration. However, each domain then wants to download the same base image. The state files are separated so that building one domain does not delete other domains/resources that are not defined in that particular main.tf file. When I define the image in a different module, I of course cannot reference it:

Error: resource 'libvirt_volume.mgmt_bastion_volume' config: unknown resource 'libvirt_volume.centos-7' referenced in variable libvirt_volume.centos-7.id  

Possible solutions / What I've tried

When using cloud providers like AWS, this is usually not an issue and can be solved by configuring data sources and filtering: https://www.terraform.io/docs/configuration/data-sources.html

However, using the libvirt provider requires some extra thought.

I know how to use terraform show to inspect the current Terraform state files of another component; there I can read the ids from the shell.

Maybe I just have to run an external script to fill a variable with the id, which I can then reuse in the Terraform DSL.

Maybe I have to hardcode the id. It seems like I could use the path for that: /var/lib/libvirt/images/centos-7.qcow2. However, I am no fan of hardcoding, and my tests showed that some images additionally contain UUIDs in their references: /var/lib/libvirt/images/commoninit.iso;5a79dcf5-4420-5169-b9f4-5340e9904944

Therefore I would like to learn a better way to solve this generically, for the next time a resource is not so easily identifiable.

Edit:

Hardcoding the path did not work:

libvirt_volume.mgmt_bastion_volume: Creating...
  base_volume_id: "" => "centos-7.qcow2"
  name:           "" => "mgmt_bastion.qcow2"
  pool:           "" => "default"
  size:           "" => "<computed>"
Failed to save state: Resource not found

This is true for both the full and the partial path to the image file.

Questions

  • How do I get the id of the already downloaded file in order to specify it as CoW backing volume? (without manually looking into the state files and hardcoding the id)
  • Is there a better way to reference the same cloud image in different terraform definitions without downloading it again? After all, I want to reference the same base image to generate lots of libvirt domains in a space efficient manner.
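
For what it's worth, one generic approach is the terraform_remote_state data source: expose the volume id as an output of the images module and read that output from the bastion-host configuration. A sketch, using Terraform 0.11-style syntax to match the snippets above; the backend type, bucket, key, region, and output name are placeholders to adapt to wherever the images state actually lives:

# images/outputs.tf
output "centos_7_volume_id" {
  value = "${libvirt_volume.centos-7.id}"
}

# bastion-host/main.tf
data "terraform_remote_state" "images" {
  backend = "s3"
  config {
    bucket = "my-terraform-state"
    key    = "images/terraform.tfstate"
    region = "eu-central-1"
  }
}

resource "libvirt_volume" "mgmt_bastion_volume" {
  name           = "mgmt_bastion.qcow2"
  base_volume_id = "${data.terraform_remote_state.images.centos_7_volume_id}"
}

This keeps the id out of the code entirely: the base image is downloaded once by the images module, and every other component merely reads its id from that module's state.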

Exchange 2013 -> 2016 Move Requests Stuck

Posted: 15 Jun 2022 10:06 PM PDT

I'm trying to migrate users from Exchange 2013 to Exchange 2016, but when I create a move request, even with only one mailbox of a few kilobytes in the queue, it inevitably ends up at RelinquishedWlmStall.

There are no performance issues on either server, and I initially ran the move overnight. Any pointers to solutions or where I could gather more information about the issue?

Here's what I've tried so far:

  • Changed HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MSExchange ResourceHealth.
  • Used the Highest and Emergency priorities.
  • Adjusted MSExchangeMailboxReplication.exe.config.
  • Executed Get-ExchangeServer | ForEach {New-SettingOverride -Component "WorkloadManagement" -Name "$_ MRS Override" -Server $_.Name -Section MailboxReplicationService -Reason "$_ Temporary Move" -Parameters Classification=Urgent -MinVersion 15.0}
  • Restarted both servers multiple times.

Here are the move statistics:

ArchiveGuid                            :
Status                                 : InProgress
StatusDetail                           : RelinquishedWlmStall
SyncStage                              : None
Flags                                  : IntraOrg, Pull
RequestStyle                           : IntraOrg
Direction                              : Pull
IsOffline                              : False
Protect                                : False
DoNotPreserveMailboxSignature          : False
Priority                               : Normal
WorkloadType                           : Local
Suspend                                : False
SuspendWhenReadyToComplete             : False
IgnoreRuleLimitErrors                  : False
RecipientTypeDetails                   : UserMailbox
SourceVersion                          : Version 15.0 (Build 1320.0)
TargetVersion                          : Version 15.1 (Build 225.0)
SourceArchiveDatabase                  :
SourceArchiveVersion                   :
SourceArchiveServer                    :
TargetArchiveDatabase                  :
TargetArchiveVersion                   :
TargetArchiveServer                    :
RemoteHostName                         :
RemoteGlobalCatalog                    :
StartAfter                             :
CompleteAfter                          :
RemoteCredentialUsername               :
RemoteDatabaseName                     :
RemoteDatabaseGuid                     :
RemoteArchiveDatabaseName              :
RemoteArchiveDatabaseGuid              :
TargetDeliveryDomain                   :
ArchiveDomain                          :
BadItemLimit                           : 10
BadItemsEncountered                    : 0
LargeItemLimit                         : 0
LargeItemsEncountered                  : 0
AllowLargeItems                        : True
StartTimestamp                         :
InitialSeedingCompletedTimestamp       :
FinalSyncTimestamp                     :
CompletionTimestamp                    :
SuspendedTimestamp                     :
OverallDuration                        : 05:07:15.8805147
TotalFinalizationDuration              : 00:00:00
TotalDataReplicationWaitDuration       : 00:00:00
TotalSuspendedDuration                 : 00:00:00
TotalFailedDuration                    : 00:00:00
TotalQueuedDuration                    : 00:02:07.1040967
TotalInProgressDuration                : 01:50:11.4364136
TotalStalledDueToCIDuration            : 00:45:05.4836894
TotalStalledDueToHADuration            : 00:00:00
TotalStalledDueToMailboxLockedDuration : 00:00:00
TotalStalledDueToReadThrottle          : 00:00:00
TotalStalledDueToWriteThrottle         : 00:00:00
TotalStalledDueToReadCpu               : 00:00:00
TotalStalledDueToWriteCpu              : 00:00:00
TotalStalledDueToReadUnknown           : 00:00:00
TotalStalledDueToWriteUnknown          : 00:00:00
TotalTransientFailureDuration          : 00:00:00
TotalProxyBackoffDuration              : 00:00:00
TotalIdleDuration                      : 00:40:18.5876769
MRSServerName                          :
TotalMailboxSize                       : 64.08 MB (67,188,711 bytes)
TotalMailboxItemCount                  : 295
TotalArchiveSize                       :
TotalArchiveItemCount                  :
BytesTransferred                       : 0 B (0 bytes)
BytesTransferredPerMinute              : 0 B (0 bytes)
ItemsTransferred                       : 0
PercentComplete                        : 0
CompletedRequestAgeLimit               : 7.00:00:00
PositionInQueue                        :
InternalFlags                          : None
FailureCode                            :
FailureType                            :
FailureSide                            :
Message                                : Informational: The request has been temporarily postponed due to unfavorable
                                         server health or budget limitations. MRS will attempt to continue processing
                                         the request again after ****.
FailureTimestamp                       :
IsValid                                : True
ValidationMessage                      :
DiagnosticInfo                         :
Report                                 :
ObjectState                            : New

Here's a cleaned-up report:

[EXCHANGE2013] '' created move request.
[EXCHANGE2016] The Microsoft Exchange Mailbox Replication service 'exchange2016.hostname' (15.1.225.37 caps:7FFF) is examining the request.
[EXCHANGE2016] Connected to target mailbox 'uuid1 (Primary)', database 'Target Database', Mailbox server 'exchange2016.hostname' Version 15.1 (Build 225.0).
[EXCHANGE2016] Connected to source mailbox 'uuid1 (Primary)', database 'Source Database', Mailbox server 'exchange2013.hostname' Version 15.0 (Build 1320.0), proxy server 'exchange2013.hostname' 15.0.1320.0 caps:0400001F7FFFFFCB07FFFF.
[EXCHANGE2016] Relinquishing job because of large delays due to unfavorable server health or budget limitations.
[EXCHANGE2016] The Microsoft Exchange Mailbox Replication service 'exchange2016.hostname' (15.1.225.37 caps:7FFF) is examining the request.
[EXCHANGE2016] Connected to target mailbox 'uuid1 (Primary)', database 'Target Database', Mailbox server 'exchange2016.hostname' Version 15.1 (Build 225.0).
[EXCHANGE2016] Connected to source mailbox 'uuid1 (Primary)', database 'Source Database', Mailbox server 'exchange2013.hostname' Version 15.0 (Build 1320.0), proxy server 'exchange2013.hostname' 15.0.1320.0 caps:0400001F7FFFFFCB07FFFF.
[EXCHANGE2016] Relinquishing job because of large delays due to unfavorable server health or budget limitations.
[EXCHANGE2016] The Microsoft Exchange Mailbox Replication service 'exchange2016.hostname' (15.1.225.37 caps:7FFF) is examining the request.
[EXCHANGE2016] Connected to target mailbox 'uuid1 (Primary)', database 'Target Database', Mailbox server 'exchange2016.hostname' Version 15.1 (Build 225.0).
[EXCHANGE2016] Connected to source mailbox 'uuid1 (Primary)', database 'Source Database', Mailbox server 'exchange2013.hostname' Version 15.0 (Build 1320.0), proxy server 'exchange2013.hostname' 15.0.1320.0 caps:0400001F7FFFFFCB07FFFF.
[EXCHANGE2016] Relinquishing job because of large delays due to unfavorable server health or budget limitations.
[EXCHANGE2016] The Microsoft Exchange Mailbox Replication service 'exchange2016.hostname' (15.1.225.37 caps:7FFF) is examining the request.
[EXCHANGE2016] Connected to target mailbox 'uuid1 (Primary)', database 'Target Database', Mailbox server 'exchange2016.hostname' Version 15.1 (Build 225.0).
[EXCHANGE2016] Connected to source mailbox 'uuid1 (Primary)', database 'Source Database', Mailbox server 'exchange2013.hostname' Version 15.0 (Build 1320.0), proxy server 'exchange2013.hostname' 15.0.1320.0 caps:0400001F7FFFFFCB07FFFF.
[EXCHANGE2016] The Microsoft Exchange Mailbox Replication service 'exchange2016.hostname' (15.1.225.37 caps:7FFF) is examining the request.
[EXCHANGE2016] Connected to target mailbox 'uuid1 (Primary)', database 'Target Database', Mailbox server 'exchange2016.hostname' Version 15.1 (Build 225.0).
[EXCHANGE2016] Connected to source mailbox 'uuid1 (Primary)', database 'Source Database', Mailbox server 'exchange2013.hostname' Version 15.0 (Build 1320.0), proxy server 'exchange2013.hostname' 15.0.1320.0 caps:0400001F7FFFFFCB07FFFF.
[EXCHANGE2016] Relinquishing job because of large delays due to unfavorable server health or budget limitations.

And another related error I could find:

MigrationTransientException: Failed to communicate with the mailbox database. -->
Failed to communicate with the mailbox database. -->
MapiExceptionMdbOffline: Unable to make connection to the server. (hr=0x80004005, ec=1142)
Diagnostic context:
    Lid: 41192 dwParam: 0x1 Lid: 63464 Lid: 34792 StoreEc: 0x6AB Lid: 51176 StoreEc: 0x80040115 Lid: 48104 Lid: 39912 StoreEc: 0x80040115 Lid: 41192 dwParam: 0x2 Lid: 49384 Lid: 51176 StoreEc: 0x476 Lid: 48104 Lid: 39912 StoreEc: 0x476 Lid: 41192 dwParam: 0x0 Lid: 49064 dwParam: 0x1 Lid: 37288 StoreEc: 0x6AB Lid: 49064 dwParam: 0x2 Lid: 38439 EMSMDBPOOL.EcPoolConnect called [length=48] Lid: 54823 EMSMDBPOOL.EcPoolConnect returned [ec=0x476][length=20][latency=31] Lid: 53361 StoreEc: 0x476 Lid: 51859 Lid: 33649 StoreEc: 0x476 Lid: 43315 Lid: 58225 StoreEc: 0x476 Lid: 39912 StoreEc: 0x476 Lid: 54129 StoreEc: 0x476 Lid: 50519 Lid: 59735 StoreEc: 0x476 Lid: 59199 Lid: 27356 StoreEc: 0x476 Lid: 65279 Lid: 52465 StoreEc: 0x476 Lid: 60065 Lid: 33777 StoreEc: 0x476 Lid: 59805 Lid: 52487 StoreEc: 0x476 Lid: 19778 Lid: 27970 StoreEc: 0x476 Lid: 17730 Lid: 25922 StoreEc: 0x476
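
Not a solution, but a few hedged starting points for gathering more information, run from the Exchange Management Shell (the server name below is a placeholder):

# show monitors that are not healthy on the target server (WLM relinquishes are driven by these)
Get-ServerHealth -Identity EXCHANGE2016 | Where-Object { $_.AlertValue -ne "Healthy" } |
    Format-Table Name, HealthSetName, AlertValue

# restart just the Mailbox Replication Service instead of the whole server
Restart-Service MSExchangeMailboxReplication

# watch the move after the restart
Get-MoveRequest | Get-MoveRequestStatistics |
    Format-Table DisplayName, StatusDetail, PercentComplete, BytesTransferred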

Icecast port forwarding not working

Posted: 15 Jun 2022 10:06 PM PDT

I have a PC that I'm running an Icecast server on. I can access the server on my LAN at 192.168.1.2:8000. In my Netgear router I forwarded external port 8000 to 192.168.1.2:8000.

However when I try to access externalip:8000 the connection times out. What could be causing this?

Edit: I should add that I've tried using two different PCs for the server, in two different locations, with two different ISPs and routers. Both are Windows computers.
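
Two quick checks worth making (a sketch, assuming a default Icecast install on Windows, per the edit above). Note also that many consumer routers do not support NAT loopback, so the externalip:8000 test should be made from outside the LAN, e.g. from a phone on mobile data:

REM confirm Icecast listens on all interfaces, not just 127.0.0.1
netstat -an | findstr :8000

REM allow inbound TCP 8000 through Windows Firewall (the rule name is arbitrary)
netsh advfirewall firewall add rule name="Icecast 8000" dir=in action=allow protocol=TCP localport=8000

If netstat shows 127.0.0.1:8000 instead of 0.0.0.0:8000, adjust the <bind-address> inside the <listen-socket> block of icecast.xml.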

Dell R900 memory bank mismatch

Posted: 15 Jun 2022 09:02 PM PDT

We've got two Dell R900 servers deployed with a well-known managed hosting provider in the US. One of the Dell R900 servers has had its 128 GB of memory (32x 4 GB) swapped out six times now. Each time, the server chassis has reported the memory ECC fault at a different location than Dell OpenManage 6.5 does.

We've swapped out the complete chassis (including processors) twice and sent both back to Dell for diagnostics, and they claim they cannot find a problem.

Has anyone out there experienced anything along these lines, and do you happen to know why the chassis display and OpenManage can't agree on the failing memory bank's location?
