Thursday, May 5, 2022

Recent Questions - Server Fault

Apache <RequireAny> only when HTTPS

Posted: 05 May 2022 12:31 PM PDT

I'd like to use Basic Auth only when HTTPS is used. With a .htaccess like the one below, the user must enter the password twice:

RewriteEngine On
RewriteOptions Inherit

# Rewrite to HTTPS (except for let's encrypt)
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/.*$
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

<RequireAny>
    AuthType Basic
    AuthName "Top Secret"
    AuthUserFile /is/htdocs/***/.htpasswd
    Require valid-user
</RequireAny>

Using this file:

How can I avoid the authentication for 'http://mysite.domain' here?
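Not part of the original post, but one hedged way to get this behaviour in Apache 2.4, assuming mod_authz_core's expression support is available, is to keep the auth directives unconditional and make the authorization requirement itself depend on HTTPS (paths reused from the snippet above):

AuthType Basic
AuthName "Top Secret"
AuthUserFile /is/htdocs/***/.htpasswd

<RequireAny>
    # automatically satisfied for plain-HTTP requests, so no password prompt there
    Require expr %{HTTPS} != 'on'
    # over HTTPS a valid user is required
    Require valid-user
</RequireAny>

With this, a plain-HTTP request is let through, hits the rewrite to HTTPS, and the user is prompted only once on the HTTPS side.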

How to prevent copying of executables included in AMIs?

Posted: 05 May 2022 12:28 PM PDT

When publishing an AMI on the Marketplace or sharing an AMI with another account, how would one go about protecting the executables from being copied out?

I've looked into the documentation (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/building-shared-amis.html#public-amis-protect-yourself ), but it seems to only cover excluding paths from being part of the AMI in the first place. I could not find anything that impedes access to the filesystem.

The intent is to share an AMI with another account without the possibility of accessing the contents of the filesystem. Particularly interested in protecting access to the binaries included.

How to redirect tailscale to shadowsocks

Posted: 05 May 2022 12:13 PM PDT

How can I redirect tailscale traffic (TCP+UDP) through a shadowsocks proxy on Linux? I've tried ss-redirect with no success.

Why might an end-user IP address be different when accessing different but co-hosted websites?

Posted: 05 May 2022 11:33 AM PDT

I am trying to understand the following observation.

We have two domain names, domain1.example and domain2.example. At a DNS level, there's an A record to an anycast address. Both domains resolve to the same address.

When the same user makes an HTTPS Web request to domain1.example and domain2.example, the user's IP address (per access log) is not consistent across the two domains but is consistent for each domain. In most cases, other users have identical IP addresses in both logs.

From a pure networking point of view, the packets should be routed using the same entry in the routing table since they are going to the same IP address. It seems something higher-level in the OSI stack is domain-aware and able to alter the pathway.

What might be interfering here?

How to protect email addresses after domain name expires?

Posted: 05 May 2022 11:20 AM PDT

I'm thinking of creating a website for a friend. This includes registering a domain name and setting up a couple of email addresses (myfriend@myfriend.com, etc.)

If the friend doesn't maintain the domain registration, and the domain falls into the hands of a nefarious actor, how can I protect my friend's email addresses?

If I'm not mistaken, it is a common tactic to register a defunct domain name and impersonate the associated email addresses.

What can be done to avoid this?

Performance issues on forwarded port

Posted: 05 May 2022 10:49 AM PDT

I've been working on this for a while, without much luck.

I'm using port forwarding to expose the ssh port on an internal virtual machine to the outside world (port 8000 on the host). The host machine is Ubuntu 18.04 LTS with 132 GB of RAM and an AMD Epyc with 16/4 cores. The VM is Debian 11.

I am using ufw for general protection of the host. Connections can only be made from the campus network, and outside users have to VPN into the campus network before connecting. There are two subnets: one is a class B subnet that covers the entire campus, the other is 10.0.0.0/8, which is used by the campus VPN and wifi and is routed inside the university. The planned use of the environment is for teaching a 3rd-year OS course.

I used https://www.cyberciti.biz/faq/how-to-configure-ufw-to-forward-port-80443-to-internal-server-hosted-on-lan/ as a basis, as well as a lot of other pages, not all of whose URLs I remember.

The internal private network is 192.168.101.x. The /etc/ufw/before.rules I'm using are:

*nat
:PREROUTING ACCEPT [0:0]
# forward hostIP from campusBSubnet port 8000 to 192.168.1.101:22
# forward hostIP from campusVPNSubnet port 8000 to 192.168.1.101:22
-A PREROUTING -i br0 -d HostIP -s campusBSubnet -p tcp --dport 8000 -j DNAT --to-destination 192.168.101.2:22
-A PREROUTING -i br0 -d HostIP -s campusVPNSubnet -p tcp --dport 8000 -j DNAT --to-destination 192.168.101.2:22
# setup routing
-A POSTROUTING -s 192.168.101.0/24 ! -d 192.168.101.0/24 -j MASQUERADE
COMMIT
... rest of before.rules

One modification from the referenced webpage was the addition of the source address restriction as the *nat rule seemed to bypass the ufw default deny and allow external address to access the forwarded port.

This works, and allows me to ssh into the VM from outside with ssh -p 8000 HostName. Things seem to work well, except that x2go doesn't really work at all. The initial display shows up (and a dialog asking for the root password to create a Color Display Device), and after that, nada, no response. The x2go client complains about no response after x seconds, where x is ~30 seconds.

I moved the VM to a bridged interface, gave it an external IP address, and connected directly to the VM over the bridge. Everything works fine that way: there is a small amount of lag in the x2go interface, but it is barely noticeable, whereas over the forwarded port there is nothing at all. I did some speed comparisons using scp to copy a large file to and from the VM, both on the bridge and through the port forwarding, and while the port forwarding was a bit slower, there was less than a 10% difference in bulk transfer speed.

I am at a bit of a loss as to why there should be such a difference in performance between the port-forwarding option and the bridged version. This is testing before setting up on a server with 192 GB RAM and 180 cores for the course. We will be running approximately 90 VMs at a time out of 180 (two lab sessions), and we really can't give out 180 IPv4 addresses just for one course, so the bridge configuration is not really an option.

There appears to be a wide variety of options for x2go, and I really haven't looked at any of them, so if that is a possible solution, I wouldn't mind a pointer to which of the many options to look at.

One of the pages I read (on either Server Fault or Stack Overflow) indicated that the built-in libvirt NAT causes problems with port forwarding, so I used a routed private network and the third rule (the POSTROUTING rule) shown above. The performance issues are the same.

Right now my backup strategy is visual studio code using the ssh option to edit on an external machine and compile and run on the vm. This seems to work fine over the ssh port forwarding.

Thanks.

DKIM_INVALID in Spamassassin only for emails sent from other emails in the same server

Posted: 05 May 2022 11:47 AM PDT

I have postfix/dovecot running with spamassassin on Centos.

I have Wordpress blogs with WP_SMTP plugin. And they're configured to use SMTP to send emails.

When I send emails from contact forms on the blogs, I receive emails in the "SPAM" folder. I noticed that the headers are:

spamd[12042]: spamd: result: . 0 - ALL_TRUSTED,DKIM_INVALID,DKIM_SIGNED,HTML_MESSAGE,T_SCC_BODY_TEXT_LINE scantime=30.1,size=3544,user=vmail,uid=994,required_score=5.0,rhost=localhost,raddr=127.0.0.1,rport=56530,mid=<023501d860a6$aba8e580$02fab080$@mydomain.com>,autolearn=no autolearn_force=no  

WP_SMTP has a test feature, and I can choose which email address to send a test email to. If I choose one of the email addresses handled by my mail server, those messages are also marked with DKIM_INVALID.

However if I send exactly the same test email to mail-tester.com this is what I get: https://www.mail-tester.com/test-5oyf5qlg5

10/10 and DKIM is fine.

Also dmarcian says DKIM is fine.

When I send an email from my server to a different email on my server (on different domains), they also get DKIM_INVALID.

Can anyone please help to make spamassassin stop marking emails sent from Server X to Server X as spam (DKIM_INVALID)?

Or maybe I can gather more info, logs, etc. that could help find an answer to this problem?
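Not from the original post, but a hedged way to gather more detail, assuming spamassassin and opendkim-tools are installed (the message file name is a placeholder), is to re-run the verification by hand on a saved copy of one failing message:

# dump SpamAssassin's DKIM debug trace for a saved failing message
spamassassin -D dkim < failing-message.eml 2>&1 | grep -i dkim

# independently check that the published key for the selector resolves through the local Unbound resolver
opendkim-testkey -d anahatatantra.com -s default -vvv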

IMPORTANT! The server is using Unbound. I had to use Unbound, because Spamassassin was reporting:

ALL_TRUSTED,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,FROM_IN_TO_AND_SUBJ,HTML_MESSAGE,MIME_HTML_ONLY,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED  

And I was told that many blacklisting services don't allow queries from freely available DNS servers, which includes (or may include) whatever DNS servers my provider hands out via DHCP.

As soon as I started to use Unbound and changed

/etc/resolv.conf
options trust-ad
; generated by /usr/sbin/dhclient-script
search localdomain
nameserver 62.149.128.4
nameserver 62.149.132.4
nameserver 2001:4860:4860::8888

to:

nameserver ::1
nameserver 127.0.0.1
options trust-ad

Spamassassin stops adding URIBL_BLOCKED and starts adding DKIM_INVALID

EDIT - a new test on dkimvalidator when it's set to use the ISP DNS, and SpamAssassin says it IS VALID:

Message contains this DKIM Signature:
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=anahatatantra.com;
    s=default; t=1651775054;
    bh=eLbRT3O7M2vjPZb7RNlV024z+3AiY4K/KDRXpOHkzTs=;
    h=Date:To:From:Subject;
    b=dQHMZ2vYfNwrS97GENyHHg8nlIlAVfHep6Sa8qxRp3GlgZ/P043njercbgsB1cFOp
     ccANKwaGxUPlDJJ29u/lq66lNgmF2dqtFk8FQeQ2P2waguw+8QnXaGxYTIsQ+pgjct
     5Ejt+C/KG94iNvSvq1RKKOv9qLnugQ3yjdgDA5Fs=

Signature Information:
v= Version:         1
a= Algorithm:       rsa-sha256
c= Method:          relaxed/simple
d= Domain:          anahatatantra.com
s= Selector:        default
q= Protocol:
bh=                 eLbRT3O7M2vjPZb7RNlV024z+3AiY4K/KDRXpOHkzTs=
h= Signed Headers:  Date:To:From:Subject
b= Data:            dQHMZ2vYfNwrS97GENyHHg8nlIlAVfHep6Sa8qxRp3GlgZ/P043njercbgsB1cFOp
     ccANKwaGxUPlDJJ29u/lq66lNgmF2dqtFk8FQeQ2P2waguw+8QnXaGxYTIsQ+pgjct
     5Ejt+C/KG94iNvSvq1RKKOv9qLnugQ3yjdgDA5Fs=

Public Key DNS Lookup

Building DNS Query for default._domainkey.anahatatantra.com
Retrieved this publickey from DNS: k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCXSuXbbxQjrqMX01rwXL8qMwUxCZrjFPnZokm6TyCj9bY5c96148UKFfiOWcfAhTmIC//pL3f08Pk8scBSM34pRQ8mYQhhjnXR2JMPIeJOZ9eAparHJfxk6PNd/5O/aXzVC+1RFtSWLaUilnA+Jdafkhe/4zZ8/kKMuzxaatGXcwIDAQAB

Validating Signature

result = fail
Details: body has been altered
....

0.1 DKIM_INVALID           DKIM or DK signature exists, but is not valid

EDIT2 - a new test on dkimvalidator when it's set to use the ISP DNS, and SpamAssassin says DKIM_INVALID:

DKIM Information:
DKIM Signature

Message contains this DKIM Signature:
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=anahatatantra.com;
    s=default; t=1651776165;
    bh=eLbRT3O7M2vjPZb7RNlV024z+3AiY4K/KDRXpOHkzTs=;
    h=Date:To:From:Subject;
    b=ULfradeaT2BjEsDf9WRw3VL0sYrg73KbcGuRSEIxAvRY7azThVUvbXL7QuqHyjjG+
     MU+WHlhYHY++XMk9TC1dCeGro+UXQgZMRx0U7JA4+jvgZcTbbqegRTYijp9TW608WF
     dU2ZVHTGUx3mO2IaF/r7RyezmmCSnUaANueVK1Zo=

Signature Information:
v= Version:         1
a= Algorithm:       rsa-sha256
c= Method:          relaxed/simple
d= Domain:          anahatatantra.com
s= Selector:        default
q= Protocol:
bh=                 eLbRT3O7M2vjPZb7RNlV024z+3AiY4K/KDRXpOHkzTs=
h= Signed Headers:  Date:To:From:Subject
b= Data:            ULfradeaT2BjEsDf9WRw3VL0sYrg73KbcGuRSEIxAvRY7azThVUvbXL7QuqHyjjG+
     MU+WHlhYHY++XMk9TC1dCeGro+UXQgZMRx0U7JA4+jvgZcTbbqegRTYijp9TW608WF
     dU2ZVHTGUx3mO2IaF/r7RyezmmCSnUaANueVK1Zo=

Public Key DNS Lookup

Building DNS Query for default._domainkey.anahatatantra.com
Retrieved this publickey from DNS: k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCXSuXbbxQjrqMX01rwXL8qMwUxCZrjFPnZokm6TyCj9bY5c96148UKFfiOWcfAhTmIC//pL3f08Pk8scBSM34pRQ8mYQhhjnXR2JMPIeJOZ9eAparHJfxk6PNd/5O/aXzVC+1RFtSWLaUilnA+Jdafkhe/4zZ8/kKMuzxaatGXcwIDAQAB

Validating Signature

result = fail
Details: body has been altered
...

0.1 DKIM_INVALID           DKIM or DK signature exists, but is not valid

Thus it seems to me there is no difference for emails sent "outside", for example to mail-tester or dkimvalidator.

The problem see

OpenVPN - how to have client web servers available on server's entire local network

Posted: 05 May 2022 11:31 AM PDT

Currently my OpenVPN client successfully connects to my OpenVPN Ubuntu server, and from my server's browser it is accessible at 10.1.0.25. How can I make it so that the client can be accessed from every computer in the server's local network under the IP 10.1.0.25?
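Not part of the question, but a minimal sketch of the usual approach, assuming the server's LAN is 192.168.1.0/24 on eth0 and the VPN uses tun0 (all of these names and subnets are placeholders): enable forwarding on the OpenVPN server and either route or NAT the LAN towards the tunnel.

# on the OpenVPN server
sysctl -w net.ipv4.ip_forward=1

# preferred: add a route on the LAN router, e.g. 10.1.0.0/24 via the OpenVPN server's LAN IP
# fallback if the router can't be changed: NAT LAN traffic onto the tunnel
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o tun0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT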

Exchange Online Message Records Management - Online Archive by Modified Date?

Posted: 05 May 2022 10:10 AM PDT

So I followed the Microsoft documentation to get this set up --> https://docs.microsoft.com/en-us/microsoft-365/compliance/set-up-an-archive-and-deletion-policy-for-mailboxes?view=o365-worldwide. Specifically, I'm looking to take any Exchange items more than 2 years old and move them over into the users' Online Archive.

After implementing this, I now see that item age is determined by creation date, not modified date. Going back through the configuration steps, there doesn't seem to be an option to change that logic. I'll include a screenshot of the M365 portal page where I configure the retention tag. Also included is a PowerShell command showing the TriggerForRetention property for the retention tag. This property can't be modified, so I'm apparently stuck.
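For reference, the PowerShell check mentioned above might look roughly like this (a sketch; the tag name is hypothetical):

Get-RetentionPolicyTag "Move to Archive after 2 years" |
    Format-List Name,Type,RetentionAction,AgeLimitForRetention,TriggerForRetention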

Going into the Data Lifecycle Management area of M365 Purview, I see that I can likewise create retention labels and policies. But there doesn't appear to be an option to move items modified before a threshold to the users' Online Archive; the only options are to do nothing or to delete the items.

Can anyone offer tips on how best to accomplish what I'm looking to do?

[screenshot: the M365 compliance portal page where the retention tag is configured]

[screenshot: PowerShell output showing the retention tag's TriggerForRetention property]

When the PF firewall runs, local network traffic is slowed

Posted: 05 May 2022 10:10 AM PDT

OS FreeBSD-13.0p3

We have a gateway router (G) with three physical Ethernet interfaces. One (W1) is the WAN gateway. The other two (L1,L2) are connected to the same wire. L1 belongs to the 192.168.0.0/16 network. L2 belongs to our public routable network 123.123.123.0/25.

All single-homed internal hosts belong to one or the other networks exclusively. Some dual-homed hosts have one nic on L1 and the other on L2.

We use PF firewall on G.

If PF is not running then hosts on L2 can immediately connect to hosts on L1 and vice versa. If PF is running then there is a significant reduction in bandwidth which manifests itself with extremely long logon times and slow file transfers.

I was told that because L1 and L2 are on the same system, the OS network stack should route traffic between the two without the firewall being involved, but this seems not to be the case.

I realize that this is minimal information, however, in the event that someone recognizes the symptoms I am posting this here.

An explanation of what is going on is much appreciated.
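Not from the original post, but one hedged thing to try, assuming the two LAN-facing interfaces are em1 and em2 (placeholder names), is to exempt inter-LAN traffic from PF entirely, or at least from state tracking, since asymmetric paths between two interfaces on the same wire often break stateful filtering:

# /etc/pf.conf on G: take the LAN-facing interfaces out of filtering altogether
set skip on { em1 em2 }

# or, if filtering must stay on for those interfaces, avoid state-tracking problems
pass quick on { em1 em2 } no state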

Set header for all except one location

Posted: 05 May 2022 10:00 AM PDT

I'd like to set a header on all page request except for one. I've tried the following:

location ~ ^\/(?!allow-iframes) {
    add_header 'X-Frame-Options' 'DENY';
}

This has some unexpected behavior. It causes a 301 redirect on all pages except /allow-iframes.

I came across this but I can't use map since it only works inside http and not inside the server context.
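Not part of the question, but a hedged sketch of the map-based approach anyway: map is declared at the http{} level, yet the variable it produces can be referenced from server{} and location{} blocks, and add_header skips a header whose value is empty (the placement shown is an assumption about the rest of the config):

# http{} level
map $uri $frame_options {
    default           "DENY";
    ~^/allow-iframes  "";
}

# server{} level
add_header X-Frame-Options $frame_options;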

Contiv-vswitch pod restarts when new node joins the cluster

Posted: 05 May 2022 09:36 AM PDT

I am facing an issue with the Contiv-VPP CNI:

  1. Create a k8s cluster with one master and one worker node, using Contiv-VPP as the k8s CNI plugin.
  2. Launch any pod, e.g. nginx.
  3. Join a new worker node to the cluster.
  4. The contiv-vswitch pod gets restarted on the master and on the old worker node. How can I avoid this? Can anyone please help?

Openstack Networking

Posted: 05 May 2022 11:24 AM PDT

I'm currently setting up OpenStack with kolla-ansible Wallaby, version 12.3.1.dev95, as an all-in-one installation.

My setup in VMWare:
Workstation 14.x
VM-OS: Ubuntu Server20.04 LTS
1 Bridge-Mode network interface
2 private Host-only networks (1 with DHCP network 203.1.2.0/25; range 203.1.2.1-203.1.2.126).
All networks are attached to the openstack-VM (ens33, ens34 and ens35).
ens34: 203.1.2.4/24
ens35: no IP assigned

The configuration for the globals.yml:
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
network_interface: ens34
neutron_external_interface: ens35
kolla_internal_vip_address: "203.1.2.4"
enable_haproxy: "no"
nova_console: "spice"

My Problem:
After setting up a public network with the following command

openstack network create --external --provider-physical-network physnet1 \
  --provider-network-type flat public

and a subnet

openstack subnet create --no-dhcp \
  --allocation-pool start=203.1.2.150,end=203.1.2.199 --network public \
  --subnet-range 203.1.2.128/25 public-subnet

The public network is connected to a router (203.1.2.176), which should be pingable but isn't; from my point of view, the standard rules should allow that.

If you need more information, I'm happy to provide it :)

PS: I hope this is somehow an acceptable description of my problem ^^'

How to identify application that is generating ICMP echo requests on Windows 10?

Posted: 05 May 2022 10:41 AM PDT

My company security team has informed me that my workstation is pinging some "blacklisted" IP addresses. The enterprise security tool reporting this information sits in place of the usual Windows firewall, but it seems it is unable to tell which process is the culprit.

I had the device rebuilt about six months ago for the same reason, and I'm pretty sure it's just an application that's using a content-delivery network that happens to have also been used by some malware at some point; hence the blocklisted IPs.

Normally in this situation a combination of Wireshark, netstat, TCPView and other tools would help me nail down which process is generating the traffic. For ICMP echo requests, however, it seems that the source process is always a system DLL.

Some googling led to a page with advice on how to narrow down the process by checking which processes have icmp.dll or iphlpapi.dll loaded. I've currently got dozens of processes with iphlpapi.dll loaded, so trying to narrow down which might be sending these requests is going to take quite some time.

Another issue is that these ICMP requests are sent very infrequently. Maybe a couple of times a day. So at the point I'm looking, the process might not even be running.

What I really need is a tool that I can leave running which will look for ICMP requests to these IP addresses, and as soon as they're seen it will identify the process that made them. Does such a thing exist? Is there another low-effort approach that I'm missing?
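Not mentioned in the question, but one low-effort approach worth sketching, assuming Windows Filtering Platform auditing is available on this build: enable connection auditing and later filter the Security log for ICMP (protocol 1) events, which include the originating application's path.

rem enable Windows Filtering Platform connection auditing
auditpol /set /subcategory:"Filtering Platform Connection" /success:enable /failure:enable

rem later, pull recent events for protocol 1 (ICMP) and read the Application Name field
wevtutil qe Security /q:"*[System[(EventID=5156)] and EventData[Data[@Name='Protocol']='1']]" /c:20 /f:text /rd:true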

Linux User Creation & permission Script issue

Posted: 05 May 2022 11:12 AM PDT

Based on my requirements for a dev server, I have created a Linux function that will create a dev user, but the code I have tried has some issues. My requirements are like this:

  1. Take the users's first name and create a linux user
  2. usermod -a -G devs $1
  3. mkdir /home/$1/.ssh
  4. chmod 700 /home/$1/.ssh
  5. touch /home/$1/.ssh/authorized_keys
  6. chmod 600 /home/$1/.ssh/authorized_keys
  7. mkdir /var/www/html/$1-dev.abc.com/
  8. mkdir /var/log/httpd/$1-dev.abc.com/
  9. chown $1.devs /var/www/html/$1-dev.abc.com/
  10. chown $1.devs /var/log/httpd/$1-dev.abc.com/

If any of the $1 directories already exist, it should print a notice to the executor. Each developer will have their own key, so we can't copy one over; we could pass the key in when this is executed, or have the script generate a key and output it on completion.

I have tried this :

#!bin/bash
sudo useradd -g devs username
sudo usermod -a -G devs exampleusername
mkdir -m 700 ${HOME}/.ssh
touch $HOME/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
sudo chown -R $USER:$USER /var/www/html/dev.mysite.com/
sudo mkdir -p /var/log/dev.mysite.com/
sudo chmod -R 744 /var/log/dev.mysite.com/
sudo chown -R username /var/www/html/dev.mysite.com/
sudo chown ${USER:=$(/usr/bin/id -run)}:$USER /var/www/$dev.mysite.com

Can anyone advise me on what I have done wrong in the script?
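Not part of the question, but a minimal corrected sketch of the function the requirements above describe (group name and paths taken from the list; run via sudo or as root):

#!/bin/bash
# create_dev_user <firstname> - sketch implementing requirements 1-10 above
create_dev_user() {
    local u="$1"

    useradd -m "$u"                          # 1. create the user with a home directory
    usermod -a -G devs "$u"                  # 2. add to the devs group

    mkdir -p "/home/$u/.ssh"                 # 3./4. ssh dir, mode 700
    chmod 700 "/home/$u/.ssh"
    touch "/home/$u/.ssh/authorized_keys"    # 5./6. empty authorized_keys, mode 600
    chmod 600 "/home/$u/.ssh/authorized_keys"
    chown -R "$u:$u" "/home/$u/.ssh"

    for d in "/var/www/html/$u-dev.abc.com" "/var/log/httpd/$u-dev.abc.com"; do
        if [ -d "$d" ]; then
            echo "NOTICE: $d already exists, leaving it as is" >&2   # notice to the executor
        else
            mkdir -p "$d"                    # 7./8. per-user web and log directories
        fi
        chown "$u:devs" "$d"                 # 9./10. owned by the user, group devs
    done
}

create_dev_user "$1"

The developer's own public key can then be appended to /home/<firstname>/.ssh/authorized_keys when it is handed over.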

Windows server 2019 activation

Posted: 05 May 2022 12:27 PM PDT

today something weird happening,

For the first time I installed a Windows server. Everything was working fine, but I had installed the eval version of Server 2019, converted the server to a Domain Controller, and added the AD role.

So of course, once your server is a DC you can't change the eval version to the Standard version, and I was blocked from activating with my licence key.

I searched for many hours and tried cmd tools like "DISM" and "slmgr", etc... nothing worked.

But today I connected to my server via RDP and magically it's on Standard edition, with nothing lost: my DC is still OK and my AD is still working.

Does anyone know why my server decided to switch?

Internal IP is used by an internal HTTP load balancer, but the IP address shows in use by "none"

Posted: 05 May 2022 10:40 AM PDT

I created an internal HTTP load balancer with the Terraform code below.

I can see the internal HTTP load balancer is running with the internal IP (var.my_iip).

But in VPC network > IP addresses, the "in use by" field of the internal IP (var.my_iip) is "none".

Why is it not "Forwarding rule http-ife", and how can I make it right (to prevent someone deleting this IP because it seems to be unused)?

resource "google_compute_address" "ilb-iip" {    name         = "ilb-iip"    address      = var.my_iip    address_type = "INTERNAL"    purpose      = "SHARED_LOADBALANCER_VIP"    subnetwork   = var.my_subnetwork  }  resource "google_compute_forwarding_rule" "http-ife" {    name                  = "http-ife"    ip_address            = google_compute_address.ilb-iip.self_link    ip_protocol           = "TCP"    load_balancing_scheme = "INTERNAL_MANAGED"    network               = var.my_network    port_range            = "80"    subnetwork            = var.my_subnetwork    target                = google_compute_region_target_http_proxy.target.id    region                = var.my_region    depends_on            = [google_compute_subnetwork.lb_subnet]  }  

Exchange Server Autodiscover external URL

Posted: 05 May 2022 10:09 AM PDT

All.

I am nearing the completion of the setup of our Exchange Server. The process has been a challenge, but I have enjoyed learning new things.
What I am working on now is setting up autodiscover.
I cannot find any good information on properly setting up the external URL.
Does the external URL have to reside on the Exchange Server, or can it be on the web server?
If it has to be on the Exchange Server, what port do I have to assign in my router for it, since 443 is already used by the web server?

The internal link is this.

https://mail-03.domain1.local/EWS/Exchange.asmx  

I found a script for creating the link.

Set-WebServicesVirtualDirectory -Identity "Contoso\EWS (Default Web Site)" -ExternalUrl https://www.contoso.com/EWS/exchange.asmx -BasicAuthentication $true -InternalUrl https://contoso.internal.com/EWS/exchange.asmx

I have many domains which will have an email for each in the Exchange Server.
So I am assuming I will need to set up the linkage for each domain.
Using Thunderbird Email Client, it looks for the

https://www.contoso.com/EWS/exchange.xml     

not .asmx, so does that mean I will have to set up for both of them, and if so, how?

The files in the folder for the internal URL are config files.
Do I need to copy this folder to the External URL Folder and point the link at it?

I found this information about the DNS here SE/SF Autodiscover does not work - Exchange 2016
But he does not provide enough information in his write-up on the DNS.
Do I need to add the DNS entry in the

  1. DC Server DC DNS
  2. Web Server DNS

Thanks for any information you all can help me with. And if there has already been an explanation on SE for this information, by all means, please provide a link, as it did not show up in the suggestions for the title.

--UPDATE-- (I will post more as I continue reading)
I have many domains for all our different company sites.
I used LetsEncrypt for the SSL Cert for all the domains in a single Cert using the Wildcard method.
Within our IIS Server, each domain has an autodiscover.domain.com Binding.
In the SSL cert, each domain shows that autodiscover subdomain binding.

Reading what "joyceshen" supplied in her post.

  1. Http redirect: (With a cert with multiple domains, will I still have to use something like this?)

  2. SRV autodiscover method: (This is the one I will test out first and see how it goes. I might use it if it works; see the record sketch after this list.)
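Not from the original post, but a sketch of what the SRV method's DNS record typically looks like (the target host name is a placeholder):

; in the external DNS zone for domain1.com
_autodiscover._tcp.domain1.com.  3600  IN  SRV  0 0 443  mail.domain1.com.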

--UPDATE DAY II--
I am making progress.
I am using the Redirect Method; with all DNS entries done, I could load Outlook with autodiscover.domain1.com.

I was able to load Outlook through the external domain because I followed the DNS information for the internal DNS and created the two zones, Mail and Autodiscover.domain1.com; that is why I was able to load it. I still have not been able to load it through the actual external redirect method yet.

Thanks, Wayne

Redirect to folder only if root from other site referrer

Posted: 05 May 2022 10:12 AM PDT

I have a redirect from the root to a subfolder. If a user visits https://example.com it redirects to https://example.com/subfolder. But I want it not to redirect when the referrer is my own site, so the user can reach the root page.

For example:

  1. User visits https://example.com
  2. It redirects to https://example.com/subfolder
  3. User visits https://example.com/subfolder/file.html
  4. there's a link on this page to https://example.com and he follows it
  5. It must open https://example.com and not redirect

Here is my .htaccess:

RedirectMatch ^/$ https://example.com/  

Please give me some advice on solving the problem; I'm not good with .htaccess rules.
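Not part of the question, but a hedged sketch of one way to do this with mod_rewrite, replacing the RedirectMatch line (note that the Referer header can be absent or stripped by clients, so this is best-effort):

RewriteEngine On
# redirect only the bare root, and only when the visitor did not come from our own site
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule ^$ /subfolder/ [R=302,L]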

Can I stop a SPF SOFTFAIL in Gmail when sending to and from addresses that have a mail alias?

Posted: 05 May 2022 09:29 AM PDT

I have a vanity domain (mydomain.com) hosted by Gandi and configured with mail aliases for my family that point to our respective Gmail addresses:

Alias                    Real address
me@mydomain.com          me5678@gmail.com
mybrother@mydomain.com   brother1234@gmail.com

and so on.

Gmail is configured to send email as the vanity address and I also have a SPF record set up:

v=spf1 include:_spf.google.com include:_spf.gpaas.net include:_mailcust.gandi.net ?all  

Although mail-tester.com reports that the SPF is set up correctly, it's possible to get a SOFTFAIL when sending an email from [anyone]@mydomain.com to [anyone else]@mydomain.com:

Sent from            Sent to                   SPF result of email
me5678@gmail.com     brother1234@gmail.com     PASS
me5678@gmail.com     mybrother@mydomain.com    PASS
me@mydomain.com      brother1234@gmail.com     PASS
me@mydomain.com      mybrother@mydomain.com    SOFTFAIL

The headers when the email SOFTFAILs are as follows:

Delivered-To: brother1234@gmail.com
ARC-Authentication-Results: i=1; mx.google.com;
       spf=softfail (google.com: domain of transitioning me5678@gmail.com does not designate 2001:4b98:dc4:8::230 as permitted sender) smtp.mailfrom=me5678@gmail.com
Return-Path: <me5678@gmail.com>
Received: from relay10.mail.gandi.net (relay10.mail.gandi.net. [2001:4b98:dc4:8::230])
        by mx.google.com with ESMTPS id w4-20020a05600018c400b0020ac7a84cb7si9021160wrq.441.2022.05.01.02.22.05
        for <brother1234@gmail.com>
        (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
        Sun, 01 May 2022 02:22:06 -0700 (PDT)
Received-SPF: softfail (google.com: domain of transitioning me5678@gmail.com does not designate 2001:4b98:dc4:8::230 as permitted sender) client-ip=2001:4b98:dc4:8::230;
Authentication-Results: mx.google.com;
       spf=softfail (google.com: domain of transitioning me5678@gmail.com does not designate 2001:4b98:dc4:8::230 as permitted sender) smtp.mailfrom=me5678@gmail.com
Received: from spool.mail.gandi.net (spool3.mail.gandi.net [217.70.178.212]) by relay.mail.gandi.net (Postfix) with ESMTPS id 51151240003 for <brother1234@gmail.com>; Sun, 1 May 2022 09:22:05 +0000 (UTC)
X-Envelope-To: mybrother@mydomain.com
Received: from mail-lf1-f48.google.com (mail-lf1-f48.google.com [209.85.167.48]) by spool.mail.gandi.net (Postfix) with ESMTPS id 49A2CAC0C45 for <mybrother@mydomain.com>; Sun, 1 May 2022 09:22:04 +0000 (UTC)
Received: by mail-lf1-f48.google.com with SMTP id w19so20836346lfu.11
        for <mybrother@mydomain.com>; Sun, 01 May 2022 02:22:04 -0700 (PDT)
Received: from smtpclient.apple (cpc1-sotn14-2-0-cust79.15-1.cable.virginm.net. [81.96.148.80])
        by smtp.gmail.com with ESMTPSA id r7-20020a2e8e27000000b0024f3d1dae9asm761964ljk.34.2022.05.01.02.22.02
        for <mybrother@mydomain.com>
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Sun, 01 May 2022 02:22:02 -0700 (PDT)
From: Me <me@mydomain.com>
To: My Brother <mybrother@mydomain.com>
Received-SPF: pass (spool3: domain of gmail.com designates 209.85.167.48 as permitted sender) client-ip=209.85.167.48; envelope-from=me5678@gmail.com; helo=mail-lf1-f48.google.com;
Authentication-Results: spool.mail.gandi.net; dkim=none; dmarc=none; spf=pass (spool.mail.gandi.net: domain of me5678@gmail.com designates 209.85.167.48 as permitted sender) smtp.mailfrom=me5678@gmail.com

Is there any way I can stop emails sent from mydomain.com to another address at mydomain.com failing SPF?

I am unable to provide a valid PEM file to HaProxy despite validating the PEM file and installing the self-signed certificate in the correct places

Posted: 05 May 2022 10:03 AM PDT

I will post my private key in its entirety because it is an example for development and debugging purposes.

This is the process by which I have created my PEM file:

https://serversforhackers.com/c/using-ssl-certificates-with-haproxy

sudo openssl genrsa -out example.dev.key 1024
sudo openssl req -new -key example.dev.key -out example.dev.csr
sudo openssl x509 -req -days 365 -in example.dev.csr -signkey example.dev.key -out example.dev.crt
sudo cat example.dev.crt example.dev.key | sudo tee example.dev.pem

This is a self-signed certificate. The PEM file looks like this:

-----BEGIN CERTIFICATE-----
MIICcjCCAdsCFAFD5rYw03KeoC3z95/Ahl2O1x38MA0GCSqGSIb3DQEBCwUAMHgx
CzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRAwDgYDVQQHDAdGcmVt
b250MR4wHAYDVQQKDBVFdmVyZXggQ29tbXVuaWNhdGlvbnMxDDAKBgNVBAsMA2Rl
djEUMBIGA1UEAwwLZGV2ZWxvcG1lbnQwHhcNMjAxMjA5MTkwNTAyWhcNMjExMjA5
MTkwNTAyWjB4MQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEQMA4G
A1UEBwwHRnJlbW9udDEeMBwGA1UECgwVRXZlcmV4IENvbW11bmljYXRpb25zMQww
CgYDVQQLDANkZXYxFDASBgNVBAMMC2RldmVsb3BtZW50MIGfMA0GCSqGSIb3DQEB
AQUAA4GNADCBiQKBgQC8VVayel+HPW7xzmiy4SM/+JlkdUrcy9x8dR4ChxJ4ic+6
FgmsfDu+IE31GfzGBOnPsT+eVGI+0oX4mJpM0axnwdI7U25CrVSOe7pvTaNPlmWc
ptVb/DZQU+JjSna1fUuGU+f+Ile5U6YAuBjWDzfvizF9viGc95pIcpjftQYxTwID
AQABMA0GCSqGSIb3DQEBCwUAA4GBAKp3won6y2Y712spVbxjklzSgYRQCi8aBFZd
Nv3B5YvPg+MXIKCvj9fDdAwqQU1eBj5arF9bL2NixAdT3P+77db3FSPeje5CRd57
tHhRzFb0GSjiW304VI+v/ptfb95jA5CwvAF9uMvNHrTVT8Aln3pldpitvtP5LtpM
qa0iD2xG
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQC8VVayel+HPW7xzmiy4SM/+JlkdUrcy9x8dR4ChxJ4ic+6Fgms
fDu+IE31GfzGBOnPsT+eVGI+0oX4mJpM0axnwdI7U25CrVSOe7pvTaNPlmWcptVb
/DZQU+JjSna1fUuGU+f+Ile5U6YAuBjWDzfvizF9viGc95pIcpjftQYxTwIDAQAB
AoGAVJQes1ixvhKg2IdSDcN+CSSj/rGORUpoYpxWNdxjNy7s0y1CeuvwCJqJaCGb
m3JpbpSzdW+AD6aL8/DUmtsvCUQEg4g49K8j/z942HqatZS3Zuucf7SHzWlvze+N
JDrQabnq53+zAvfqf8+8fhbuRxoanl120kdVcIutPgN38cECQQDsnBpKNRGc6jA8
+lh4KciqQ+32gY7Z+dsSW4i1vufLnTkbS1rO2DL0ruuZzjnOT4FuE7SU4T1N4Qni
3PRNG4PjAkEAy8RuuGWuzULJ7mE/F9SKnYcaugoVNsSMNTwObTytIQtvb66l4M06
Jp/BvLJy5Atys/WzhQ6xXuxR9zmS44QQpQJBAOooirQJ1QZvlZGjR86Tu20VkPi1
uwPpi26de6wx4//T9uIWLyYpPDR+r9clCnwsnrCre7kjN6JNJZWIiZWNt3UCQQCo
fYoMIdBz2+k7mt/f5Zik/2VjNhkqi0Vgc4N+YjDKZTlFAQYap7iQ3YMGdAw6cxjq
o51Ixch2tDRmmA3U4YwdAkBY19AJj4ZwcyfjDs9z7gy4MQcJp+YVNcuTLEd6z5HK
5ECCSy5S0UPwwxleUPUIv/nNrAchTUt5UtlgQuO7arPX
-----END RSA PRIVATE KEY-----

Ok now that I have this, I validated it as follows. ALL KEYS HAVE R/W PERMISSIONS FOR ALL USERS:

https://www.ssl247.com/kb/ssl-certificates/troubleshooting/certificate-matches-private-key

openssl x509 -noout -modulus -in example.dev.crt | openssl md5

openssl rsa -noout -modulus -in example.dev.key | openssl md5

openssl req -noout -modulus -in example.dev.csr | openssl md5

I am on Ubuntu 20.04. I installed the key in /etc/ssl and /etc/haproxy:

http://gagravarr.org/writing/openssl-certs/others.shtml#selfsigned-openssl

$ cd /etc/ssl
$ ln -s example.dev.crt `openssl x509 -hash -noout -in example.dev.crt`.0

ajorona@ajorona-box:/etc/haproxy$ ls -l
total 12
drwxr-xr-x 2 root root 4096 Dec  8 17:55 errors
-rw-r--r-- 1 root root 1795 Dec  9 12:28 example.dev.pem

Now my haproxy.cfg file has the following lines:

bind *:443 ssl crt /etc/haproxy/example.dev.pem
redirect scheme https if !{ ssl_fc }

I validate my haproxy.cfg:

ajorona@ajorona-box:~/server $ haproxy -c -f haproxy.cfg
[ALERT] 343/123930 (114320) : parsing [haproxy.cfg:29] : 'bind *:443' : unable to load SSL certificate from PEM file '/etc/haproxy/example.dev.pem'.
[ALERT] 343/123930 (114320) : Error(s) found in configuration file : haproxy.cfg
[ALERT] 343/123930 (114320) : Fatal errors found in configuration.
ajorona@ajorona-box:~/server $ sudo haproxy -c -f haproxy.cfg
[ALERT] 343/123933 (114444) : parsing [haproxy.cfg:29] : 'bind *:443' : unable to load SSL certificate from PEM file '/etc/haproxy/example.dev.pem'.
[ALERT] 343/123933 (114444) : Error(s) found in configuration file : haproxy.cfg
[ALERT] 343/123933 (114444) : Fatal errors found in configuration.

I've spent a full day on this, I can't really figure out why this is happening...
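Not from the original post, but two hedged observations: the validation commands above were pasted with en dashes rather than ASCII hyphens, and on Ubuntu 20.04 the default OpenSSL policy (SECLEVEL=2) rejects 1024-bit RSA keys, which HAProxy surfaces only as a generic PEM load failure. A sketch, assuming that is the cause:

# the earlier validation commands, with plain hyphens
openssl x509 -noout -modulus -in example.dev.crt | openssl md5
openssl rsa  -noout -modulus -in example.dev.key | openssl md5

# if the key size is the problem, regenerate at 2048 bits and rebuild the PEM
openssl genrsa -out example.dev.key 2048
openssl req -new -x509 -days 365 -key example.dev.key -out example.dev.crt -subj "/CN=example.dev"
cat example.dev.crt example.dev.key | sudo tee /etc/haproxy/example.dev.pem
sudo haproxy -c -f haproxy.cfg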

List the name servers in the DNS Zone properties GUI with Powershell

Posted: 05 May 2022 10:03 AM PDT

I have read through a listing of all the DNS PowerShell cmdlets and can't believe I didn't find a programmatic way of reading the list below. Am I missing it, or is there a .NET way to do it that I also didn't find?

[screenshot: the Name Servers tab of the DNS zone's properties dialog]
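Not part of the question, but a hedged sketch: the Name Servers tab is normally just the NS record set at the zone apex, so it can usually be read with the DnsServer module (the zone name is a placeholder):

Get-DnsServerResourceRecord -ZoneName "example.com" -Name "@" -RRType NS |
    Select-Object -ExpandProperty RecordData |
    Select-Object -ExpandProperty NameServer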

Unable to authenticate OpenLDAP users on macOS clients "user not found: no secret in database"

Posted: 05 May 2022 09:39 AM PDT

I have an Ubuntu 16.04 server that is running an OpenLDAP server. I am able to see everything just fine:

serveradmin@Magic:~$ ldapsearch -x -H ldap://localhost -D cn=admin,dc=example,dc=com -W
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=com> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# example.com
dn: dc=example,dc=com
objectClass: top
objectClass: dcObject
objectClass: organization
o: work
dc: example

# admin, example.com
dn: cn=admin,dc=example,dc=com
objectClass: simpleSecurityObject
...

# Groups, example.com
dn: ou=Groups,dc=example,dc=com
objectClass: organizationalUnit
ou: Groups

...

# Policies, example.com
dn: ou=Policies,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Policies
description: Password policy for users

# foo, People, example.com
dn: uid=foo,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
sn: foo
...

On my any macOS client of my choosing from OS High Sierra down to El Capitan I can run:

a0216:data admin$ dscl localhost -list /LDAPv3/example.com/Users
foo
bar
...

which retrieves a list of all of my users.


I use a MacBook Pro 2016 edition running High Sierra. When I attempt to authenticate as a user under Directory Utility, it successfully allows me to authenticate:

Jun 14 16:36:23 magic slapd[344850]: conn=1276 op=1 SRCH attr=uidNumber uid userPassword
Jun 14 16:36:23 magic slapd[344850]: conn=1276 op=1 SEARCH RESULT tag=101 err=0 nentries=1 text=
Jun 14 16:36:24 magic slapd[344850]: conn=1277 fd=19 ACCEPT from IP=10.0.1.20:65410 (IP=0.0.0.0:389)
Jun 14 16:36:24 magic slapd[344850]: conn=1277 fd=19 closed (connection lost)
Jun 14 16:36:24 magic slapd[344850]: conn=1278 fd=19 ACCEPT from IP=10.0.1.20:65411 (IP=0.0.0.0:389)
Jun 14 16:36:24 magic slapd[344850]: conn=1278 op=0 SRCH base="" scope=0 deref=0 filter="(objectClass=*)"
Jun 14 16:36:24 magic slapd[344850]: conn=1278 op=0 SRCH attr=supportedSASLMechanisms defaultNamingContext namingContexts schemaNamingContext saslRealm
Jun 14 16:36:24 magic slapd[344850]: conn=1278 op=0 SEARCH RESULT tag=101 err=0 nentries=1 text=
Jun 14 16:36:24 magic slapd[344850]: conn=1278 op=1 BIND dn="uid=foo,ou=People,dc=example,dc=com" method=128
Jun 14 16:36:24 magic slapd[344850]: conn=1278 op=1 BIND dn="uid=foo,ou=People,dc=example,dc=com" mech=SIMPLE ssf=0
Jun 14 16:36:24 magic slapd[344850]: conn=1278 op=1 RESULT tag=97 err=0 text=

However if I attempt the same thing from an iMac running either High Sierra, or El Capitan I get the following:

Jun 14 16:40:04 magic slapd[344850]: conn=1345 op=3 SRCH attr=uidNumber uid userPassword
Jun 14 16:40:04 magic slapd[344850]: conn=1345 op=3 SEARCH RESULT tag=101 err=0 nentries=1 text=
Jun 14 16:40:04 magic slapd[344850]: conn=1358 fd=19 ACCEPT from IP=10.0.1.67:49545 (IP=0.0.0.0:389)
Jun 14 16:40:04 magic slapd[344850]: conn=1358 fd=19 closed (connection lost)
Jun 14 16:40:04 magic slapd[344850]: conn=1359 fd=19 ACCEPT from IP=10.0.1.67:49546 (IP=0.0.0.0:389)
Jun 14 16:40:04 magic slapd[344850]: conn=1359 op=0 SRCH base="" scope=0 deref=0 filter="(objectClass=*)"
Jun 14 16:40:04 magic slapd[344850]: conn=1359 op=0 SRCH attr=supportedSASLMechanisms defaultNamingContext namingContexts schemaNamingContext saslRealm
Jun 14 16:40:04 magic slapd[344850]: conn=1359 op=0 SEARCH RESULT tag=101 err=0 nentries=1 text=
Jun 14 16:40:04 magic slapd[344850]: conn=1359 op=1 BIND dn="" method=163
Jun 14 16:40:04 magic slapd[344850]: conn=1359 op=1 RESULT tag=97 err=14 text=SASL(0): successful result: security flags do not match required
Jun 14 16:40:04 magic slapd[344850]: conn=1359 op=2 BIND dn="" method=163
Jun 14 16:40:04 magic slapd[344850]: SASL [conn=1359] Failure: no secret in database
Jun 14 16:40:04 magic slapd[344850]: conn=1359 op=2 RESULT tag=97 err=49 text=SASL(-13): user not found: no secret in database
Jun 14 16:40:04 magic slapd[344850]: conn=1359 op=3 UNBIND
Jun 14 16:40:04 magic slapd[344850]: conn=1359 fd=19 close

I have tried everything that I can possibly think of and have a feeling the answer is staring me directly in the face. Does anybody know why I keep getting "no secret in database" while attempting to log in from an iMac, and is there a simple solution to this issue?

I have done some research and come across a few things (e.g. this); however, the directions and ideas I have found are not clear, and it seems to work differently for everyone. Any help or a point in the right direction would be greatly appreciated. Thank you.

How do I completely turn off Windows Defender from PowerShell?

Posted: 05 May 2022 09:44 AM PDT

Set-MpPreference -DisableRealtimeMonitoring disables the first one; what are the specific switches to disable the others you see when you open the UI? I have not found a clear example of this in the docs, and I don't feel like running EVERY disable switch because the MS docs are bad.
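Not part of the question, but a hedged sketch of the switches that roughly correspond to the UI toggles (Tamper Protection, if enabled, can silently undo these, and it cannot be changed through Set-MpPreference):

Set-MpPreference -DisableRealtimeMonitoring $true    # Real-time protection
Set-MpPreference -DisableBehaviorMonitoring $true    # behaviour monitoring
Set-MpPreference -DisableIOAVProtection $true        # scanning of downloaded files and attachments
Set-MpPreference -DisableScriptScanning $true        # script scanning
Set-MpPreference -MAPSReporting 0                    # Cloud-delivered protection off
Set-MpPreference -SubmitSamplesConsent 2             # Automatic sample submission: never send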

[screenshot: the protection settings toggles shown in the Windows Security UI]

nxlog parse_csv fails on odd character in IIS log file

Posted: 05 May 2022 12:05 PM PDT

I am parsing IIS logs.

This error shows in my nxlog eventlog:

2013-12-24 18:40:20 ERROR if-else failed at line 50, character 351 in C:\Program Files (x86)\nxlog\conf\nxlog.conf. statement execution has been aborted; procedure 'parse_csv' failed at line 50, character 225 in C:\Program Files (x86)\nxlog\conf\nxlog.conf. statement execution has been aborted; Invalid CSV input: '2012-06-20 14:31:37 10.1.0.16 GET /App_Themes/Authenticated/Styles/index.jsp - 80 - 192.168.0.93 "|dir 302 0 0 62'  

This is a better view of the line(s) in the log file

2012-06-20 14:31:37 10.1.0.16 GET /App_Themes/Authenticated/Styles/index.jsp - 80 - 192.168.0.93 |dir 302 0 0 62  

This is my nxlog.conf file; I cut it off at line 51 because the rest is just routing.

define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log

SuppressRepeatingLogs TRUE
LogLevel INFO


#<Extension fileop>
#    Module      xm_fileop
#    <Schedule>
#        Every   1 hour
#        Exec    file_cycle('%ROOT%\data\nxlog.log', 5);
#    </Schedule>
#</Extension>

<Extension syslog>
    Module      xm_syslog
</Extension>

<Extension json>
    Module      xm_json
</Extension>

<Extension w3c>
    Module      xm_csv
    Fields      $date, $time, $s-ip, $cs-method, $cs-uri-stem, $cs-uri-query, $s-port, $cs-username, $c-ip, $cs-User-Agent, $sc-status, $sc-substatus, $sc-win32-status, $time-taken
    FieldTypes  string, string, string, string, string, string, string, string, string, string, string, string, string, string
    Delimiter   ' '
    QuoteChar   '"'
    EscapeControl FALSE
    UndefValue  -
</Extension>

<Input iis_in>
    Module  im_file
    File    "f:\\iislogs\\u_*.log"
    ReadFromLast FALSE
    Exec    if $raw_event =~ /^#/ drop();                        \
            else                                                 \
            {                                                    \
                w3c->parse_csv();                                \
                $EventTime = parsedate($date + " " + $time);     \
                to_json();                                       \
            }
</Input>
  1. Is this something I should worry about?
  2. Is this bombing out because of the " character?
  3. How can I avoid this?

EDIT: Further research shows that this is probably due to incorrectly coded characters in the "cs(User-Agent)" Field. Not sure what to do about that.

#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
2012-02-04 22:09:37 10.1.0.16 GET /login.aspx - 80 - 192.168.0.93 +xa7 403 4 5 218
2012-02-04 22:09:37 10.1.0.16 GET /signup/ - 80 - 192.168.0.93 +xa7 302 0 0 15
2012-02-04 22:09:37 10.1.0.16 GET /signup/ - 80 - 192.168.0.93 " 302 0 0 15
2012-02-04 22:09:37 10.1.0.16 GET /signup/ - 80 - 192.168.0.93 ߧߢ 302 0 0 0
2012-02-04 22:09:37 10.1.0.16 GET /signup/ - 80 - 192.168.0.93 𧧰"" 302 0 0 15
2012-02-04 22:09:38 10.1.0.16 GET /webresource.axd - 80 - 192.168.0.93 +xa7 404 0 0 0
2012-02-04 22:09:38 10.1.0.16 GET /webresource.axd - 80 - 192.168.0.93 " 404 0 0 0
2012-02-04 22:09:38 10.1.0.16 GET /webresource.axd - 80 - 192.168.0.93 ߧߢ 404 0 0 0
2012-02-04 22:09:38 10.1.0.16 GET /webresource.axd - 80 - 192.168.0.93 𧧰"" 404 0 0 0
2012-02-04 22:09:41 10.1.0.16 GET /contactus.aspx - 80 - 192.168.0.93 +xa7 302 0 0 15
2012-02-04 22:09:41 10.1.0.16 GET /contactus.aspx - 80 - 192.168.0.93 " 302 0 0 15
2012-02-04 22:09:41 10.1.0.16 GET /contactus.aspx - 80 - 192.168.0.93 ߧߢ 302 0 0 15
2012-02-04 22:09:41 10.1.0.16 GET /contactus.aspx - 80 - 192.168.0.93 𧧰"" 302 0 0 0

TFS 2010 restored database, TfsJobAgent cannot connect to database

Posted: 05 May 2022 11:00 AM PDT

I have a Team Foundation Server 2010 setup, in which I have TFS on one server and the databases on another. Recently the server with the databases completely crashed and I had to replace it. I restored the databases from backups, and changed the web.config file for tfs to point to the new server and most everything seems to be working.

I can connect to tfs, do checkouts/checkins, create work items, etc. However, I cannot queue builds, and the event log is filling up with errors saying that TfsJobAgent cannot connect to the database.

I was wondering if anyone knew if there was a way to configure TfsJobAgent to point to my new database. The event log details are below (TCOSRV1 is the machine running tfs, not the database server). Any help would be greatly appreciated.

TF53010: The following error has occurred in a Team Foundation component or extension:  Date (UTC): 7/11/2013 4:55:27 PM  Machine: TCOSRV1  Application Domain: TfsJobAgent.exe  Assembly: Microsoft.TeamFoundation.Framework.Server, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a; v2.0.50727  Service Host:   Process Details:    Process Name: TFSJobAgent    Process Id: 3520    Thread Id: 5304    Account name: NT AUTHORITY\NETWORK SERVICE    Detailed Message: There was an error during job agent execution. The operation will be retried. Similar errors in the next five minutes may not be logged.  Exception Message: TF246017: Team Foundation Server could not connect to the database. Verify that the server that is hosting the database is operational, and that network problems are not blocking communication with the server. (type DatabaseConnectionException)    Exception Stack Trace:    at Microsoft.TeamFoundation.Framework.Server.TeamFoundationSqlResourceComponent.TranslateException(Int32 errorNumber, SqlException sqlException, SqlError sqlError)     at Microsoft.TeamFoundation.Framework.Server.TeamFoundationSqlResourceComponent.TranslateException(SqlException sqlException)     at Microsoft.TeamFoundation.Framework.Server.TeamFoundationSqlResourceComponent.MapException(SqlException ex, QueryExecutionState queryState)     at Microsoft.TeamFoundation.Framework.Server.TeamFoundationSqlResourceComponent.HandleException(SqlException ex)     at Microsoft.TeamFoundation.Framework.Server.TeamFoundationSqlResourceComponent.Execute(ExecuteType executeType, CommandBehavior behavior)     at Microsoft.TeamFoundation.Framework.Server.ExtendedAttributeComponent.ReadDatabaseAttribute(String attributeName)     at Microsoft.TeamFoundation.Framework.Server.DatabaseConnectionValidator.ValidateDatabaseInstanceStamp(String configConnectionString, Guid configInstanceId)     at Microsoft.TeamFoundation.Framework.Server.DatabaseConnectionValidator.ValidateApplicationConfiguration(String configConnectionString, Guid configInstanceId, List`1 sqlInstances, String analysisInstance, String analysisDatabaseName, Boolean ignoreAnalysisDatasourceUpdateErrors, Boolean autoFixConfiguration, Boolean fullValidation, DatabaseVerificationDatabaseTypes continueOnMissingDatabaseTypes)     at Microsoft.TeamFoundation.Framework.Server.ApplicationServiceHost..ctor(Guid instanceId, String connectionString, String physicalDirectory, String plugInDirectory, String virtualDirectory, Boolean failOnInvalidConfiguration)     at Microsoft.TeamFoundation.Framework.Server.JobApplication.SetupInternal()     at Microsoft.TeamFoundation.Framework.Server.JobServiceUtil.RetryOperationsUntilSuccessful(RetryOperations operations)    Inner Exception Details:    Exception Message: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. 
(provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (type SqlException)  SQL Exception Class: 20  SQL Exception Number: -1  SQL Exception Procedure:   SQL Exception Line Number: 0  SQL Exception Server:   SQL Exception State: 0  SQL Error(s):    Exception Data Dictionary:  HelpLink.ProdName = Microsoft SQL Server  HelpLink.EvtSrc = MSSQLServer  HelpLink.EvtID = -1  HelpLink.BaseHelpUrl = http://go.microsoft.com/fwlink  HelpLink.LinkId = 20476      Exception Stack Trace:    at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)     at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)     at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, Boolean withFailover)     at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, Boolean withFailover)     at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString connectionOptions, SqlCredential credential, TimeoutTimer timeout)     at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(TimeoutTimer timeout, SqlConnectionString connectionOptions, SqlCredential credential, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance)     at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions)     at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, DbConnectionOptions userOptions)     at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnectionPool pool, DbConnectionOptions options, DbConnectionPoolKey poolKey, DbConnectionOptions userOptions)     at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnectionOptions userOptions)     at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnectionOptions userOptions)     at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)     at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)     at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)     at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)     at 
System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)     at System.Data.SqlClient.SqlConnection.Open()     at Microsoft.TeamFoundation.Framework.Server.TeamFoundationSqlResourceComponent.Execute(ExecuteType executeType, CommandBehavior behavior)  

Changing physical path on IIS through appcmd isn't activated

Posted: 05 May 2022 12:05 PM PDT

We have come across an issue on IIS 7.5 where we have a simple deploy system which consists of the following:

Create a zip-file of new webroot, consisting of three folders:

Api
Site
Manager

This is unzipped into a new folder (let's say we call it "SITE_REV1"), and contains a script which invokes the following (one for each webroot):

C:\Windows\system32\inetsrv\appcmd set vdir "www.site.com/" -physicalPath:"SITE_REV1\Site"

This usually works, 9 times out of 10. In some cases, the webroot seems to be updated correctly (if I inspect the basic settings in IIS Manager, the path looks correct), but the running site in question is actually still pointed to the old location. The only way we have managed to "fix it" is by running an IIS reset. It isn't enough to recycle the application pool in question.

Sometimes it even seems necessary to reboot, but I'm not 100% sure that is accurate (it hasn't always been me who fixed the problem).

I rewrote the script using Powershell and the Web-Administration module, hoping that there was a glitch in appcmd, but the same issue occurs.

Set-ItemProperty "IIS:\Sites\www.site.com" -Name physicalPath -Value "SITE_REV1\Site"

Has anyone experienced something like this? Does anyone have a clue what's going on, and what I can try to do to prevent this issue? Doing an IIS reset is not really a good option for us, because that would affect all sites on the server every time we try to deploy changes to a single site.

EDIT: We have identified that a start/stop of the site (NOT the application pool) in IIS Manager resolves the erroneous physical path, but if I stop the site using appcmd, change the physical path, and then start it, I still suffer from the same issue. I'm drawing a blank...

top command itself occupying high CPU usage

Posted: 05 May 2022 11:00 AM PDT

My system is SUSE 10 and I observe that top occupies 57% CPU usage when I use it.

top with 57% CPU

I don't have too many processes:

ps -eLf | wc -l
106

Here are top's stats:

cat /proc/2913/stat
2913 (top) R 2879 2913 2879 34819 2913 8396800 411 0 0 0 60648 199580 0 0 17 0 1 516504552 4811013274 2383872 285 4294967295 134512640 134596384 3215474448 3215470376 3085449998 0 0 0 138047495 0 0 0 17 3 0 0 0

cat /proc/2913/status
Name:   top
State:  R (running)
SleepAVG:       79%
Tgid:   2913
Pid:    2913
PPid:   2879
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 256
Groups: 0
VmPeak:     2360 kB
VmSize:     2328 kB
VmLck:         0 kB
VmHWM:      1144 kB
VmRSS:      1140 kB
VmData:      260 kB
VmStk:        84 kB
VmExe:        84 kB
VmLib:      1788 kB
VmPTE:        16 kB
Threads:        1
SigQ:   2/16383
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000000000
SigCgt: 00000000083a7007
CapInh: 0000000000000000
CapPrm: 00000000fffffeff
CapEff: 00000000fffffeff
Cpus_allowed:   00000000,00000000,00000000,0000000f
Mems_allowed:   1

## cat /proc/2913/statm
582 285 213 21 0 86 0

What can I do next to find the reason why the top command is using so much CPU?
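Not from the original question, but a hedged next step, assuming strace exists on that SUSE 10 box: profile what top itself is doing, since a busy top usually spends its time in syscalls while reading /proc.

# summarize the syscalls top is making; press Ctrl-C after ~10 seconds for the totals
strace -c -p 2913

# also check whether the CPU usage scales with the refresh interval
top -d 5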

Smoothwall Express interface issues

Posted: 05 May 2022 10:03 AM PDT

I have a SmoothWall Express box that is currently configured with a Green and Purple interface. Both interfaces are in the same /24 subnet (which seems odd to me). The green interface (address of .254) has a DHCP server that is pushing addresses from .1 to .100 and the purple interface (.253) is pushing addresses from .101 to .120. Every machine here is trusted, and as such is connected to the green interface via a wired connection or wireless APs. Nothing is connected at all (port is physically empty, traffic graphs show no activity) to the purple interface. However, every machine here is pulling addresses from the purple interface. So the question boils down to, how do I remove/stop my machines from pulling from the purple dhcp interface? Also, shouldn't the purple interface (if we were using it for guest Wifi or something) be on a different subnet (i.e. 192.168.100.0/24 instead of 192.168.1.0/24 with all the trusted machines)?
