Saturday, June 19, 2021

Recent Questions - Server Fault

Is there a (forensic) way to list network or internet addresses accessed in the past by a Windows 10 system?

Posted: 19 Jun 2021 05:32 AM PDT

There is this allegedly trojan-infected Windows 10 PC described in more detail here.

Below is a scan result of the PC (disk image) in question using Autopsy/The Sleuth Kit: [screenshot: Autopsy/The Sleuth Kit scan results]

Is there any way to check for online activity (other than browser history) of any process on this PC during a specific time period in the past?
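One artifact that may help, assuming it survived on this image: Windows 8 and later keep the System Resource Usage Monitor (SRUM) database at Windows\System32\sru\SRUDB.dat, which records per-application network data usage for roughly the last month. A sketch of dumping it with libesedb (the mount path is hypothetical, and the exact table names vary by Windows build):

# SRUDB.dat is an ESE database; esedbexport (from libesedb) dumps its tables to files
esedbexport -t srum_out /mnt/image/Windows/System32/sru/SRUDB.dat
ls srum_out.export/   # look for the network data usage table; app IDs map to executables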

AWS Cloudfront is returning different responses for identical queries

Posted: 19 Jun 2021 04:47 AM PDT

I have an API that is supposed to accept a website header supplied by the client and respond differently depending on which website the request is for.

I have whitelisted this header in AWS Cloudfront. To my understanding this should mean that Cloudfront includes it in the cache key.

When I repeat identical curl calls to my endpoint I get different results back from Cloudfront.

The response headers always indicate a cache hit from Cloudfront, but the response body is sometimes for the wrong website. In other words, Cloudfront does not appear to be including the website header in the cache key and is returning the response body for a different request key.

Here is a sample output from the script below:

$ tail -f one.txt
23245  -  x-cache: Hit from cloudfront
56138  -  x-cache: Hit from cloudfront
56138  -  x-cache: Hit from cloudfront
56138  -  x-cache: Hit from cloudfront
23245  -  x-cache: Hit from cloudfront

Notice that the "total" is different (this is a JSON key in the response I get back).

I'm using a script to repeat the calls, so I expect the requests to be identical.

Why is Cloudfront sometimes returning the wrong response?

I am certain that the origin is always returning the correct response for the website header. I've verified this by running my script against the origin without Cloudfront in front of it, and I've also verified that my origin is not being hit when I run this script against Cloudfront.

How can I debug this further? I thought that perhaps I could use the "via" response header to see if one particular edge node was always returning the wrong response, but that didn't work.

#!/bin/bash

files=("one-totals.txt" "two-totals.txt")
for i in "${files[@]}"
do
    rm $i >& /dev/null
done

callWebsiteOne () {
    curl --location --request GET 'https://my-api.example.com' \
        --header 'Authorization: Bearer 123abc' \
        --header 'website: one' \
        -i > temp.txt
    # strip everything before the JSON body, then pull out the "total" key
    total=$(cat temp.txt | sed s/[^{]*// | jq -r .total)
    edgenode=$(cat temp.txt | grep via:)
    echo $total " - " $edgenode >> one-edges.txt
    echo $total >> one-totals.txt
    rm temp.txt
}

callWebsiteTwo () {
    curl --location --request GET 'https://my-api.example.com' \
        --header 'Authorization: Bearer 123abc' \
        --header 'website: two' \
        -i > temp.txt
    total=$(cat temp.txt | sed s/[^{]*// | jq -r .total)
    edgenode=$(cat temp.txt | grep via:)
    echo $total " - " $edgenode >> two-edges.txt
    echo $total >> two-totals.txt
    rm temp.txt
}

callRandomWebsite () {
    # roughly half the iterations hit one of the two endpoints; the rest are skipped
    random=$((RANDOM % 4 + 1))
    case $random in
        1)
            callWebsiteOne
            ;;
        2)
            callWebsiteTwo
            ;;
    esac
}

for value in {1..100}
do
    callRandomWebsite
    sleep 0.25s
done

for i in "${files[@]}"
do
    unique=$(sort $i | uniq | wc -l)
    total=$(cat $i | wc -l)
    echo $i " has " $unique " unique values in " $total " total lines"
done
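If the edge-node theory is worth pursuing further, CloudFront also sets an x-amz-cf-pop response header identifying the point of presence that answered, which may be easier to correlate than via. A sketch using the same endpoint and headers as the script above:

curl -s -o /dev/null -D - 'https://my-api.example.com' \
    --header 'Authorization: Bearer 123abc' \
    --header 'website: one' | grep -i 'x-amz-cf-pop\|x-cache'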

Recommended way to ping from within a container

Posted: 19 Jun 2021 04:11 AM PDT

Recently I was configuring some containers with non-typical networking settings (at least non-typical for me) and ran into some problems (I finally got it working, so no worries). One thing that made the debugging harder is that many public images don't contain the ping utility, which is normal for Docker images (the "less is better" policy).

When a container has an internet connection but some other networking problem (for example with the local network), you can just install ping temporarily. But if there is no internet access, what would be the easiest way to debug networking from inside the container? Is there some built-in Docker functionality, or should I use the build scripts (Dockerfile + compose) to add the ping utilities at image creation time?
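One option for the no-internet case is to leave the image untouched and attach a throwaway debugging container to the target's network namespace instead. A sketch, assuming the nicolaka/netshoot image is already available locally and the container under test is named my_app:

# the debug container shares my_app's exact network stack, so ping/dig/traceroute
# run from here see exactly what my_app sees
docker run -it --rm --net container:my_app nicolaka/netshoot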

How to benchmark and analyze a network protocol prototype?

Posted: 19 Jun 2021 03:16 AM PDT

We are currently working with an academic network protocol that modifies and partly encrypts IPv6 packets and establishes circuits to allow sourceless routing.

We got the prototype running, and it works with IPv6 messages if we put the message payload directly into the IP packet payload (e.g. to send a hello world). We cannot, however, use well-established tools such as ping or iperf3, because the messages reach the destination but no replies are sent.

We are wondering which features of the prototype we can meaningfully benchmark. As far as I can see, it does not make any sense to benchmark packet loss, as the protocol itself does not introduce causes of packet loss other than a node on the route being taken offline. It also does not really seem to make sense to measure data throughput, as this is governed by the link between the two parties. The protocol does not introduce sources of jitter either, because all messages are handled the same way, so this too would be a network-related attribute. Latency is likewise mostly a network matter, but what we could measure is the time the prototype needs to modify a message. Currently we are running it on VMs. It uses iptables rules to intercept packets and pass them to nfqueue, which modifies the packets using Python.

I proposed doing a theoretical analysis instead, where we calculate the additional bytes added on top of regular IPv6 packets, try to estimate the additional performance cost (how?), and try to narrow down which attacks are feasible and which are not, compared to regular IPv6.

  • What features make sense to benchmark?
  • Apart from packet size and performance costs, what else could be theoretically analyzed?

P.S.: I hope this fits into this category, since it does not seem to fit into Network Engineering.

Cannot connect to a Google Cloud TPU using SSH (PuTTY) on Windows

Posted: 19 Jun 2021 02:43 AM PDT

I have a Google v3-8 TPU and I can't figure out how to connect to it using SSH on Windows. I followed every guide I could find, but the connection just times out.

What I tried (among other things): 1. DOS command line: gcloud config set compute/zone europe-west4-a

gcloud config set account myusername@gmail.com

gcloud config set project myprojectname

gcloud services enable tpu.googleapis.com

gcloud alpha compute tpus tpu-vm ssh --zone europe-west4-a vm_name

This just opens PuTTY, which then times out.

  1. Created a public/private key pair using PuTTYgen and added the public key in (2a) the Google Cloud Platform website > Compute Engine > Metadata > SSH keys. Added the same public key to ~/.ssh/google_compute_engine.pub and the private key to ~/.ssh/google_compute_engine on the TPU VM using the GCP web console. (The key files were empty.) When connecting with PuTTY, it times out.

  2. Did step 1, but with this as the last line: gcloud alpha compute tpus tpu-vm ssh --zone europe-west4-a vm_name --ssh-key-file=C:\Users\my_username\Documents\putty_keys\gc (there are three files; gcloud adds the extensions: gc.pub with the public key, gc with the private key, and gc.ppk). PuTTY does not connect.

  3. Did the reverse: created the SSH keys on the TPU server using ssh-keygen, renamed them to ~/.ssh/google_compute_engine.pub (public) and ~/.ssh/google_compute_engine (private), copy-pasted them into PuTTYgen to convert them to Windows PuTTY keys, and added the keys using (2a). Connected PuTTY to the VM's outside IP, and nothing.

  4. Created the SSH keys on the TPU server using ssh-keygen, added them to ~/.ssh/authorized_keys and installed them with ssh-copy-id (entering the password), then copied the keys into PuTTYgen and used them to connect to the VM IP.

I tried more things to connect, but to no avail.

What is the right way to connect to a TPU VM? Note that it's not a Compute Engine VM, it's a TPU: not the same settings as a VM in the GCP console, so there is no nice "add SSH key" in the edit settings, because there ARE NO edit settings for it in the GCP console.

I'm at a loss.

PS: I threw away the TPU instance and recreated it after each step, to make sure I wasn't messing things up too badly.
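Since every variant above times out before any key is even tried, the first thing worth ruling out is network reachability on TCP 22. A sketch of the checks (the rule name is hypothetical; tighten source-ranges in practice):

gcloud compute firewall-rules list   # is there an ingress rule admitting tcp:22?

gcloud compute firewall-rules create allow-ssh \
    --network=default --allow=tcp:22 --source-ranges=0.0.0.0/0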

Force all traffic through VPN

Posted: 19 Jun 2021 02:49 AM PDT

Goal: force all traffic through VPN only.

Client: Windows in a VM
VPN: OpenVPN

I delete the 0.0.0.0 route on the client and add a route to my VPN server's address with my LAN default gateway (192.168.1.1) as the gateway. So, in practice, when I turn on OpenVPN it attempts to connect to the server IP, which has a route through my local LAN gateway; this should result in a connection and a new VPN tunnel being established. And when the VPN connection drops, all traffic stops.

However, I am unable to connect to the VPN server, even though I can ping it. I was able to replicate the same scenario in a Windows VM with the SoftEther client and a third-party VPN, and could connect successfully. What am I doing wrong?
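For comparison, OpenVPN can express this policy by itself instead of relying on hand-edited routes: redirect-gateway def1 overrides the default route through the tunnel and automatically adds a host route to the VPN server via the existing gateway. A minimal client-side sketch (standard OpenVPN directives; the server address is a placeholder):

# client config
redirect-gateway def1                              # default route via the tunnel (0.0.0.0/1 + 128.0.0.0/1)
route 203.0.113.10 255.255.255.255 net_gateway     # explicit host route to the server, if needed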

Redirect URL to the CDN storage

Posted: 19 Jun 2021 02:34 AM PDT

I have this folder structure on my nginx server. Image URLs:

https://example.com/wp-content/uploads/image.jpg

https://example.com/wp-content/uploads/WP-data/data/employees/per-005/9032.jpg

I want to redirect them to my CDN and always fetch the pictures from there:

https://cdn.example.com/image.jpg

https://cdn.example.com/WP-data/data/employees/per-005/9032.jpg

I tried the setting below, but it doesn't work.

location ~ ^(/wp-content/themes|/wp-content/uploads)/.*\.(jpe?g|gif|css|png|js|ico|pdf|m4a|mov|mp3)$ {
    rewrite ^ http://cdn.example.com$request_uri? permanent;
    access_log off;
}
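One thing to note: $request_uri keeps the full original path, so the rewrite above would send clients to https://cdn.example.com/wp-content/uploads/image.jpg rather than https://cdn.example.com/image.jpg as in the examples. A sketch that strips the prefix by capturing only the remainder (assuming the CDN really does serve files without the /wp-content/... prefix):

location ~ ^/wp-content/(?:themes|uploads)/(.+\.(?:jpe?g|gif|css|png|js|ico|pdf|m4a|mov|mp3))$ {
    access_log off;
    return 301 https://cdn.example.com/$1;
}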

No Internet connection on a VPS

Posted: 19 Jun 2021 05:25 AM PDT

I have a VPS hosted at cba.pl. The problem is that I can't connect to the internet from it; I'm only able to use the terminal on their homepage, and commands like wget and curl give connection errors as well.

Now my question is: how can I get the internet working again?

(P.S. This answer doesn't work for me because I haven't installed ifupdown yet.)
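Without ifupdown, iproute2 is usually still present, so basic triage from the web terminal could look like this (a sketch):

ip addr show           # does the interface have an address?
ip route show          # is there a default route?
ping -c 3 8.8.8.8      # raw IP connectivity, bypassing DNS
ping -c 3 debian.org   # name resolution on top of it
cat /etc/resolv.conf   # configured resolvers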

High availability VM Windows server

Posted: 19 Jun 2021 02:55 AM PDT

How can I create a highly available VM running the Active Directory and DNS roles? If one server goes down, I would like it to fail over to the other node, so that clients within the domain can still resolve IP addresses.

Update MySQL replication server position

Posted: 19 Jun 2021 03:50 AM PDT

I have been trying to set up replication of a large DB (90 GB).

I created a backup using mysqldump --single-transaction and restored it on the replication server. I then enabled replication, but I accidentally clicked "Reset slave" in phpMyAdmin and it set the position back to basically 0.

So, if my understanding is correct, it's trying to rebuild the DB on the replication server from the beginning.

Because it kept erroring out on duplicates that already existed in the DB (because I restored the backup first), I temporarily added

slave-skip-errors=1062  skip-slave-start  

to the my.ini file to skip the duplicates. But even after this, it's about 4 months behind the master (9983704 seconds).

Is there a way I can move the position on the slave forward so it only rebuilds from the last few days?
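If the dump was taken with --master-data (or the coordinates were noted with SHOW MASTER STATUS at backup time), the slave can be pointed at exactly the binlog file and position the backup corresponds to, instead of replaying from zero. A sketch; the file name and position are hypothetical placeholders:

-- on the replica
STOP SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=45678;
START SLAVE;
SHOW SLAVE STATUS\G   -- watch Seconds_Behind_Master shrink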

How to install Roundcube on Debian 10?

Posted: 19 Jun 2021 04:45 AM PDT

I need to install Roundcube on my web server, and I'm a bit of a newbie at this type of thing, so I need some help. I was reading some guides I found, but none matched what I need, or at least I couldn't get any help from them: some of them don't even use Apache but nginx (I already have the Apache server and need to use it), or they put everything (web server and mail server) on the same server, which is not my situation. My situation is as follows.

I need to install Roundcube on my Apache web server (web.server.al1), while using the mail services of another server (mail.server.al1) running Postfix and Dovecot. So I need to install MariaDB again on my web server, right? (I already have a DB for the web server, but it's better to have the Roundcube DB on the same web server, right? I don't know what is best.)

Last of all, I of course need to be able to use the mailboxes I have on mail.server.al1 (user@domain1.al1 and user2@final.fi1).

That's all. I bet my question is too vague, but if anyone can help me I would be really grateful.

Thanks for reading, and sorry for my rough English; I'm a bit rusty when writing long texts in another language.
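Since the mail services live on a separate host, the core of the setup is just pointing Roundcube's IMAP and SMTP at mail.server.al1; the database can live on the web server next to the existing one. A sketch of the relevant config/config.inc.php lines, assuming Roundcube 1.x and that mail.server.al1 offers IMAPS and submission (credentials and DB name are hypothetical):

$config['default_host'] = 'ssl://mail.server.al1';
$config['smtp_server']  = 'tls://mail.server.al1';
$config['smtp_port']    = 587;
$config['db_dsnw']      = 'mysql://roundcube:secret@localhost/roundcubemail';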

EDIT:

I already installed Roundcube and it works, but with one failure: even though I can receive emails and log in with any user, when I try to send one I get the message "enter at least one recipient".

Piping SSH to wireshark on windows

Posted: 19 Jun 2021 03:51 AM PDT

In my day-to-day operations I frequently need to run tcpdump on remote servers, and it's a pain to save the output to a file and then have to move the file to my laptop to analyze it in Wireshark.

I was exploring the command below, and it works fine on Linux.

ssh <remote_host> sudo tcpdump -vv -i eth0 -U -w - | wireshark -k -i -  

But, unfortunately, my company-provided work laptop runs Windows, and they don't allow me to change to another OS. Given this restriction, I was trying to achieve the same result on Windows...

If I execute the following command on Windows in PowerShell

ssh <remote_host> sudo tcpdump -vv -i eth0 -U -w - | 'C:\Program Files\Wireshark\Wireshark.exe' -k -i -  

I get this error

At line:1 char:87
+ ...  -i eth0 -U -w - | 'C:\Program Files\Wireshark\Wireshark.exe' -k -i -
+                                                                   ~~
Unexpected token '-k' in expression or statement.
At line:1 char:44
+ ...  -i eth0 -U -w - | 'C:\Program Files\Wireshark\Wireshark.exe' -k -i -
+                        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Expressions are only allowed as the first element of a pipeline.
At line:1 char:90
+ ...  -i eth0 -U -w - | 'C:\Program Files\Wireshark\Wireshark.exe' -k -i -
+                                                                      ~~
Unexpected token '-i' in expression or statement.
    + CategoryInfo          : ParserError: (:) [], ParentContainsErrorRecordException
    + FullyQualifiedErrorId : UnexpectedToken

If I execute the wireshark command without the ssh part I get the same error, but if I execute it like this

& 'C:\Program Files\Wireshark\Wireshark.exe' -k -i -  

It opens wireshark and waits for data input. With this in mind I tried to change the command to

ssh <remote_host> sudo tcpdump -vv -i eth0 -U -w - | & 'C:\Program Files\Wireshark\Wireshark.exe' -k -i -  

This way the ssh command gets executed and tcpdump starts on the remote host, but Wireshark never starts. What am I doing wrong? Why doesn't the piped command that is most similar to the Linux one work on Windows? Is piping different there?
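One thing that may be worth ruling out: Windows PowerShell pipes strings/objects rather than raw bytes between native programs, which can stall or mangle a binary pcap stream even once the parser error is worked around with &. Classic cmd.exe pipes bytes directly, so the closest analogue of the Linux pipeline would be run from cmd (a sketch, untested here):

ssh <remote_host> "sudo tcpdump -vv -i eth0 -U -w -" | "C:\Program Files\Wireshark\Wireshark.exe" -k -i -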

Juniper EX4200 Stack with PFSense DHCP (Discover/Offer Loop)

Posted: 19 Jun 2021 12:36 AM PDT

I'm currently struggling with my Juniper Switch Stack.

The topology looks like this: [topology diagram]

The Client Ports on the Stack are configured as tagged-access with dot1x (multiple supplicant) and they switch according to the Radius authentication. This works without a problem and VLANs get correctly assigned.

The two PFSense firewalls provide one DHCP instance for every VLAN in a failover configuration, with a CARP IP on the same subnet as the VLAN, so no DHCP relay is needed.

Windows clients can obtain an IP and work correctly but Linux clients and PXE boot do not.

From tcpdump and Wireshark we see a DHCP Discover/Offer loop on the Linux clients: the offer reaches the client, but the client never sends a DHCP Request. We tried multiple Linux derivatives and PXE implementations, but without any luck. We also compared the Wireshark captures from Windows and Linux and there is absolutely no difference.

Any suggestions on how to track down the problem?

Thanks in advance.

Update:

Just to add more information.

The IP assignment flow is like this:

  1. Client starts up (NIC connects to Switch stack)
  2. Switch authenticates the Client against the Radius Server
  3. Radius Server answers with Accept and VLAN ID 940
  4. Switch stack assigns VLAN 940 to the Port the Client is connecting in multiple supplicant mode
  5. Clients sends out DHCP Discover
  6. DHCP servers (both PFSense) respond with an offer.
  7. Client sends a DHCP Request
  8. DHCP server sends a DHCP ACK

So obviously steps 1-6 are working. The client gets assigned to VLAN 940 through the Radius server, sends out a DHCP Discover, and both PFSense boxes have a DHCP instance configured for VLAN 940 (IP range 10.94.0.1-200/24) and send an offer.

This is a tcpdump from one of the PFSense firewalls, in case it helps.

18:55:25.538580 IP (tos 0x0, ttl 20, id 3, offset 0, flags [none], proto UDP (17), length 576)
    0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from 00:19:99:f7:3d:23 (oui Unknown), length 548, xid 0x99f73d23, secs 18, Flags [Broadcast] (0x8000)
      Client-Ethernet-Address 00:19:99:f7:3d:23 (oui Unknown)
      Vendor-rfc1048 Extensions
        Magic Cookie 0x63825363
        DHCP-Message Option 53, length 1: Discover
        Parameter-Request Option 55, length 36:
          Subnet-Mask, Time-Zone, Default-Gateway, Time-Server
          IEN-Name-Server, Domain-Name-Server, RL, Hostname
          BS, Domain-Name, SS, RP
          EP, RSZ, TTL, BR
          YD, YS, NTP, Vendor-Option
          Requested-IP, Lease-Time, Server-ID, RN
          RB, Vendor-Class, TFTP, BF
          Option 128, Option 129, Option 130, Option 131
          Option 132, Option 133, Option 134, Option 135
        MSZ Option 57, length 2: 1260
        GUID Option 97, length 17: 0.72.178.216.253.99.205.17.226.190.154.221.134.53.14.178.59
        ARCH Option 93, length 2: 0
        NDI Option 94, length 3: 1.2.1
        Vendor-Class Option 60, length 32: "PXEClient:Arch:00000:UNDI:002001"
        END Option 255, length 0
        PAD Option 0, length 0, occurs 200

18:55:26.546900 IP (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 334)
    10.94.0.253.bootps > 255.255.255.255.bootpc: [udp sum ok] BOOTP/DHCP, Reply, length 306, xid 0x99f73d23, secs 18, Flags [Broadcast] (0x8000)
      Your-IP 10.94.0.5
      Server-IP 10.91.0.1
      Client-Ethernet-Address 00:19:99:f7:3d:23 (oui Unknown)
      file "pxelinux.0"
      Vendor-rfc1048 Extensions
        Magic Cookie 0x63825363
        DHCP-Message Option 53, length 1: Offer
        Server-ID Option 54, length 4: 10.94.0.253
        Lease-Time Option 51, length 4: 600
        Subnet-Mask Option 1, length 4: 255.255.255.0
        Default-Gateway Option 3, length 4: 10.94.0.254
        Domain-Name-Server Option 6, length 8: 10.0.2.1,10.0.2.2
        Domain-Name Option 15, length 9: "domain.intra"
        NTP Option 42, length 4: 10.94.0.254
        TFTP Option 66, length 9: "10.91.0.1"
        END Option 255, length 0

18:55:26.547180 IP (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 334)
    10.94.0.252.bootps > 255.255.255.255.bootpc: [udp sum ok] BOOTP/DHCP, Reply, length 306, xid 0x99f73d23, secs 18, Flags [Broadcast] (0x8000)
      Your-IP 10.94.0.104
      Server-IP 10.91.0.1
      Client-Ethernet-Address 00:19:99:f7:3d:23 (oui Unknown)
      file "pxelinux.0"
      Vendor-rfc1048 Extensions
        Magic Cookie 0x63825363
        DHCP-Message Option 53, length 1: Offer
        Server-ID Option 54, length 4: 10.94.0.252
        Lease-Time Option 51, length 4: 600
        Subnet-Mask Option 1, length 4: 255.255.255.0
        Default-Gateway Option 3, length 4: 10.94.0.254
        Domain-Name-Server Option 6, length 8: 10.0.2.1,10.0.2.2
        Domain-Name Option 15, length 9: "domain.intra"
        NTP Option 42, length 4: 10.94.0.254
        TFTP Option 66, length 9: "10.91.0.1"
        END Option 255, length 0

The client sees exactly the same thing but simply ignores it. Does it look wrong?

It just works if I do the same with a Linux VM on the server-side switches (where the Radius server is connected). So I'm pretty sure the problem is somewhere within the Juniper switch stack.

Update 2:

My assumption about a problem in the switch stack was right. It seems that "tagged-access" port mode does not behave as it should; switching to "access" port mode solved the problem. But it doesn't make much sense to me, as "access" mode shouldn't be able to handle multiple supplicants in different VLANs, yet it obviously does.
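For reference, the change that fixed it corresponds to something like this on a non-ELS EX4200 (a sketch; the interface name is hypothetical):

set interfaces ge-0/0/10 unit 0 family ethernet-switching port-mode access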

AWS App Runner deploy failing

Posted: 19 Jun 2021 04:55 AM PDT

I'm playing with AWS App Runner and I'm having issues with deployments/service updates. Half of the deployments fail for no apparent reason. A few minutes after a deployment is initiated, the app is actually deployed, but then, after 20-30 minutes, it rolls back to the previous version.

I've been trying to find something useful in CloudWatch, but no luck; it doesn't log anything (hey, Amazon, it would be really useful to log the reason why a deployment failed) except for "deployment started" and "deployment failed". The health check endpoint is /, there's an apache2 instance in the container, and it returns 200. I even set the health check timeout to a very generous 5 s with 10 retries, but it didn't help.

I feel like I'm out of options. Nothing useful in the logs, the app inside the container is healthy.

Different drivers/support for 10GbE SFP+ copper versus fiber?

Posted: 19 Jun 2021 05:33 AM PDT

I'm looking at purchasing a storage array that uses 10/25 GbE SFP+ connections for the user-facing frontend network. I already have a 10 Gb switch with 24 RJ45 ports and 4 SFP+ ports. I assumed that the SFP+ ports wouldn't care whether the cable coming out of them was copper or fiber, so I thought we could just get SFP+ RJ45 transceivers for the storage array nodes. The vendor is saying that it won't work, because their storage nodes don't support SFP+ to copper connections. Does that sound right? Would they really have/need a different SFP+ driver if the connection is copper instead of fiber?

This Apple ID can't be used to make purchases - InTune/Apple Business Manager

Posted: 19 Jun 2021 12:03 AM PDT

We have just integrated InTune with Apple Business Manager and turned on the domain Federation which now allows our Azure AD users to log into Apple Devices with their work email address. We have hit an issue with this in that the users can no longer download apps from the App Store, or through the InTune Company portal. The users are presented with a message 'This Apple ID can't be used to make purchases'.

Hoping to get some assistance on this one. The main annoyance is that the Company Portal cannot be downloaded from the App Store without using a personal Apple ID. The secondary annoyance is that once the Company Portal is installed and the device is enrolled, the apps configured through InTune also fail.

Bad Gateway The proxy server received an invalid response from an upstream server

Posted: 19 Jun 2021 05:04 AM PDT

I am trying to set up a webapp using Apache (Server version: Apache/2.4.38 (Unix)) with SSL and Tomcat (Apache Tomcat/8.5.41).

Three Tomcat instances are set up as str1, str2, and str3 with the settings below, each with its own jvmRoute. server.xml:

<Connector port="8988" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
<Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost" jvmRoute="str1">

In httpd.conf, with the SSL module enabled and pointing to httpd-ssl.conf:

Listen 80
ServerName abcd.example.com:80
RewriteEngine On
RewriteCond %{SERVER_PORT} =80
RewriteRule (.*) https://abcd.example.com/search [R=301,L]

In httpd-ssl.conf

Listen 443
SSLEngine on
ServerName abcd.example.com:443

In Proxy-balancer.conf:

ProxyPass /search balancer://stcluster/search
ProxyPassReverse /search balancer://stcluster/search
<Proxy balancer://stcluster>
    BalancerMember http://localhost:8988 loadfactor=1 route=str1
    BalancerMember http://localhost:8987 loadfactor=1 route=str2
    BalancerMember http://localhost:8986 loadfactor=1 route=str3
    ProxySet lbmethod=bybusyness
    ProxySet stickysession=JSESSIONID|jsessionid
    ProxySet timeout=300
</Proxy>
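With Apache 2.4, the proxy modules can be made to explain why they considered an upstream response invalid, which usually pinpoints a 502. A debugging sketch for httpd.conf (per-module LogLevel is standard 2.4 syntax):

LogLevel warn proxy:debug proxy_balancer:debug proxy_http:debug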

Make `systemd-run` fail gracefully if the unit name already exists?

Posted: 19 Jun 2021 04:37 AM PDT

I have found systemd-run, which allows one to run processes in the background as one-off "transient services". I always specify the service unit name with --unit $NAME. But if I have already run the systemd-run command and my process is running, then systemd-run will fail with a non-zero exit code. Is there any way to tell systemd-run to be more idempotent and not fail in this case?

Currently I'm doing:

 systemctl is-active $NAME || systemd-run --unit $NAME $COMMAND  

Is there a better way?

This is on Ubuntu 18.04, with the current systemd for that release (version 237?).
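A small variant of the same guard, with --quiet to suppress output and a reset-failed so that a previously failed unit of the same name doesn't block the new run (a sketch):

systemctl reset-failed "$NAME" 2>/dev/null
systemctl is-active --quiet "$NAME" || systemd-run --unit "$NAME" $COMMAND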

Email sending issue with “localhost.localdomain”

Posted: 19 Jun 2021 03:04 AM PDT

I have a DigitalOcean server running Ubuntu 14.04. I am having problems sending emails from the PHP mail() function (which uses sendmail internally).

I think the issue may be related to my hosts file config. Here is what I have in /etc/hosts at the moment:

127.0.0.1 localhost localhost.localdomain ip-xxx-xxx-xxx-xxx

And in /etc/hostname:

ip-xxx-xxx-xxx-xxx

(In the above two, I replaced the IP address digits with x)

Now I have a domain pointing to this server, let's call this mydomain.com.

So when my website mydomain.com sends an email, the email is going into the junk mail folder. I ran a test on https://www.mail-tester.com and one of the issues it flags up is:

[screenshot: mail-tester flags the server's localhost.localdomain hostname]

I have tried adding mydomain.com to the line above in the hosts file, but this results in the email arriving after a long delay or not arriving at all.

Here are the Received headers:

Received: from localhost.localdomain (unknown [xxx.xxx.xxx.xxx])
    (using TLSv1.2 with cipher xxx (256/256 bits))
    (No client certificate requested)
    by mail-tester.com (Postfix) with ESMTPS id xxx
    for <test-xxx@mail-tester.com>; Mon, 29 Apr 2019 18:35:18 +0200 (CEST)

Received: from localhost.localdomain (localhost [127.0.0.1])
    by localhost.localdomain (8.15.2/8.15.2/Debian-3) with ESMTP id xxx
    for <test-xxx@mail-tester.com>; Mon, 29 Apr 2019 17:35:18 +0100

Received: from mydomain.com (www-data@localhost)
    by localhost.localdomain (8.15.2/8.15.2/Submit) with SMTP id xxx
    for <test-xxx@mail-tester.com>; Mon, 29 Apr 2019 17:35:18 +0100

Can someone please advise what the issue may be and how to fix it?
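For reference, a common layout that makes sendmail identify itself with a real FQDN instead of localhost.localdomain, assuming the machine should present itself as mail.mydomain.com (a hypothetical name):

# /etc/hostname
mail

# /etc/hosts
127.0.0.1       localhost
xxx.xxx.xxx.xxx mail.mydomain.com mail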

How to get SCCM client to evaluate policy immediately after OS deployment?

Posted: 19 Jun 2021 05:04 AM PDT

I have an SCCM OS deployment task sequence that works just fine -- with one caveat that I can't seem to figure out...

Once the task sequence completes, it takes anywhere from 4-16 hours to process its client settings. This means that freshly-imaged computers do not get any of their deployments or AV settings during that time. The SCCM client will eventually sync up with the server and when it does, everything works normally after that. But because of this issue, we basically have to let computers sit overnight before we can deliver them to users. Reimaging a wonky computer out in the field isn't an option unless we do it right before the user goes home for the day, so that it will be ready for them when they get in to work the next morning.

One particular issue is the Endpoint Protection client. On Windows 10 there is no way (that I know of) to put Windows Defender into managed mode since it's a built-in component of the operating system. We absolutely have to wait for the SCCM client to do its thing in order for that to process exclusions correctly (which are required for a particular application we use).

Here are the relevant details:

  • We're using SCCM 1710
  • This happens on all our images, in both Windows 7 and Windows 10.
  • No amount of manually triggering client actions in the Config Manager control panel makes it apply policy any faster.
  • Rebooting the computer in question makes no difference.
  • Everything works normally after the client finally syncs up. Deployments, software updates, and policy evaluations are all processed on schedule after that.
  • SCCM management console shows the client as installed and active.
  • Our SCCM hierarchy only has one site server with the DB, DP, MP, and SUP roles all running on it. All the boundary groups are configured correctly.
  • AD system and user discovery happens every 24 hours, with delta discovery enabled at 5 minutes.
  • No maintenance windows are defined on any of our collections (we are mostly a 24/7 operation). All deployments are set to ignore maintenance windows anyway.
  • Logs don't have errors or anything unusual in them (although I'll admit I'm not really sure what I am looking for there).

Is there any way to force the client to download and apply policy during the imaging process?


UPDATE:

I have traced this issue down to the discovery process on the server side.

When looking at an affected machine in the SCCM console, it shows that the client is installed, active, and healthy BUT Resource Explorer shows no data for it. SCCM does not know anything about the device -- what OS is installed, what hardware it has, what software is installed, what OU it's in... nothing.

All our collections are based on queries, so until data becomes available to query on, SCCM has no idea what collection it should be in, and therefore nothing gets advertised to it. The client should be populating this data to the server during its discovery cycle, but for some reason it isn't.

If I force AD system rediscovery, force collection membership reevaluation, and manually trigger site actions on the client, then I can get SCCM to behave within an hour or so. But I'm really just mashing buttons randomly at this point. I don't know what combination of timing and ordering of actions is the magic sauce here.

  • AD system discovery is set to run every day with delta discovery set to 5 minutes.
  • Collection evaluations are set to run every 7 days, with delta discovery also enabled at 5 minutes.

But none of that makes sense, because it doesn't take a full 24 hours for the data to populate. If I image a machine first thing in the morning, it will usually be ready by late afternoon, but discovery doesn't run until the middle of the night.

Also:

  • If I re-image an existing machine with the SAME OS, I've had success getting the computer to evaluate correctly after an hour or so by simply triggering the site actions on the client. But this is because the DB already had a record for those computers, and none of the information about them changed.

So does that updated information help anyone?
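On the client side, the relevant cycles can also be kicked off from a task-sequence step rather than by hand, using the documented WMI trigger IDs (021 = machine policy retrieval, 003 = discovery data collection). Whether that shortens the server-side delay described above is exactly the open question, so treat this as a sketch:

WMIC /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000021}" /NOINTERACTIVE
WMIC /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000003}" /NOINTERACTIVE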

Google-authenticator with openvpn - AUTH: Received control message: AUTH_FAILED

Posted: 19 Jun 2021 02:00 AM PDT

I'm trying to set up MFA with Google authenticator for my OpenVPN setup on Ubuntu 16.04. Now OpenVPN works fine until I bring Google Authenticator into the mix.

My server.conf file reads as follows:

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.0.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
client-to-client
keepalive 10 120
tls-auth ta.key 0
key-direction 0
cipher AES-128-CBC
auth SHA256
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
log-append openvpn.log
verb 3
plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so openvpn
reneg-sec 0

My client.conf reads as follows:

client
dev tun
proto udp
remote 10.1.0.2 1194
resolv-retry infinite
nobind
user nobody
group nogroup
persist-key
persist-tun
remote-cert-tls server
comp-lzo
verb 3
cipher AES-128-CBC
auth SHA256
key-direction 1
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
auth-user-pass
auth-nocache
reneg-sec 0

Also, in /etc/pam.d I have cloned common-accounts to create an openvpn file with the following lines:

account requisite  pam_deny.so
account required   pam_permit.so
auth    requisite  pam_google_authenticator.so secret=/home/${USER}/.google_authenticator

Now I have created the necessary user profiles on Ubuntu for each client connecting to the VPN server, say client1, client2 and client3. Consider client1 trying to connect: I am logged in as client1 on the client-side system and try to connect to the VPN server.

I get the following:

Enter Auth Username: ******
Enter Auth Password: ************* (Password for local user profile? + OTP)

After this point, I get

[server] Peer Connection Initiated with [AF_INET]10.1.0.2:1194
SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
AUTH: Received control message: AUTH_FAILED
TCP/UDP: Closing socket
SIGTERM[soft,auth-failure] received, process exiting

Now, I wasn't sure why I was getting the AUTH_FAILED error. I have seen many different ways in which the username/password combination could be entered while connecting to the VPN server:

Method 1 - username ; password (local account password + OTP)
Method 2 - username ; password (local account password) +
           separate prompt section which asks for the Google Authenticator OTP
Method 3 - username ; OTP

I was never shown a separate Google Authenticator prompt asking for the OTP. So I tried method 1, and then method 2, expecting a Google Authenticator prompt that never showed up.

Question 1: What is the correct way to enter the Google Authenticator login credentials? Am I missing something here, which might be why I do not get prompted for the OTP separately?

Another thing that I observed is that

sudo systemctl status openvpn@server  

gives different results for the two login methods above.

I got these status messages while trying different password + OTP combinations.

openvpn(pam_google_authenticator)[15305]: Invalid verification code
openvpn(pam_google_authenticator)[15305]: Did not receive verification code from user
openvpn(pam_google_authenticator)[15305]: Failed to compute location of secret file

Question 2: Can someone explain what these status messages mean in terms of my login inputs?

Question 3: How can I get MFA up and running?

FYI, I used libpam-google-authenticator. I did not follow the method that required using a makefile and adding configuration parameters for PAM.
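Regarding the separate-prompt behaviour (method 2): OpenVPN has a client-side directive for exactly that; the OTP is collected in its own prompt and sent alongside the password. A sketch for client.conf (static-challenge is a standard OpenVPN 2.2+ directive; the trailing 1 makes the typed code visible):

static-challenge "Enter Google Authenticator code" 1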

Thanks!

Integrating Squid with Active Directory

Posted: 19 Jun 2021 06:01 AM PDT

I am trying to integrate Squid as a web proxy for my users in Active Directory. I followed the tutorial on the Squid site here. When I run the command:

msktutil -c -b "CN=Administrator" -s HTTP/proxy.example.com -k /etc/squid3/PROXY.keytab \  --computer-name SQUIDPROXY-K --upn HTTP/proxy.example.com --server acdc.example.com --enctypes 28 --verbose  

i got the error :

SASL/GSSAPI authentication started
Error: ldap_sasl_interactive_bind_s failed (Local error)
        additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Server not found in Kerberos database)
Error: ldap_connect failed.

The file /etc/squid3/PROXY.keytab is not populated either. I have searched all over the internet but I can't find anything about this problem.

Here are my config files:

/etc/krb5.conf

[logging]
default = FILE
kdc = FILE
admin_server = FILE

[libdefaults]
    default_realm = DOMAIN.COM
    dns_lookup_kdc = no
    dns_lookup_realm = no
    ticket_lifetime = 24h
    default_keytab_name = /etc/squid3/PROXY.keytab

; for Windows 2008 with AES
;    default_tgs_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc des-cbc-md5
;    default_tkt_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc des-cbc-md5
;    permitted_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc des-cbc-md5

[realms]
DOMAIN.COM = {
    default_domain = domain.com
    kdc = acdc.domain.com
    kdc = acdc2.domain.com
    admin_server = acdc.domain.com
}

[domain_realm]
    .domain.com = DOMAIN.COM
    domain.com = DOMAIN.COM

Here is the verbose error output:

 -- init_password: Wiping the computer password structure
 -- generate_new_password: Generating a new, random password for the computer account
 -- generate_new_password:  Characters read from /dev/urandom = 84
 -- create_fake_krb5_conf: Created a fake krb5.conf file: /tmp/.msktkrb5.conf-RoP6Kh
 -- reload: Reloading Kerberos Context
 -- finalize_exec: SAM Account Name is: SQUIDPROXY-K$
 -- try_machine_keytab_princ: Trying to authenticate for SQUIDPROXY-K$ from local keytab...
 -- try_machine_keytab_princ: Error: krb5_get_init_creds_keytab failed (Unsupported key table format version number)
 -- try_machine_keytab_princ: Authentication with keytab failed
 -- try_machine_keytab_princ: Trying to authenticate for host/routerdr from local keytab...
 -- try_machine_keytab_princ: Error: krb5_get_init_creds_keytab failed (Client not found in Kerberos database)
 -- try_machine_keytab_princ: Authentication with keytab failed
 -- try_machine_password: Trying to authenticate for SQUIDPROXY-K$ with password.
 -- create_default_machine_password: Default machine password for SQUIDPROXY-K$ is squidproxy-k
 -- try_machine_password: Error: krb5_get_init_creds_keytab failed (Preauthentication failed)
 -- try_machine_password: Authentication with password failed
 -- try_user_creds: Checking if default ticket cache has tickets...
 -- finalize_exec: Authenticated using method 4

 -- ldap_connect: Connecting to LDAP server: acdc.progresscall.al try_tls=YES
 -- ldap_connect: Connecting to LDAP server: acdc.progresscall.al try_tls=NO
SASL/GSSAPI authentication started
Error: ldap_sasl_interactive_bind_s failed (Local error)
        additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Server not found in Kerberos database)
Error: ldap_connect failed
--> Is your kerberos ticket expired? You might try re-"kinit"ing.
--> Is DNS configured correctly? You might try options "--server" and "--no-reverse-lookups".
 -- ~KRB5Context: Destroying Kerberos Context
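A "Server not found in Kerberos database" during the GSSAPI bind very often traces back to DNS, which is why msktutil itself suggests --no-reverse-lookups above. Some quick checks (a sketch; substitute the real DC name and IP):

kinit Administrator          # is there a valid TGT at all?
klist
dig +short acdc.domain.com   # forward lookup of the DC
dig +short -x <dc_ip>        # reverse lookup; a mismatch breaks the principal lookup
# then retry the same msktutil command with its own suggested option:
msktutil ... --no-reverse-lookups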

Amazon EC2 - Installed atop / htop but they don't show disk I/O stats

Posted: 19 Jun 2021 01:00 AM PDT

I set up an AWS EC2 instance with the Amazon Linux AMI and installed atop and htop via yum. They work correctly, but they don't show any stats for disk I/O as they usually do.

I tried some startup options with no luck, and also tried running them with 'sudo'...

Is there any way to make them show it?

Edit: htop 1.0.1 and atop 1.27-3, the same versions I have on another "real" server, where they work out of the box...

By "usual" I mean the % of the I/O of the disk in use, something like this from atop:

DSK |          sda  | busy      1%  | read       6  |               | write    268  | KiB/r      4  | KiB/w      7  | MBr/s   0.00  |               | MBw/s   0.19  | avq    19.30  | avio 0.34 ms  |  
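atop and htop both read the kernel's block-layer counters, so a first check is whether the kernel on this instance exposes them at all (a sketch):

cat /proc/diskstats   # no device lines here means there is nothing for atop to display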

Apache server-status: allow access only from one subdomain on same IP

Posted: 19 Jun 2021 03:04 AM PDT

I'm configuring server-status and get a 404 error. I need to allow access to server-status only from one site subdomain on the same IP. I put the server-status location in the virtual host config, but it's not working.

If I put the location into httpd.conf, server-status works on all subdomains.

NameVirtualHost 127.0.0.1:8082
<VirtualHost 127.0.0.1:8082>
    ServerName tools.sitename.ru
    RPAFenable On
    RPAFsethostname Off
    RPAFproxy_ips 127.0.0.1
#    RPAFheader X-Real-IP
#    AllowOverride All
    DocumentRoot /var/www/tools.sitename

    DirectoryIndex index.php index.html default.asp index.cgi
    ErrorLog /var/log/httpd/tools.sitename.error.log
    CustomLog /var/log/httpd/tools.sitename.access.log common

    <Location /server-info>
        SetHandler server-info
        Order deny,allow
        Deny from all
        Allow from all
    </Location>

    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from all
    </Location>

    <Directory />
        Options FollowSymLinks
        AllowOverride All
    </Directory>
</VirtualHost>
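Before touching the allow/deny logic, it's worth confirming mod_status is loaded at all, since a plain 404 (rather than 403) on /server-status is typical when the handler doesn't exist (a quick check):

apachectl -M 2>/dev/null | grep -i status   # should list status_module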

nginx hidden file deny configuration?

Posted: 19 Jun 2021 02:00 AM PDT

I am using the standard config below to block downloads of hidden files from nginx:

# Prevent (deny) access to hidden files with nginx
location ~ /\. {
    access_log off;
    log_not_found off;
    deny all;
}

But this config is also blocking genuine requests like :

2013/10/09 17:24:46 [error] 20121#0: *593378 access forbidden by rule, client: XX.55.XXX.201, server: XYZ.org, request: "GET /vip/validate.php?id=dfddfQ&title=.Mytitle HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "xyz.org"  

How do I deploy files to Apache Tomcat in a similar fashion to the Apache web server (FTP)?

Posted: 19 Jun 2021 04:03 AM PDT

I need to deploy some files to a Tomcat app server. Is it possible to access the root directory of an application and upload files to a folder?

I have only used the Apache web server thus far, where I can add files using something like FileZilla to upload my website. In this case I just need to upload some files for download.

How can I set up a downloads folder in Tomcat?
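Tomcat's default host auto-deploys anything placed under webapps/, so a plain directory of static files becomes browsable without WAR packaging. A sketch, assuming a default CATALINA_HOME layout with autoDeploy left enabled:

mkdir -p $CATALINA_HOME/webapps/downloads
cp ~/files-to-publish/* $CATALINA_HOME/webapps/downloads/
# the files are then served at http://host:8080/downloads/<filename>;
# uploading via FTP/SFTP straight into that directory works the same way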

SVN Lock - Unable to lock file from svn client - Tortoise client

Posted: 19 Jun 2021 01:00 AM PDT

We have a Debian-based SVN server running version 1.1.4-2 (pretty old). When I try to lock a file it fails as shown in the image below, and nobody is able to lock any file. Can you please guide me on how to solve this issue? I also followed the client configuration below, but it did not work. I have attached the error image below.

To configure locking in TortoiseSVN, right-click on any folder and select TortoiseSVN > Settings.... Click the Edit button next to "Subversion configuration file". In the Miscellany section, uncomment the following line:

enable-auto-props = yes  

by removing the '#' character at the beginning. In the auto-props section further down, add the line

*=svn:needs-lock

This will specify that locking is applied to all files. See other examples in the auto-props section of the configuration file if you want to apply locking to only a subset of files. Applying properties:

If the above client configuration is performed before any files are added, all files will be under the locking policy. However, if there are already existing files in a repository that require locking, they must have the svn:needs-lock property applied. To add the property to all existing files using TortoiseSVN, right-click on the root folder of a repository's local working directory, select TortoiseSVN > Properties, add the svn:needs-lock property, apply it recursively, and click OK.
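Put together, the relevant fragment of the Subversion configuration file (the one TortoiseSVN's Edit button opens) looks like this:

[miscellany]
enable-auto-props = yes

[auto-props]
* = svn:needs-lock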

[screenshot: TortoiseSVN lock error dialog]

Is there anything that we need to add or change? Please help us.

How can I make my custom "name@example.com" e-mail address if I'm the owner of "example.com"

Posted: 19 Jun 2021 05:06 AM PDT

I have a ".com" domain for 2 years. The only thing that I can modify is the nameservers, ns1, ns2, and ns3.

How can I make my own e-mail address for this domain? Do I really need to buy hosting?

I don't have a host right now, but I intend to build an application with Django, probably on a Debian server, or maybe on Google App Engine.
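You don't strictly need web hosting, but something does have to run (or host) a mail server for the domain, and the DNS zone your nameservers serve needs an MX record pointing at it. A sketch zone fragment (203.0.113.25 is a documentation-range placeholder):

example.com.       IN  MX  10 mail.example.com.
mail.example.com.  IN  A   203.0.113.25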

Switch 302 redirect to 301 with Apache 2 ProxyPass in front of Tomcat 6

Posted: 19 Jun 2021 04:03 AM PDT

I'm trying to optimise my site for SEO, and it seems as though there is a 302 redirect in action for the HTTP requests.

I'm hosting my app on a Tomcat 6 server that sits behind an Apache 2 server. I use the ProxyPass method (http://tomcat.apache.org/tomcat-6.0-doc/proxy-howto.html) to forward all requests to port 8080 (the port my app is hosted on). I've seen a lot of advice on how to set the redirect type when using the VirtualHost method, but none related to ProxyPass.

The app is a Struts app that forwards users to index.jsp when they hit the base URL. Could this also be the issue?

I'm grateful for any help on this one! Cheers!

Deciding on SSLRandomSeed values in Apache 2.2

Posted: 19 Jun 2021 06:01 AM PDT

What rules of thumb should you follow when choosing SSLRandomSeed startup and connect values in Apache?
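For reference, the sample httpd-ssl.conf distributed with Apache 2.2 combines the built-in seed with 512 bytes from /dev/urandom for both phases, which is a reasonable default on any modern Unix:

SSLRandomSeed startup builtin
SSLRandomSeed startup file:/dev/urandom 512
SSLRandomSeed connect builtin
SSLRandomSeed connect file:/dev/urandom 512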
