Saturday, February 19, 2022

Recent Questions - Server Fault



ClientAliveInterval is not closing the idle ssh connection

Posted: 19 Feb 2022 10:50 PM PST

I have been given the task of closing idle SSH connections after they have been idle for more than 5 minutes. I have tried setting these values in sshd_config:

TCPKeepAlive no
ClientAliveInterval 300
ClientAliveCountMax 0

But nothing seems to work: the connection remains active and is not dropped even after 5 minutes of idle time.

Then I came across this thread https://bbs.archlinux.org/viewtopic.php?id=254707 where the reply says:

These are not for user-idle circumstances, they are - as that man page excerpt notes - for unresponsive SSH clients. The client will be unresponsive if the client program has frozen or the connection has been broken. The client should not be unresponsive simply because the human user has stepped away from the keyboard: the ssh client will still receive packets sent from the server.

I can't even use TMOUT, because some of the SSH clients run scripts that never start a bash shell.

How can I achieve this?

OpenSSH version: OpenSSH_8.2p1 Ubuntu-4ubuntu0.4, OpenSSL 1.1.1f 31 Mar 2020
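Since the ClientAlive* options only detect unresponsive clients, the usual workaround is to enforce user-idle timeouts outside of sshd. Below is a minimal sketch of one such approach, assuming a cron job run every minute as root; the 300-second limit and the pty-atime heuristic are assumptions for illustration, not something sshd provides:

# kill-idle-ssh.sh - terminate sessions whose pty has been idle > 5 minutes.
# The atime of /dev/pts/N is updated on keyboard input, so it approximates idle time.
for pty in /dev/pts/[0-9]*; do
    [ -e "$pty" ] || continue
    idle=$(( $(date +%s) - $(stat -c %X "$pty") ))
    [ "$idle" -gt 300 ] && pkill -HUP -t "${pty#/dev/}"
done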

How to set CORS header in cloudflare workers?

Posted: 19 Feb 2022 09:46 PM PST

I'm using Cloudflare Workers to create a reverse proxy, but I can't embed the proxied content on my main domain because it triggers a CORS error:

Access to image at 'https://example.workers.dev/96384873_p0.png' from origin 'https://example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.  

Here's the code in the worker:

addEventListener("fetch", event => {
  let url = new URL(event.request.url);
  url.hostname = "i.pximg.net";
  let request = new Request(url, event.request);
  event.respondWith(
    fetch(request, {
      headers: {
        'Referer': 'https://www.pixiv.net/',
        'User-Agent': 'Cloudflare Workers'
      }
    })
  );
});

How can I fix the CORS error?
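One way to approach this (a sketch, not the original worker): CORS is fixed on the response side, so the worker has to add an Access-Control-Allow-Origin header before returning the upstream response. The allowed origin below ("https://example.com") is a placeholder for whatever site actually embeds the images:

addEventListener("fetch", event => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  const url = new URL(request.url);
  url.hostname = "i.pximg.net";
  const upstream = await fetch(new Request(url, request), {
    headers: {
      "Referer": "https://www.pixiv.net/",
      "User-Agent": "Cloudflare Workers"
    }
  });
  // Headers on a fetched response are immutable, so build a mutable copy first.
  const response = new Response(upstream.body, upstream);
  response.headers.set("Access-Control-Allow-Origin", "https://example.com");
  response.headers.set("Vary", "Origin");
  return response;
}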

"centos dhcpd server include external file" failed

Posted: 19 Feb 2022 09:30 PM PST

This is /etc/dhcp/dhcpd.conf:

allow booting;
allow bootp;

ignore client-updates;
set vendorclass = option vendor-class-identifier;

include /etc/dhcpd-reservations.conf;

/etc/dhcpd-reservations.conf

shared-network managed {
  interface "eth1";
  subnet {{test ip range}} netmask 255.255.255.0 {
    option routers {{test router ip}};
    option broadcast-address {{test broadcast ip}};

    group {
      host ztp-dis { hardware ethernet 50:01:00:09:00:00; fixed-address {{test ip}}; }
    }
  }
}

But I get:

DHCPDISCOVER from 50:01:00:09:00:00 via {{ip}}: network managed: no free leases  

If I don't use the external file and put this configuration inline instead, it works normally.
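One detail worth double-checking (an observation, not a confirmed fix): the dhcpd.conf documentation shows the include statement taking a quoted string, so an unquoted path may not be parsed the way you expect:

# note the quotes around the path
include "/etc/dhcpd-reservations.conf";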

How can I install the Windows Deployment Services (WDS) PowerShell module on a Windows 10 client computer for remote administration?

Posted: 19 Feb 2022 09:28 PM PST

Microsoft themselves recommend administering services (in the general sense, not the Windows technical sense of a "service" such as "Spooler") like Active Directory via their Remote Server Administration Tools (RSAT) installed on a local administration workstation, rather than, for example, connecting to a server via RDP and interacting with the service directly.

RSAT does not include the WDS tools. Various workarounds exist for transplanting the MMC snap-in to an administrative workstation, but I've not been able to find any for the WDS PowerShell module.

Any assistance with enabling this ability will be greatly appreciated.

How to delete a domain from a Domain Controller, but keep the subdomains?

Posted: 19 Feb 2022 06:31 PM PST

If I have a domain: corporate.com

With 3 subdomains:

  • dev.corporate.com
  • voice.corporate.com
  • vpn.corporate.com

How can I delete the top domain, but keep the 3 child zones?

Specifically for Windows DNS, it would look something like the zone tree in the DNS Manager console (screenshot omitted).

I suspect there is no easy way to do this, which means I'll have to manually split the zone file into 3 separate zone files for the child zones. But I'm hoping I'm wrong. Any ideas?
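For what it's worth, here is a hedged sketch of how the manual route could be scripted with dnscmd. The zone names come from the question; whether the child zones should be file-backed (/Primary) or AD-integrated (/DsPrimary), and how the existing child records get re-imported, depends on the environment, so treat this as an outline only:

:: export the current data while the parent zone still exists
dnscmd /ZoneExport corporate.com corporate.com.dns

:: create the three child zones as standalone primary zones
dnscmd /ZoneAdd dev.corporate.com   /Primary /file dev.corporate.com.dns
dnscmd /ZoneAdd voice.corporate.com /Primary /file voice.corporate.com.dns
dnscmd /ZoneAdd vpn.corporate.com   /Primary /file vpn.corporate.com.dns

:: after copying the relevant records into the new zone files, drop the parent
dnscmd /ZoneDelete corporate.com /f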

Connecting to Amazon RDS using SSH client

Posted: 19 Feb 2022 07:38 PM PST

I have created an Amazon RDS instance and I want to connect to it using an SSH client (PuTTY).

I am following this document:

  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

  2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance to which you want to connect.

  3. Choose Configuration.

  4. Note the Resource ID value. For example, the resource ID might be db-ABCDEFGHIJKLMNOPQRS0123456.

  5. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

  6. In the navigation pane, choose Instances.

  7. Find the name of your EC2 instance, and choose the instance ID associated with it. For example, the EC2 instance ID might be i-abcdefghijklm01234.

I am confused, because I cannot see any EC2 instance that was created for the RDS instance. Am I supposed to create an additional EC2 instance here to connect to the RDS instance?

Note: I am able to connect to the RDS instance using a SQL client (MySQL Workbench). Here I am trying to connect to the server using an SSH client.
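For context, and hedged since the console isn't visible here: the steps above come from the RDS Custom documentation, and RDS Custom instances do expose an underlying EC2 instance; a standard RDS instance does not, and has no SSH endpoint at all. The usual pattern for standard RDS is to launch a small EC2 bastion host in the same VPC and tunnel the database port through it, roughly like this (the key file, endpoint and host names are placeholders):

# forward local port 3306 to the RDS endpoint through an EC2 bastion host
ssh -i mykey.pem -N -L 3306:mydb.abcdefgh1234.us-east-1.rds.amazonaws.com:3306 ec2-user@bastion.example.com
# then point MySQL Workbench (or PuTTY's tunnel settings) at 127.0.0.1:3306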

How can you de-couple DNS server from the AD Domain Controller?

Posted: 19 Feb 2022 06:18 PM PST

I have an environment where Active Directory Domain Controllers host their own DNS domains (as is common).

However, we are trying to separate DNS and host it on a standalone server (eventually to move to BIND on Linux, but for now just the decoupling).

I have tested this in a lab environment but can't get the decoupling to work.

Step One - Basic Setup

  • Create an AD zone "mylab.com"
  • Add a domain controller "server1.mylab.com"
  • AD can update the domain perfectly fine

Step Two - Move out DNS zone

  • Backup and delete the entire zone "mylab.com"
  • Create a Conditional Forwarder for "mylab.com" pointing to standalone DNS server
  • Manually create a new zone "mylab.com" on the standalone DNS server
  • Allow Insecure Updates on the standalone server (On Bind it would be 'allow-update ACL')

Step Three - Test DNS Updates from AD to Standalone

  • Restart NetLogon Service

This should trigger the DC to create all the AD-related DNS records in "mylab.com" on the new standalone DNS server, but I don't see any DNS update attempts in the standalone server's logs. (I do see DNS queries coming in from the DC, just no updates.)
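A hedged suggestion for forcing the registration attempt and surfacing errors, using standard Windows tooling (nothing here is specific to this lab):

ipconfig /registerdns     :: re-register the DC's host (A) records
nltest /dsregdns          :: re-register the DC locator (SRV/CNAME) records
dcdiag /test:dns /v       :: report what the DC itself thinks about DNS registration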

How to set time source on Hyper-V guest to VMIC

Posted: 19 Feb 2022 05:20 PM PST

I'd like to set the time source on my Hyper-V hosted Windows Server 2022 member server to VM IC Time Synchronization Provider. Currently it's at Local CMOS Clock.

It was set to PDC.DOMAIN.local, but I wanted it to get its time from the host instead of the PDC. So, based on this answer, I ran these commands:

net stop w32time
w32tm /unregister
w32tm /register
net start w32time
w32tm /config /syncfromflags:NO /update
net stop w32time
net start w32time

At that point w32tm /query /source started returning Local CMOS Clock. I left it alone overnight thinking it'd straighten itself out, as it had done under the same scenario on a Windows 10 Enterprise VM, but I had no such luck this morning. The Win10 VM is still (correctly) reporting a source of VM IC Time Synchronization Provider, but the server is (incorrectly) reporting Local CMOS Clock.

How can I set my VM-hosted server to get its time from the host?
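One hedged thing to check: the unregister/register cycle can leave the Hyper-V time provider disabled, and with no provider enabled w32time falls back to the local CMOS clock. Re-enabling the VMIC provider looks roughly like this (standard w32time registry location; verify against your build before applying):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider" /v Enabled /t REG_DWORD /d 1 /f
net stop w32time && net start w32time
w32tm /query /source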

Service bound to ipv6 on linux machine - Can I disable IPV6 and access service on my IPV4 address?

Posted: 19 Feb 2022 04:08 PM PST

I am attempting to install Tableau Server on Ubuntu 18.04 and have the management service running on port 8850. I am unable to access the service at that port on my IPv4 address, as it appears to be listening only on my IPv6 address.

These are my listening ports:

systemd-r   898 systemd-resolve   13u  IPv4  14795  0t0  TCP 127.0.0.53:53 (LISTEN)
sshd       1825            root    3u  IPv4  29682  0t0  TCP *:22 (LISTEN)
sshd       1825            root    4u  IPv6  29684  0t0  TCP *:22 (LISTEN)
appzookee  2824         tableau  242u  IPv6  25299  0t0  TCP *:8707 (LISTEN)
appzookee  2824         tableau  247u  IPv6  31376  0t0  TCP *:8715 (LISTEN)
appzookee  2824         tableau  248u  IPv6  39195  0t0  TCP *:8843 (LISTEN)
lmgrd      3202         tableau    0u  IPv6  35120  0t0  TCP *:27000 (LISTEN)
clientfil  3292         tableau  252u  IPv6  54347  0t0  TCP *:8844 (LISTEN)
clientfil  3292         tableau  253u  IPv6  52374  0t0  TCP *:8235 (LISTEN)
activatio  3354         tableau  341u  IPv6  51249  0t0  TCP *:8645 (LISTEN)
tabadminc  3674         tableau  413u  IPv6  62007  0t0  TCP *:8850 (LISTEN)
tabadmina  3866         tableau  389u  IPv6  45962  0t0  TCP *:8438 (LISTEN)
tabadmina  3866         tableau  394u  IPv6  58312  0t0  TCP *:8206 (LISTEN)

Is it possible to disable IPv6 and access that service on my IPv4 address? Thanks.
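A hedged note before disabling anything: on Linux, a socket shown as listening on the IPv6 wildcard (*:8850 on an IPv6 line) usually accepts IPv4 clients as well, as IPv4-mapped addresses, unless net.ipv6.bindv6only has been set to 1. So it may be worth confirming the port is actually unreachable over IPv4 before turning IPv6 off:

# 0 (the default) means IPv6 wildcard listeners also accept IPv4 connections
sysctl net.ipv6.bindv6only

# quick local test of the management port over IPv4
curl -4 http://127.0.0.1:8850/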

Why doesn't Apache in my CentOS 7 container work?

Posted: 19 Feb 2022 05:54 PM PST

Apache in a CentOS 7 container that worked fine on my old Kali Linux (2018) stopped working after I moved the container to my new Kali Linux (2021.1).

# new Kali Linux
Linux kali 5.10.0-kali7-amd64 #1 SMP Debian 5.10.28-1kali1 (2021-04-12) x86_64 GNU/Linux

# CentOS container
CentOS Linux release 7.5.1804 (Core)

Since systemctl normally cannot be used inside a CentOS 7 container, the container is started with the --privileged option. On the old Kali, the Apache daemon starts successfully, but on the new Kali it fails with the error "Failed to get D-Bus connection: No such file or directory" even with the --privileged option.

So I installed docker-systemctl-replacement in the CentOS 7 container with the following command:

curl https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl.py > /usr/bin/systemctl    

systemctl now works, but Apache doesn't start:

[root@ff31f2d81ec1 httpd]# systemctl start httpd
[root@ff31f2d81ec1 httpd]# systemctl status httpd
httpd.service - The Apache HTTP Server
    Loaded: loaded (/usr/lib/systemd/system/httpd.service, enabled)
    Active: failed (failed)

So I tried to run Apache manually, but it dies immediately:

[root@ff31f2d81ec1 httpd]# /usr/sbin/httpd
[root@ff31f2d81ec1 httpd]# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0  42688  3828 ?        Ss   15:04   0:00 /sbin/init
root           7  0.0  0.0  13436  3348 pts/0    Ss   15:05   0:00 bash
root          41  0.0  0.0      0     0 ?        Zs   15:09   0:00 [httpd] <defunct>
root         197  0.0  0.0      0     0 ?        Zs   15:51   0:00 [httpd] <defunct>
root         198  0.0  0.0  53324  3892 pts/0    R+   15:51   0:00 ps aux

How do I solve this problem? I don't know why it doesn't start because there is no log file in /var/log/httpd.
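A hedged debugging step: keep httpd in the foreground so it cannot daemonize (and then get reaped as a zombie) and so the startup errors go to the terminal rather than to a log file that may never be created:

# -DFOREGROUND keeps the parent process attached to the terminal;
# -e debug raises the log level during startup
/usr/sbin/httpd -DFOREGROUND -e debug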

Lost ssh access to Google Cloud VM

Posted: 19 Feb 2022 04:04 PM PST

I have a VM (Debian) running on Google Cloud Platform, but I can't connect via SSH or the serial console (I can't create a user via a startup script for some reason). I have already tried a bunch of troubleshooting guides to fix it.

I was using the ssh connection previously with no problems at all. The website and databases running on that VM are still working.

I've tried

1 - Checked if firewall entry "default-allow-ssh" exists

2 - Tried connecting with a different user using cmd

gcloud compute ssh another-username@$PROB_INSTANCE  

3 - Added metadata "startup-script" key with value:

#! /bin/bash
useradd -G sudo USER
echo 'USER:PASS' | chpasswd

Rebooted (also tried interrupt/start), tried connecting via serial console but it says the login is incorrect. The startup script is not working or not creating my user.

4 - Increased disk size.

5 - Increased memory (upgraded the VM instance type).

6 - Removed ssh keys from both VM details and Metadata tabs, followed by a reboot:

After removing them, I tried to generate keys again using the command:

gcloud beta compute ssh INSTANCE_NAME -- -vvv   

but it returns:

No zone specified. Using zone [us-east1-b] for instance: [INSTANCE_NAME].
Updating project ssh metadata...Updated [https://www.googleapis.com/compute/beta/projects/PROJECT_NAME].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
USER@IP_ADDRESS: Permission denied (publickey).
(the same "Permission denied (publickey)" line is repeated 12 times in total)

More details

Running

gcloud beta compute ssh --zone ZONE INSTANCE_NAME --project PROJECT_NAME  

returns:

USER@IP_ADDRESS: Permission denied (publickey).  

Running (a second time, after waiting for propagation)

gcloud beta compute ssh INSTANCE_NAME -- -vvv   

returns:

[...]  OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020  debug1: Reading configuration data /home/USER/.ssh/config  debug1: Reading configuration data /etc/ssh/ssh_config  debug1: /etc/ssh/ssh_config line 19: Applying options for *  debug2: resolve_canonicalize: hostname IP_ADDRESS is address  debug2: ssh_connect_direct  debug1: Connecting to IP_ADDRESS [IP_ADDRESS] port 22.  debug1: Connection established.  debug1: identity file /home/USER/.ssh/google_compute_engine type 0  debug1: identity file /home/USER/.ssh/google_compute_engine-cert type -1  debug1: Local version string SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u2  debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4p1 Debian-10+deb9u7  debug1: match: OpenSSH_7.4p1 Debian-10+deb9u7 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002  debug2: fd 3 setting O_NONBLOCK  debug1: Authenticating to IP_ADDRESS:22 as 'USER'  debug1: using hostkeyalias: compute.INSTANCE_ID  debug3: hostkeys_foreach: reading file "/home/USER/.ssh/google_compute_known_hosts"  debug3: record_hostkey: found key type ECDSA in file /home/USER/.ssh/google_compute_known_hosts:1  debug3: load_hostkeys: loaded 1 keys from compute.INSTANCE_ID  debug3: order_hostkeyalgs: prefer hostkeyalgs: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521  debug3: send packet: type 20  debug1: SSH2_MSG_KEXINIT sent  debug3: receive packet: type 20  debug1: SSH2_MSG_KEXINIT received  debug2: local client KEXINIT proposal  debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-grou  p14-sha256,diffie-hellman-group14-sha1,ext-info-c  debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com  ,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa  debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com  debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com  debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1  debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1  debug2: compression ctos: none,zlib@openssh.com,zlib  debug2: compression stoc: none,zlib@openssh.com,zlib  debug2: languages ctos:  debug2: languages stoc:  debug2: first_kex_follows 0  debug2: reserved 0  debug2: peer server KEXINIT proposal  debug2: KEX algorithms: 
curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-grou  p14-sha256,diffie-hellman-group14-sha1  debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519  debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com  debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com  debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1  debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1  debug2: compression ctos: none,zlib@openssh.com  debug2: compression stoc: none,zlib@openssh.com  debug2: languages ctos:  debug2: languages stoc:  debug2: first_kex_follows 0  debug2: reserved 0  debug1: kex: algorithm: curve25519-sha256  debug1: kex: host key algorithm: ecdsa-sha2-nistp256  debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none  debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none  debug3: send packet: type 30  debug1: expecting SSH2_MSG_KEX_ECDH_REPLY  debug3: receive packet: type 31  debug1: Server host key: ecdsa-sha2-nistp256 SHA256:Or8[...]  debug1: using hostkeyalias: compute.INSTANCE_ID  debug3: hostkeys_foreach: reading file "/home/USER/.ssh/google_compute_known_hosts"  debug3: record_hostkey: found key type ECDSA in file /home/USER/.ssh/google_compute_known_hosts:1  debug3: load_hostkeys: loaded 1 keys from compute.INSTANCE_ID  debug1: Host 'compute.INSTANCE_ID' is known and matches the ECDSA host key.  debug1: Found key in /home/USER/.ssh/google_compute_known_hosts:1  debug3: send packet: type 21  debug2: set_newkeys: mode 1  debug1: rekey after 134217728 blocks  debug1: SSH2_MSG_NEWKEYS sent  debug1: expecting SSH2_MSG_NEWKEYS  debug3: receive packet: type 21  debug1: SSH2_MSG_NEWKEYS received  debug2: set_newkeys: mode 0  debug1: rekey after 134217728 blocks  debug1: Will attempt key: /home/USER/.ssh/google_compute_engine RSA SHA256:brI3[...] explicit  debug2: pubkey_prepare: done  debug3: send packet: type 5  debug3: receive packet: type 7  debug1: SSH2_MSG_EXT_INFO received  debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>  debug3: receive packet: type 6  debug2: service_accept: ssh-userauth  debug1: SSH2_MSG_SERVICE_ACCEPT received  debug3: send packet: type 50  debug3: receive packet: type 51  debug1: Authentications that can continue: publickey  debug3: start over, passed a different list publickey  debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password  debug3: authmethod_lookup publickey  debug3: remaining preferred: keyboard-interactive,password  debug3: authmethod_is_enabled publickey  debug1: Next authentication method: publickey  debug1: Offering public key: /home/USER/.ssh/google_compute_engine RSA SHA256:brI3[...] 
explicit  debug3: send packet: type 50  debug2: we sent a publickey packet, wait for reply  debug3: receive packet: type 51  debug1: Authentications that can continue: publickey  debug2: we did not send a packet, disable method  debug1: No more authentication methods to try.  USER@IP_ADDRESS: Permission denied (publickey).  ERROR: (gcloud.beta.compute.ssh) [/usr/bin/ssh] exited with return code [255].  

Update

Followed Alex's suggestions and the serial port output returns:

Welcome to Debian GNU/Linux 9 (stretch)!

[    2.364319] systemd[1]: No hostname configured.
[    2.365157] systemd[1]: Set hostname to <localhost>.
[    3.142016] systemd[1]: google-shutdown-scripts.service: Cannot add dependency job, ignoring: Unit google-shutdown-scripts.service is masked.
[    3.144581] systemd[1]: google-clock-skew-daemon.service: Cannot add dependency job, ignoring: Unit google-clock-skew-daemon.service is masked.
[    3.147589] systemd[1]: google-instance-setup.service: Cannot add dependency job, ignoring: Unit google-instance-setup.service is masked.
[    3.149799] systemd[1]: google-accounts-daemon.service: Cannot add dependency job, ignoring: Unit google-accounts-daemon.service is masked.
[    3.152485] systemd[1]: google-startup-scripts.service: Cannot add dependency job, ignoring: Unit google-startup-scripts.service is masked.

I really hope there is a fix :/

I'd appreciate any help or tips, Thanks!
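A heavily hedged observation based on the serial log above: the google-accounts-daemon and google-startup-scripts units are masked, which would explain both why newly pushed SSH keys never land on the VM and why the startup script never creates a user. If that is the case, a typical recovery path is to stop the VM, attach its boot disk to a throwaway rescue VM, and undo the masking (instance, disk and device names below are placeholders):

gcloud compute instances stop BROKEN_VM --zone ZONE
gcloud compute instances detach-disk BROKEN_VM --disk BOOT_DISK --zone ZONE
gcloud compute instances attach-disk RESCUE_VM --disk BOOT_DISK --zone ZONE

# on the rescue VM: masked units are symlinks to /dev/null under /etc/systemd/system
sudo mount /dev/sdb1 /mnt
ls -l /mnt/etc/systemd/system/google-*.service
sudo rm /mnt/etc/systemd/system/google-accounts-daemon.service \
        /mnt/etc/systemd/system/google-startup-scripts.service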

How to remove the trailing slashes from a URL with nginx

Posted: 19 Feb 2022 09:04 PM PST

I'm trying to remove trailing slashes from URLs. I searched a lot and tried several solutions, but they didn't work for me.

I tried this one

rewrite ^/(.*)/$ /$1 permanent;

but it leaves one slash at the end (example.com/ or example.com/post/), whereas I need example.com and example.com/post.

Also I tried this solution

if ($request_uri ~ (.*?\/)(\/+)$ ) {
    return 301 $scheme://$host$1;
}

and it's one of the best but it also leaves one slash at the end.

I was also getting an error like this in the browser console after all these attempts:

GET http://example.com/post 404 (Not Found)  

I'm new to nginx and don't know a lot; how can I redirect URLs with trailing slashes to their slash-less equivalents?
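A sketch of the variant that usually behaves better (an assumption about this setup, so test it first): the difference from the attempt above is that the capture group stops before the trailing slashes and the query string is re-attached explicitly. Nothing can be done for the bare root, since a request for the site root is always /, and the 404 on example.com/post is a separate issue with whatever serves the slash-less URL:

# server {} context
if ($request_uri ~ ^/(.*?)/+(\?.*)?$) {
    return 301 /$1$2;
}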

Mapping a network drive from Windows 10 to Windows 2016 Server

Posted: 19 Feb 2022 08:07 PM PST

I can ping the IP address of the Windows 2016 server from my local Windows 10 computer. I can also connect remotely to the Windows 2016 server as an Administrator with full privileges. I created a folder named ImportantDocs on the C: drive and shared it with read/write permission for Everyone.

Now when I try to map a network drive from my local desktop by choosing drive letter Z: and entering \\<IP address of remote server>\ImportantDocs in the folder field, I get an error like this:

The mapped network drive could not be created because the following error has occurred:

We can't sign you in with this credential because your domain isn't available. Make sure your device is connected to your organization's network and try again. If you previously signed in on this device with another credential, you can sign in with that credential

How to resolve this?
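The error text suggests Windows is trying to authenticate with the client machine's own (domain) credential. A hedged workaround is to map the drive with an explicit credential that names the server as the account's "domain", so a local account on the server is used; the IP address and account below are examples only:

:: prompts for the password; replace the example IP and account
net use Z: \\192.0.2.10\ImportantDocs /user:192.0.2.10\Administrator *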

ps command output does not display wchan values in fedora

Posted: 19 Feb 2022 05:57 PM PST

I am running Fedora (kernel 5.3.12-200.fc30.x86_64) and testing simple TCP client/server programs. I have tried different ps commands, but I cannot get any value displayed in the wchan field, even though in my case both the server and the client are blocked in system calls.


edited

After further testing it looks like this depends on the distribution: Kali and Devuan compute the wchan value, while Fedora and openSUSE Tumbleweed do not. Does anyone have a clue why that might be? The only kernel config entry I found that corresponds to the wchan value,

(CONFIG_SCHED_OMIT_FRAME_POINTER=y)

is configured the same way in Kali, Tumbleweed and Fedora. I could not find that parameter in Devuan.


The following is the output of ps on Fedora. No matter what I do, the wchan column only ever shows a hyphen (as if the process were running). I tried different ps commands displaying all processes, but the wchan values never show anything other than a hyphen.

$ ps -o pid,ppid,wchan=WIDE-WCHAN-COLUMN -o comm -o stat -t pts/6 -t pts/7
   PID   PPID WIDE-WCHAN-COLUMN COMMAND         STAT
 58565   4247 -                 bash            Ss
102840  58565 -                 su              S
102848 102840 -                 bash            S
103048   4247 -                 bash            Ss
122844 102848 -                 tcpserv01       S+
122848 103048 -                 tcpcli01        S+
122849 122844 -                 tcpserv01       S+

I checked wchan in /proc and all I get is 0.

$ cat /proc/122844/wchan
0

The server's strace shows it has not gotten past accept(), which is exactly what I expected.

# strace -p 122844
strace: Process 122844 attached
accept(3,

The client's strace is blocked at read(), as expected.

# strace -p 122848
strace: Process 122848 attached
read(0,

But they don't show in wchan. What am I missing?



On a side note, I also have a FreeBSD VM on the same machine, and in FreeBSD 12.0-RELEASE wchan shows correctly with ps, so I am fairly sure this has something to do with Fedora.

$ ps aux -o pid,wchan=WIDE-WCHAN-COLUMN -o comm -o stat
USER  PID  %CPU %MEM   VSZ   RSS TT  STAT STARTED      TIME COMMAND          PID WIDE-WCHAN-COLUMN COMMAND          STAT
root   11 599.0  0.0     0    96  -  RNL  13:01   360:26.86 [idle]            11 -                 idle             RNL
root    0   0.0  0.0     0   528  -  DLs  13:01     0:00.01 [kernel]           0 swapin            kernel           DLs
root    1   0.0  0.0  9952  1016  -  ILs  13:01     0:00.01 /sbin/init --      1 wait              init             ILs
root    2   0.0  0.0     0    16  -  DL   13:01     0:00.00 [crypto]           2 crypto_w          crypto           DL


EDIT: I found the following in man ps:

-n Set namelist file. Identical to N. The namelist file is needed for a proper WCHAN display, and must match the current Linux kernel exactly for correct output. Without this option, the default search path for the namelist is: $PS_SYSMAP...

So I've set

PS_SYSMAP=/boot/System.map-$(uname -r)  

But I still do not get any output from wchan. If I run the same command as before but with -n, I get:

$ ps -n -o pid,ppid,wchan=WIDE-WCHAN-COLUMN -o comm -o stat -t pts/2 -t pts/3 -t pts/4
   PID   PPID WIDE-WCHAN-COLUMN COMMAND         STAT
  4830   4829                 - bash            Ss
  6201   4829                 - bash            Ss
  6251   6201                 - tcpserv01       S+
  6252   4829                 - bash            Ss
  6292   6251                 - tcpse <defunct> Z+
  6356   6252                 - tcpcli01        S+
  6357   6251                 - tcpserv01       S+
  6481   4830                 - ps              R+

With -n option wchan does not even show hyphen as before.




EDIT 2: The answer to the question below is no. Kali's kernel configures that parameter exactly like Fedora's, yet on Kali wchan values are computed. openSUSE Tumbleweed behaves just like Fedora and does not compute wchan values. Devuan computes wchan.

Could missing wchan values be due to

CONFIG_SCHED_OMIT_FRAME_POINTER: Single-depth WCHAN output

which in my kernel is configured as

CONFIG_SCHED_OMIT_FRAME_POINTER=y

Make IIS server max_file_upload size 10GB

Posted: 19 Feb 2022 10:00 PM PST

I am working on a website that is hosted on the company's own server, using IIS. Is it possible to make the maximum file upload size 10 GB? The assets they send to and receive from clients are generally between 5 GB and 10 GB.

Just wondering what changes are necessary to achieve this (changing upload_max_filesize, post_max_size, memory_limit). I am going to try to raise the limit tomorrow.
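If the uploads go through IIS request filtering, the relevant setting is maxAllowedContentLength in web.config, measured in bytes (upload_max_filesize and post_max_size only matter if the site runs PHP, and would need raising too). A hedged caveat: as far as I'm aware this attribute is a 32-bit value, so a single request tops out around 4 GB, and getting to 10 GB generally means chunked or resumable uploads rather than one POST. A sketch of the web.config change:

<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- in bytes; this is roughly the attribute's upper bound (~4 GB) -->
        <requestLimits maxAllowedContentLength="4294967295" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>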

Best,

Matt

APT/DPKG is broken - Unable to remove mysql-server-core-5.5

Posted: 19 Feb 2022 11:01 PM PST

I want to completely remove any packages related to MySQL on my server, but I seem unable to achieve that. APT does not seem to understand that mysql-server is not (properly) installed on the server. Is it possible to manually tell apt that a package has been removed?

╭─root@home /etc/apt
╰─➤  apt-get remove mysql-server-core-5.5
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
 mysql-server-5.5 : Depends: mysql-client-5.5 (>= 5.5.54-0+deb8u1) but it is not going to be installed
                    Depends: mysql-server-core-5.5 (>= 5.5.54-0+deb8u1) but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

I have also tried to perform a "purge":

╭─root@home /etc/apt
╰─➤  apt-get purge mysql-server-core-5.5                                        100 ↵
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
 mysql-server-5.5 : Depends: mysql-client-5.5 (>= 5.5.54-0+deb8u1) but it is not going to be installed
                    Depends: mysql-server-core-5.5 (>= 5.5.54-0+deb8u1) but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

Running apt-get -f install does not solve the problem:

dpkg: error processing package mysql-server-5.5 (--configure):
 subprocess installed post-installation script returned error exit status 6
Processing triggers for libc-bin (2.19-18+deb8u7) ...
Errors were encountered while processing:
 mysql-server-5.5
E: Sub-process /usr/bin/dpkg returned an error code (1)

Running apt-get install mysql-server-5.5 --reinstall does not work either:

╭─root@home /etc/apt
╰─➤  apt-get install mysql-server-5.5 --reinstall                               100 ↵
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
2 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
E: Internal Error, No file name for mysql-server-5.5:amd64

Trying to remove them manually with dpkg --purge --force-all mysql-community-server mysql-server-5.5 mysql-server-core-5.5 is also useless:

╭─root@home /etc/apt
╰─➤  dpkg --purge --force-all mysql-community-server mysql-server-5.5 mysql-server-core-5.5    1 ↵
(Reading database ... 40739 files and directories currently installed.)
Removing mysql-community-server (5.7.17-1debian8) ...
Purging configuration files for mysql-community-server (5.7.17-1debian8) ...
................
dpkg: error processing package mysql-community-server (--purge):
 subprocess installed post-removal script returned error exit status 1
Removing mysql-server-5.5 (5.5.54-0+deb8u1) ...
Failed to stop mysql.service: Unit mysql.service not loaded.
invoke-rc.d: initscript mysql, action "stop" failed.
dpkg: error processing package mysql-server-5.5 (--purge):
 subprocess installed pre-removal script returned error exit status 5
Failed to stop mysql.service: Unit mysql.service not loaded.
invoke-rc.d: initscript mysql, action "stop" failed.
Failed to start mysql.service: Unit mysql.service failed to load: No such file or directory.
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 6
dpkg: warning: ignoring request to remove mysql-server-core-5.5 which isn't installed
Errors were encountered while processing:
 mysql-community-server
 mysql-server-5.5

Removing the packages one by one does not work either; I am prompted with the dialog that wants me to set a root password for the MySQL server.

apt-get remove mysql-server
apt-get remove mysql-client
apt-get remove mysql-server-core

I will do my best to update this question as needed, but I am currently pulling my hair out over this. I am almost ready to just reinstall the whole server.
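One last-resort approach that often gets past failing maintainer scripts (a hedged sketch; it deliberately skips the packages' own cleanup logic, so take a backup of /var/lib/dpkg and your data first):

# replace the failing maintainer scripts with no-ops, then retry removal
printf '#!/bin/sh\nexit 0\n' > /var/lib/dpkg/info/mysql-server-5.5.prerm
printf '#!/bin/sh\nexit 0\n' > /var/lib/dpkg/info/mysql-server-5.5.postinst
printf '#!/bin/sh\nexit 0\n' > /var/lib/dpkg/info/mysql-community-server.postrm
dpkg --remove --force-remove-reinstreq mysql-server-5.5 mysql-community-server
apt-get -f install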

Cannot access files in docker as a non-root user 777 permissions + facls

Posted: 19 Feb 2022 06:02 PM PST

I have a Docker container with a web app. Apache cannot read the log folder. The apache user has specific rwx set on the folder via facls. I set 0777 on the folder recursively, both inside and outside the container. Inside the container only root can read the files; outside, everyone can. Inside the container, an ls from the apache user looks like:

-????????? ? ? ? ?            ? access_log
-????????? ? ? ? ?            ? app.log
-????????? ? ? ? ?            ? error_log

I ran strace, which produced nothing I could find useful. Here is an strace of open, access and lstat for completeness.

[www-data@a377ecbb9c76 www]$ strace -e open,access,lstat ls -l /var/www/logs/  access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)  open("/etc/ld.so.cache", O_RDONLY)      = 3  open("/lib64/libselinux.so.1", O_RDONLY) = 3  open("/lib64/librt.so.1", O_RDONLY)     = 3  open("/lib64/libcap.so.2", O_RDONLY)    = 3  open("/lib64/libacl.so.1", O_RDONLY)    = 3  open("/lib64/libc.so.6", O_RDONLY)      = 3  open("/lib64/libdl.so.2", O_RDONLY)     = 3  open("/lib64/libpthread.so.0", O_RDONLY) = 3  open("/lib64/libattr.so.1", O_RDONLY)   = 3  open("/proc/filesystems", O_RDONLY)     = 3  open("/usr/lib/locale/locale-archive", O_RDONLY) = 3  open("/usr/share/locale/locale.alias", O_RDONLY) = 3  open("/usr/share/locale/en_US.UTF-8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en_US.utf8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en_US/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en.UTF-8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en.utf8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en/LC_TIME/coreutils.mo", O_RDONLY) = 3  open("/usr/lib64/gconv/gconv-modules.cache", O_RDONLY) = 3  lstat("/var/www/logs/", {st_mode=S_IFDIR|0777, st_size=4096, ...}) = 0  lstat("/var/www/logs/", {st_mode=S_IFDIR|0777, st_size=4096, ...}) = 0  open("/etc/nsswitch.conf", O_RDONLY)    = 3  open("/etc/ld.so.cache", O_RDONLY)      = 3  open("/lib64/libnss_files.so.2", O_RDONLY) = 3  open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 3  open("/etc/group", O_RDONLY|O_CLOEXEC)  = 3  open("/var/www/logs/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3  lstat("/var/www/logs/error_log", 0xf17800) = -1 EACCES (Permission denied)  open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = 4  ls: cannot access /var/www/logs/error_logopen("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)  : Permission denied  lstat("/var/www/logs/app.log", 0xf178c0) = -1 EACCES (Permission denied)  ls: cannot access /var/www/logs/app.log: Permission denied  lstat("/var/www/logs/access_log", 0xf17980) = -1 EACCES (Permission denied)  ls: cannot access /var/www/logs/access_log: Permission denied  total 0  open("/etc/localtime", O_RDONLY)        = 3  
-????????? ? ? ? ?            ? access_log  -????????? ? ? ? ?            ? app.log  -????????? ? ? ? ?            ? error_log  +++ exited with 1 +++  
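A hedged pointer: question marks from ls combined with EACCES on lstat despite 0777 permissions is the typical signature of a mandatory-access-control denial rather than ordinary file permissions. If the Docker host runs SELinux (the container is CentOS-based, so the host may well be too), relabelling the bind-mounted volume is the usual fix; the host path below is a placeholder:

# on the host: look for recent AVC denials
ausearch -m avc -ts recent

# relabel the volume for container access with :z (shared) or :Z (private)
docker run -v /srv/app/logs:/var/www/logs:Z ...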

Nginx conf template not working with dokku

Posted: 19 Feb 2022 06:02 PM PST

I am stuck with the following nginx conf template:

# Special characters - dollar signs, spaces inside of quotes, etc. -  # should be escaped with a single backslash or can cause deploy failures.    server {      listen      [::]:80;      listen      80;      server_name $NOSSL_SERVER_NAME;      access_log  /var/log/nginx/${APP}-access.log;      error_log   /var/log/nginx/${APP}-error.log;        # set a custom header for requests      # add_header X-Served-By www-ec2-01;        location    / {        proxy_pass  http://$APP;        proxy_http_version 1.1;        proxy_set_header Upgrade \$http_upgrade;        proxy_set_header Connection "upgrade";        proxy_set_header Host \$http_host;        proxy_set_header X-Forwarded-Proto \$scheme;        proxy_set_header X-Forwarded-For \$remote_addr;        proxy_set_header X-Forwarded-Port \$server_port;        proxy_set_header X-Request-Start \$msec;      }        include $DOKKU_ROOT/$APP/nginx.conf.d/*.conf;        # Proxy download       location ~* ^/internal_redirect/(.*?)/(.*) {      # Do not allow people to mess with this location directly      # Only internal redirects are allowed      internal;        # Location-specific logging      access_log logs/internal_redirect.access.log main;      error_log logs/internal_redirect.error.log warn;        # Extract download url from the request      set $download_uri \$2;      set $download_host \$1;        # Compose download url      set $download_url http://\$download_host/\$download_uri;        # Set download request headers      proxy_set_header Host \$download_host;      proxy_set_header Authorization '';        # The next two lines could be used if your storage       # backend does not support Content-Disposition       # headers used to specify file name browsers use       # when save content to the disk      proxy_hide_header Content-Disposition;      add_header Content-Disposition 'attachment; filename="\$args"';        # Do not touch local disks when proxying       # content to clients      proxy_max_temp_file_size 0;        # Download the file and send it to client      proxy_pass \$download_url;    }  }  

The dokku docs tell me to escape '$' with a single \, so I did that.

Can a nginx wiz tell a nginx n00b what is wrong with the above template?

Dokku outputs the following error:

remote: nginx: [emerg] unknown log format "main" in /home/dokku/everseller/nginx.conf:117
remote: nginx: configuration file /etc/nginx/nginx.conf test failed

Thanks!

==== updated conf ====

# Special characters - dollar signs, spaces inside of quotes, etc. -  # should be escaped with a single backslash or can cause deploy failures.    server {    listen      [::]:80;    listen      80;    server_name $NOSSL_SERVER_NAME;    access_log  /var/log/nginx/${APP}-access.log;    error_log   /var/log/nginx/${APP}-error.log;      # set a custom header for requests    # add_header X-Served-By www-ec2-01;      location    / {      proxy_pass  http://$APP;      proxy_http_version 1.1;      proxy_set_header Upgrade $http_upgrade;      proxy_set_header Connection "upgrade";      proxy_set_header Host $http_host;      proxy_set_header X-Forwarded-Proto $scheme;      proxy_set_header X-Forwarded-For $remote_addr;      proxy_set_header X-Forwarded-Port $server_port;      proxy_set_header X-Request-Start $msec;    }      include $DOKKU_ROOT/$APP/nginx.conf.d/*.conf;      # Proxy download     location ~* ^/internal_redirect/(.*?)/(.*) {    # Do not allow people to mess with this location directly    # Only internal redirects are allowed    internal;      # Location-specific logging    access_log logs/internal_redirect.access.log main;    error_log logs/internal_redirect.error.log warn;      # Extract download url from the request    set $download_uri $2;    set $download_host $1;      # Compose download url    set $download_url http://$download_host/$download_uri;      # Set download request headers    proxy_set_header Host $download_host;    proxy_set_header Authorization '';      # The next two lines could be used if your storage     # backend does not support Content-Disposition     # headers used to specify file name browsers use     # when save content to the disk    proxy_hide_header Content-Disposition;    add_header Content-Disposition 'attachment; filename="$args"';      # Do not touch local disks when proxying     # content to clients    proxy_max_temp_file_size 0;      # Download the file and send it to client    proxy_pass $download_url;      }  }  
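On the specific error: nginx only knows a log format called "main" if a log_format main ... directive exists at the http{} level, and the nginx.conf that dokku uses on Debian/Ubuntu does not define one (only the built-in "combined" format is always available). Two hedged ways out, and note that the relative logs/... path will also be resolved against nginx's prefix directory, which may not exist on this box:

# simplest: drop the named format so the built-in 'combined' is used
access_log /var/log/nginx/internal_redirect.access.log;

# or define 'main' yourself in a file included at http{} level,
# e.g. /etc/nginx/conf.d/log_format.conf
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" "$http_user_agent"';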

Why doesn't logstash grab or index the files from the mapped drive?

Posted: 19 Feb 2022 09:04 PM PST

I don't understand why Logstash is so finicky with network resources. I shared a folder on another machine and then mapped it as Z: in Windows Explorer. I've verified the path and everything. I can get Logstash (with the ELK stack) to read local files, but it just doesn't seem to do anything with network or mapped resources.

Is there something insanely simple I'm missing here? Do I need additional arguments for outputting mapped drive inputs to elasticsearch?

input {
  file {
    type => "BbLog"
    path => "Z:/*"
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
}
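One hedged thing to rule out: drive letters mapped in Explorer only exist in that interactive user's session, so a Logstash service running as SYSTEM or another account will never see Z:. Pointing the input at the UNC path directly, with forward slashes, usually works better; the host and share names below are placeholders:

input {
  file {
    type => "BbLog"
    path => "//fileserver01/bblogs/*"
    start_position => "beginning"
  }
}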

Rewrite query string to path params

Posted: 19 Feb 2022 04:04 PM PST

I have the following configuration of nginx that hosts my image service:

    upstream thumbor {          server localhost:8888;      }        server {          listen       80;          server_name  my.imageserver.com;          client_max_body_size 10M;          rewrite_log on;          location ~ /images {               if ($arg_width="10"){                  rewrite ^/images(/.*)$ /unsafe/$1 last;              }              rewrite ^/images(/.*)$ /unsafe/$1 last;          }          location ~ /unsafe {              proxy_set_header X-Real-IP $remote_addr;              proxy_set_header HOST $http_host;              proxy_set_header X-NginX-Proxy true;                proxy_pass http://thumbor;              proxy_redirect off;          }            location = /favicon.ico {              return 204;              access_log     off;              log_not_found  off;          }      }  

I am trying to rewrite the following urls:

from

my.imageserver.com/images/Jenesis/EmbeddedImage/image/jpeg/jpeg/9f5d124d-068d-43a4-92c0-1c044584c54a.jpeg

to

my.imageserver.com/unsafe/Jenesis/EmbeddedImage/image/jpeg/jpeg/9f5d124d-068d-43a4-92c0-1c044584c54a.jpeg

which is quite easy. The problem begins when I want to allow a query string whose values should be mapped into the path of the URL, like so:

from

my.imageserver.com/images/Jenesis/EmbeddedImage/image/jpeg/jpeg/9f5d124d-068d-43a4-92c0-1c044584c54a.jpeg?width=150&height=200&mode=smart

to

/my.imageserver.com/unsafe/150x200/smart/Jenesis/EmbeddedImage/image/jpeg/jpeg/9f5d124d-068d-43a4-92c0-1c044584c54a.jpeg

It would also be better if the order of the query-string parameters didn't matter.

I tried using: $arg_width but it didn't seem to work.

Using nginx 1.6.1 on ubuntu.

Help would be much much appreciated.
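A hedged sketch of one way to do this with the $arg_* variables, which also makes the parameter order irrelevant because each argument is referenced by name (note the spaces around the comparison, and that only rewrite ... last or return are safe inside if in a location block):

location ~ ^/images(/.*)$ {
    # resize request: width/height/mode arrive as query arguments
    if ($arg_width != "") {
        rewrite ^/images(/.*)$ /unsafe/${arg_width}x${arg_height}/${arg_mode}$1? last;
    }
    # plain request: just swap the /images prefix for /unsafe
    rewrite ^/images(/.*)$ /unsafe$1 last;
}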

Limit access on Apache 2.4 to ldap group

Posted: 19 Feb 2022 07:06 PM PST

I've upgraded from Ubuntu 12.04 LTS to 14.04 LTS, and suddenly my Apache 2.4 (previously Apache 2.2) lets everybody into my virtual host, which is unfortunate :-).

What am I doing wrong? Anything with the Order/Allow lines? Any help is greatly appreciated!

Here's my current config:

<VirtualHost *:443>      DavLockDB /etc/apache2/var/DavLock      ServerAdmin admin@mydomain.com      ServerName foo.mydomain.com      DocumentRoot /srv/www/foo        Include ssl-vhosts.conf        <Directory /srv/www/foo>              Order allow,deny              Allow from all                Dav On                Options FollowSymLinks Indexes              AllowOverride None              AuthBasicProvider ldap              AuthType Basic              AuthName "Domain foo"              AuthLDAPURL "ldap://localhost:389/dc=mydomain,dc=com?uid" NONE              AuthLDAPBindDN "cn=searchUser, dc=mydomain, dc=com"              AuthLDAPBindPassword "ThisIsThePwd"              require ldap-group cn=users,dc=mydomain,dc=com                <FilesMatch '^\.[Dd][Ss]_[Ss]'>                      Order allow,deny                      Deny from all              </FilesMatch>                <FilesMatch '\.[Dd][Bb]'>                      Order allow,deny                      Deny from all              </FilesMatch>      </Directory>        ErrorLog /var/log/apache2/error-foo.log        # Possible values include: debug, info, notice, warn, error, crit,      # alert, emerg.      LogLevel warn        CustomLog /var/log/apache2/access-foo.log combined    </VirtualHost>  
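For reference, a hedged 2.4-style rewrite of the access-control part: mixing the 2.2 directives (Order/Allow from all, handled by mod_access_compat) with a Require line after an upgrade is a common reason everyone suddenly gets in, so the usual advice is to drop the 2.2 lines entirely and let mod_authnz_ldap decide:

<Directory /srv/www/foo>
    Dav On
    Options FollowSymLinks Indexes
    AllowOverride None

    AuthBasicProvider ldap
    AuthType Basic
    AuthName "Domain foo"
    AuthLDAPURL "ldap://localhost:389/dc=mydomain,dc=com?uid" NONE
    AuthLDAPBindDN "cn=searchUser,dc=mydomain,dc=com"
    AuthLDAPBindPassword "ThisIsThePwd"
    Require ldap-group cn=users,dc=mydomain,dc=com

    <FilesMatch '^\.[Dd][Ss]_[Ss]'>
        Require all denied
    </FilesMatch>
    <FilesMatch '\.[Dd][Bb]'>
        Require all denied
    </FilesMatch>
</Directory>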

yum says mod_cluster is installed, but the files are not there

Posted: 19 Feb 2022 05:03 PM PST

RHEL 6.5 + JBoss EAP 6.

I installed JBoss EAP 6 from the RHEL repos using groupinstall:

# yum groupinstall "JBoss EAP 6"  

This seems to have worked fine except that the files for mod_cluster are not actually installed, even though yum says mod_cluster is installed. I've tried reinstalling the entire group, reinstalling just mod_cluster, clearing the yum caches, etc. I'm still digging, but so far I am at a loss, and RHEL support hasn't been helpful yet.
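A hedged way to compare what rpm thinks the package contains with what is actually on disk (plain rpm queries, nothing JBoss-specific):

rpm -q mod_cluster         # confirm the exact package/version yum installed
rpm -ql mod_cluster        # list every file the package claims to own
rpm -V mod_cluster         # verify: "missing" lines are files that are gone
yum reinstall mod_cluster  # re-lay the files if verification shows them missing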

Slowloris on Apache: is mod_reqtimeout + mod_qos enough?

Posted: 19 Feb 2022 08:07 PM PST

I detected a few days ago that my server was under a Slowloris attack (I found a lot of "-" 408 0 "-" "-" entries in my access.log).

I changed my configuration like this:

In mod_reqtimeout:

RequestReadTimeout header=5-20,minrate=20  

I installed mod_qos and configured it like that:

QS_SrvMaxConnPerIP 50
QS_SrvMinDataRate 120 1500

Is that enough? Most of the available tutorials just leave the default values in the configuration files.

I noticed that the number of "-" 408 0 "-" entries has now increased a lot. I suppose that's good, because it means more connections are detected as malicious and are closed before they can "damage" the server. Right?

Can I do anything more? Block the offending IPs?

Thanks in advance for any feedbacks!

Mapped Drives not connecting immediately after hard boot

Posted: 19 Feb 2022 11:01 PM PST

We have an issue with one computer (Win8) at work where, after a cold boot, the mapped drives show up but don't connect. This is not normally an issue, as the user can enter the drives (despite the red X), but in this instance the user runs software that accesses a mapped drive, and the software/Windows won't allow it because the drive is reported as not connected. This means the user has to restart the computer, after which the soft boot connects the drives automatically.

All of the user's mapped drives are connected through a .bat file.

This probably indicates that the mapped drives are trying to connect before the network connection is actually up. We tried this fix:

Local Computer Policy > Computer Configuration > Administrative Templates > System > Logon > Enable: Always wait for the network at computer startup and logon

It worked for a few days, but now the issue has crept back in.

Any ideas?

Setting up default SSL site on IIS8

Posted: 19 Feb 2022 09:20 PM PST

I have set up a few websites on IIS 8, all using the same wildcard SSL certificate. Some of the sites need to be accessible to older browsers and operating systems, so I cannot use the "Require Server Name Indication" option.

Since SNI is not supported by all devices, IIS is showing the following alert:

"No default SSL site has been created. To support browsers without SNI capabilities, it is recommended to create a default SSL site."

How do I create a default SSL site? The closest article I found is not very clear, and I have the feeling that there must be an easier solution.

Server details: Windows Server 2012, IIS8, One external IP address

Error code 0x80070035. The network path was not found

Posted: 19 Feb 2022 10:00 PM PST

My system is on a local LAN with 30 PCs. I'm not able to access the shared drive on the network, but I am able to ping the IP address of the machine hosting the drive. I have checked that all the required services are started and checked the TCP/UDP ports as well, but I'm still not able to access the drive; the same error message is displayed again and again. I've been trying to solve this for the last week and have tried various solutions from various websites, but none of them worked. Please help me rectify the problem.

Get list of transferred files from rsync?

Posted: 19 Feb 2022 11:10 PM PST

I'm currently using rsync in a script that deploys a PHP application from a staging server to a production server. Here is how:

rsync -rzai --progress --stats --ignore-times --checksum /tmp/app_export/ root@app.com:/var/www/html/app/  

This is currently outputting a list of every file that's being compared (every file in the project), but I'd like it to output only the modified ones, so I can run it with a --dry-run option to check that every deploy is updating only the desired files.

NOTE: The best I could do so far is grep fcst the results, but I'm looking for an rsync option that I'm sure exists, yet I can't find it in the man pages.
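A hedged variant of the command that tends to give just the change list: --ignore-times forces every file to be treated as changed (so -i itemizes everything), while dropping it and keeping --checksum means only files whose content actually differs are listed; the leading < marks files that would be sent:

rsync -rza --checksum --itemize-changes --dry-run /tmp/app_export/ root@app.com:/var/www/html/app/ | grep '^<'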

Thanks in advance!

Cyrus with SASL authentication keeps appending hostname

Posted: 19 Feb 2022 05:03 PM PST

I am currently in the process of setting up a new Cyrus mailserver and running into quite a funny paradox. I am trying to use the auxprop pw_check mechanism to let Cyrus read /etc/sasldb2 for user authentication.

For some reason, when creating a new user, saslpasswd2 keeps appending my hostname to the username I am creating. This is not a problem in itself; it only means my users will need to log in as username@mydomain.org.

This is where the fun starts. When I try to authenticate via an IMAP client to my Cyrus server, Cyrus logs 'badlogin: mydomain.org [127.0.0.1] plaintext pieter SASL(-13): user not found: checkpass failed'. Cyrus seems to strip off the @mydomain.org part, as it is configured to be the default hostname.

This leaves me in the predicament of being unable to create users that can authenticate to Cyrus. Has anyone else faced this problem?

apache tomcat IE caching problem

Posted: 19 Feb 2022 07:06 PM PST

I'm running apache2 and tomcat6, both on port 80, with mod_jk set up on Ubuntu servers (8.10, 9.10). Tomcat serves the JSP pages. I have a small problem with IE: it doesn't cache but simply reloads all the images (jpg|png|css) when the page is refreshed, which does not happen with other browsers. I also tried appending the following to the Apache config file, but nothing changed.

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpg "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/gif "access plus 1 month"
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType image/css "access plus 1 month"
    ExpiresByType text/html "access plus 1 month"
</IfModule>

/etc/apache2/apache2.conf file:

Alias / /var/www/
ErrorDocument 503 /maintenance.html
ErrorDocument 404 /maintenance.html
JkMount / myworker
JkMount /* myworker
JkMount /*.jsp myworker
JkUnMount /*.html myworker

<VirtualHost *:80>
    ServerName station1.mydomain.com
    DocumentRoot /usr/share/tomcat/webapps/myapps1
    JkMount /* myworker
    JkUnMount /*.html myworker
</VirtualHost>

<VirtualHost *:80>
    ServerName station2.mydomain.com
    DocumentRoot /usr/share/tomcat/webapps/myapps2
    JkMount /* myworker
    JkMount /*.html myworker
</VirtualHost>

Does anybody have a trick to make IE cache the images instead of reloading them every time?
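A hedged thing to test: with JkMount /* myworker, the images and stylesheets are handed to Tomcat, so it is worth checking (e.g. with curl -I) whether the Expires/Cache-Control headers from the mod_expires block actually reach the browser for those URLs. If they do not, unmounting the static extensions so Apache serves them directly from the DocumentRoot is one option; setting the cache headers in Tomcat instead is another:

JkUnMount /*.jpg  myworker
JkUnMount /*.jpeg myworker
JkUnMount /*.gif  myworker
JkUnMount /*.png  myworker
JkUnMount /*.css  myworker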

How can I set environment variable for just one command in fish shell?

Posted: 19 Feb 2022 07:38 PM PST

In bash, I can do EDITOR=vim crontab -e. Can I get a similar effect in the fish shell?
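The portable equivalent is the env wrapper, which works in any fish version (newer fish releases also accept the bash-style VAR=value prefix directly):

env EDITOR=vim crontab -e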
