Friday, April 9, 2021

Recent Questions - Server Fault

Troubleshooting block device mappings with Packer to increase size of EBS volume

Posted: 09 Apr 2021 08:01 PM PDT

I'd like to understand more about what to do when you need to increase a block device's size for a build. I have some examples that work successfully with Ubuntu 18 and CentOS 7, but it seems like each time I'm in a slightly new scenario the old techniques don't apply, probably due to a gap in my knowledge of what is required.

Building an AMI with NICE DCV (remote display):

name: DCV-AmazonLinux2-2020-2-9662-NVIDIA-450-89-x86_64  

I tried this in my packer template:

  launch_block_device_mappings {
    device_name           = "/dev/sda1"
    volume_size           = 16
    volume_type           = "gp2"
    delete_on_termination = true
  }

But then SSH never became available. I reverted to not doing any customisation and took a look at the mounts:

[ec2-user@ip-172-31-39-29 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        482M     0  482M   0% /dev
tmpfs           492M     0  492M   0% /dev/shm
tmpfs           492M  740K  491M   1% /run
tmpfs           492M     0  492M   0% /sys/fs/cgroup
/dev/xvda1      8.0G  4.4G  3.7G  55% /
tmpfs            99M  8.0K   99M   1% /run/user/42
tmpfs            99M     0   99M   0% /run/user/1000

So I thought I should try this instead (which didn't work):

  launch_block_device_mappings {
    device_name           = "/dev/xvda1"
    volume_size           = 20
    volume_type           = "gp2"
    delete_on_termination = true
  }

which resulted in these errors:

==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Launching a source AWS instance...
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Adding tags to source instance
    amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Adding tag: "Name": "Packer Builder"
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Error launching source instance: InvalidBlockDeviceMapping: Invalid device name /dev/xvda1
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami:         status code: 400, request id: 2c419989-dd53-48ef-bcb6-6e54e892d152
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: No volumes to clean up, skipping
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Deleting temporary security group...
==> amazon-ebs.amazonlinux2-nicedcv-nvidia-ami: Deleting temporary keypair...
Build 'amazon-ebs.amazonlinux2-nicedcv-nvidia-ami' errored after 1 second 820 milliseconds: Error launching source instance: InvalidBlockDeviceMapping: Invalid device name /dev/xvda1
        status code: 400, request id: 2c419989-dd53-48ef-bcb6-6e54e892d152

Would anyone be able to share some pointers on what I'd need to do to increase the volume size in this instance and why?
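For reference, one way to see which root device name the source AMI actually registers (the mapping must match it exactly; note the error above rejects the partition name /dev/xvda1, whereas AMIs typically register the whole device, e.g. /dev/xvda). A minimal sketch using the AWS CLI; the image ID is a placeholder:

# Show the AMI's registered root device and existing mappings.
# The image ID below is a placeholder -- substitute the source AMI.
aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
  --query 'Images[0].{Root:RootDeviceName,Mappings:BlockDeviceMappings[].DeviceName}'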

Runuser Not Returning Expected Results

Posted: 09 Apr 2021 06:55 PM PDT

There are two accounts on my Linux computer, appuser and root. For testing purposes I wrote the line below into /home/appuser/.profile:

export key=value  

Then, as root, I executed the commands below separately:

  1. runuser -l appuser -c "echo key=$key"

     expected: key=value
     result:   key=

  2. runuser -l appuser -c "/home/appuser/test.sh"

     test.sh includes only two lines:

     #!/bin/bash
     echo "key=$key"

     expected: key=value
     result:   key=value

Why doesn't the first command return the expected result? Is it because .profile is not sourced during execution? Why?
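A plausible explanation (a sketch, not verified on this exact system): in the first command, the double quotes let root's shell expand $key before runuser ever starts appuser's login shell, so the empty value is baked into the command string; in the script, $key is evaluated inside the login shell after .profile has been sourced. Escaping the dollar sign defers the expansion:

# Root's shell expands $key first (it is empty for root); escaping it
# passes the literal $key through to appuser's login shell, which
# sources .profile before evaluating it.
runuser -l appuser -c "echo key=\$key"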

Configuring Squid to not log TCP connections (lots of "error:transaction-end-before-headers" showing up in logs)

Posted: 09 Apr 2021 05:24 PM PDT

We run Squid proxies in GCP and are in the process of migrating from CentOS 7 to 8. I'm working on using the GCP internal L4 load balancer to improve redundancy/failover, and I have configured a basic TCP health check, which is working fine.

However, it looks like Squid version 4 logs these TCP connections. So every 10 seconds I get 3 entries added to /var/log/squid/access.log:

1618013711.836      0 35.191.10.117 NONE/000 0 NONE error:transaction-end-before-headers - HIER_NONE/- -
1618013712.256      0 35.191.9.223 NONE/000 0 NONE error:transaction-end-before-headers - HIER_NONE/- -
1618013712.484      0 35.191.10.121 NONE/000 0 NONE error:transaction-end-before-headers - HIER_NONE/- -

This would generate 25,920 log lines a day, which I'd like to avoid. Is there a way to configure Squid not to do this? The default squid.conf file didn't have much in the way of documentation/explanation.
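One avenue worth exploring (a hedged sketch, assuming Squid 4's "has" ACL type and stock paths): log only transactions that actually carried an HTTP request, which should exclude the load balancer's bare TCP probes:

# Append to squid.conf: only log transactions that have a request.
cat >> /etc/squid/squid.conf <<'EOF'
acl hasRequest has request
access_log daemon:/var/log/squid/access.log squid hasRequest
EOF
squid -k reconfigure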

NSG requirements to sync Active Directory installed on AWS EC2 with Azure Active Directory when an AWS-Azure site-to-site VPN is set up using openswan

Posted: 09 Apr 2021 03:57 PM PDT

Setup details:

  1. Installed openswan on VMs in AWS and Azure, configured ipsec.conf, and established an IPsec tunnel between Azure and AWS.
  2. The tunnel is up and I can ping the openswan VPN server on the other end from Azure as well as from AWS.
  3. Azure Active Directory is configured with a custom domain and verified.
  4. I will install Active Directory on Windows 2019 (a domain controller) with the same domain name as the Azure Active Directory custom domain name.
  5. I will download and install the AD Connect tool on the AWS EC2 domain controller.

Questions:

  1. What should I have on the NSG attached to the openswan VPN server subnet on Azure to allow Active Directory sync from AWS EC2 to Active Directory on Azure? (See the sketch below.)
  2. What else is required to allow/route traffic from AWS EC2 to Azure Active Directory, DNS, etc.?
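As a starting point, a hedged sketch of an NSG rule opening the usual AD DS ports from the AWS side (all names and CIDRs below are placeholders; note that Azure AD Connect itself syncs to Azure AD over outbound HTTPS 443, which default NSG rules already permit):

# Placeholder names/CIDRs -- adjust to the actual resource group,
# NSG, and the AWS-side address space.
az network nsg rule create \
  --resource-group my-rg --nsg-name vpn-subnet-nsg \
  --name allow-aws-ad --priority 200 --direction Inbound \
  --access Allow --protocol '*' \
  --source-address-prefixes 10.0.0.0/16 \
  --destination-port-ranges 53 88 135 389 445 464 636 3268-3269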

Ansible management of frequently off machines

Posted: 09 Apr 2021 03:50 PM PDT

I'd like to use Ansible to manage configuration on about a dozen Linux workstations. The problem is that, being physical machines that people use, they're frequently off. So I need a way to ensure that, as I change the centralized configuration, it not only gets pushed out to machines that are up, but also eventually gets pushed to machines that come up weeks from now.

The idea I've had, and I'm open to better ones, is to have a cron job run @reboot to SSH into the server and request a playbook run against itself.

That seems to mean I need to:

  • Create a deploy user on the server (deploy@server)
  • Create a deploy user on each workstation (deploy@ws) and give them sudo powers.
  • Add the public key for deploy@server to the authorized_keys of deploy@ws
  • Generate a new SSH key for deploy@ws, and add that newly generated SSH key to the authorized_keys of deploy@server
  • Set up a workstation cron job to run as deploy@ws to SSH into the server and run ansible-playbook with a --limit of the workstation in question. That job runs as deploy@server, who then SSHs back in as deploy@ws to actually do the configuration.

That feels more than a little convoluted. Is there some more straightforward solution for this that I'm missing?
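One simpler pattern worth considering (a sketch; the repo URL and playbook name are placeholders): ansible-pull inverts the flow, so each workstation fetches the playbook from a git repository at boot and applies it to itself, and no bidirectional SSH key exchange is needed:

# /etc/cron.d/ansible-pull on each workstation. ansible-pull clones
# the repo and runs the named playbook against localhost.
@reboot root ansible-pull -U https://git.example.com/config.git local.yml >> /var/log/ansible-pull.log 2>&1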

Change language properties 7z sfx self extracting .exe with 7zSFX Builder

Posted: 09 Apr 2021 04:36 PM PDT

I use the 7zSFX Builder program to build my executable .exe from a .7z, and even though I always select English in the settings (and everything else), the properties of the executable show Russian.


7zSFX Builder was abandoned and is no longer supported, so I will not get a response from the creator or the forum. So I tried 7zSFX Constructor, 7-ZIP SFX Maker, and Make SFX, but they don't allow adding or editing information in the executable's properties.

Here it says to use Resource Hacker. But the problem with this editor (no updates since 2020) is that it does not work for files larger than 1 GB (System Error. Code: 8), and my .exe is 2 GB. So I tried ResEdit, but this editor corrupts files of that size.

I managed to change the language only with Resource Tuner.

To avoid using third-party programs, is there any way to fix the 7zSFX Builder issue so that I can build my .exe with the English language? Thanks.

Caching freebsd-update process to upgrade using poudriere

Posted: 09 Apr 2021 03:38 PM PDT

Good afternoon. I would like to upgrade a FreeBSD server without having to go out to the internet, instead downloading precached files for the whole freebsd-update upgrade process from poudriere. Is that possible?

Is it possible to have one IP on the main domain and a different IP on a path under it

Posted: 09 Apr 2021 01:35 PM PDT

Let's say I have a site, example.com, which points to 111.11.111.11.

Would it be possible for me to have example.com/landing-page on 222.22.222.22?

If so, what kind of DNS record would I have to add?
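No DNS record can do this on its own: DNS resolves hostnames to addresses, and the /landing-page path never reaches the resolver. What can work is a redirect or reverse proxy on the first host. A minimal sketch of the relevant block, assuming nginx already serves example.com on 111.11.111.11 (add it inside that server block):

location /landing-page/ {
    # hand this path off to the second machine; the visitor's URL
    # stays on example.com
    proxy_pass http://222.22.222.22/;
}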

Windows Domain Group Membership Not Propagating Fully to Workstation

Posted: 09 Apr 2021 03:56 PM PDT

Perhaps I should ask two or three questions, but since these are all related, here they are. Situation: multiple synchronized Windows 2019 Standard DCs, with Windows 10 Pro workstations joined to the domain. We are changing group memberships with a script on a DC. This seems to work fine except on one workstation/user.
As a practical matter, if necessary we prefer to restart the computer and log the user on rather than doing a logoff/logon of the user. For this purpose I don't see that this would matter (?).

Using:

whoami.exe /groups
net groups User /domain

DC replication is done as part of the change process and appears to be doing what it should. On one workstation/user, after the group membership change, the whoami command seems to "stick" at the old state. The net groups command changes to the new state. The lingering state shown by the whoami command appears to be the operative state.

So, here are the questions:

  1. When the whoami command is given, where is the data that it will use to report? (e.g. on the DC or on the workstation?)
  2. Is there more than one possible source for this information? (e.g. a cache, perhaps)
  3. When the net groups command is given, where is the data that it will use to report? (e.g. on the DC or on the workstation?)
  4. Is there more than one possible source for this information? (e.g. a cache, perhaps)

I've tried klist purge, restarting the computer, and logging the user back on.

So, a final question: how do I force the state reported by whoami /groups to update?

GPO desktop wallpaper "sometimes" black

Posted: 09 Apr 2021 07:04 PM PDT

I've set up the client desktop background via a GPO Administrative Template, in user configuration. It works fine, but sometimes users randomly report that they get a black wallpaper instead of the designated one. I checked the Event Viewer but couldn't find any error about that policy.

GPO screenshot

99% of the users have laptops and travel, so they don't always have the domain server available for fetching the image. So I thought I'd change the policy to copy the background locally onto the computer and set it from the C: drive, but the black wallpaper happened anyway! Domain availability also seems not to be the issue, as the problem occurs even when users are on the LAN, and whenever I checked, the domain was correctly reachable via its DNS name.

I know it's hard to debug such an issue without a log, but I was wondering if there's a mistake or something else which could lead to this result.

The only problem in the network is that some DCs have been taken offline without being correctly removed from AD; they're still linked. But all DHCP servers have been set up to use the remaining DC as primary DNS (and Google as fallback). Could this have some influence on GPO application?

Thanks

Application alias rewrite using apache

Posted: 09 Apr 2021 03:03 PM PDT

I am new to Apache and I am trying to rewrite an application's default alias to a more elegant one using Apache, but unfortunately I'm struggling and can't find a way to do it.

Can you help me, please?

What I want:

  • The server name is (for example) server0123456.com
  • My app is located at server0123456.com:8080/myapp_backend
  • A DNS alias (CNAME?) was created for the server: mycompany.com
  • mycompany.com/myapp_backend works well (with RewriteRule ^(.*)$ /myapp_backend/ [R=302,L,NE])
  • But I want the URL to be mycompany.com/myapp, which is more elegant
  • If mycompany.com is visited, it must redirect to mycompany.com/myapp

Here is what I tried.

My vhost file:

<VirtualHost *:80>
 ServerName mycompany.com
 DocumentRoot "/hawai/composants/apache/htdocs"

  <Directory />
      Order deny,allow
      Allow from all
      Options FollowSymLinks
      AllowOverride All
  </Directory>

  EnableMCPMReceive

  RewriteEngine On

  RewriteCond %{REQUEST_METHOD} !^STATUS.*$
  RewriteCond %{REQUEST_METHOD} !^INFO.*$
  RewriteCond %{REQUEST_METHOD} !^CONFIG.*$
  RewriteCond %{REQUEST_METHOD} !^ENABLE-APP$
  RewriteCond %{REQUEST_METHOD} !^DISABLE-APP$
  RewriteCond %{REQUEST_METHOD} !^STOP-APP$
  RewriteCond %{REQUEST_METHOD} !^STOP-APP-RSP$
  RewriteCond %{REQUEST_METHOD} !^REMOVE-APP$
  RewriteCond %{REQUEST_METHOD} !^STATUS-RSP$
  RewriteCond %{REQUEST_METHOD} !^INFO-RSP$
  RewriteCond %{REQUEST_METHOD} !^DUMP$
  RewriteCond %{REQUEST_METHOD} !^DUMP-RSP$
  RewriteCond %{REQUEST_METHOD} !^PING$
  RewriteCond %{REQUEST_METHOD} !^PING-RSP$
  RewriteCond %{REQUEST_URI} !^/test.html.*$
  RewriteCond %{REQUEST_URI} !^/cgi-bin.*$
  RewriteCond %{REQUEST_URI} !^/mod_cluster_manager/.*$
  RewriteCond %{REQUEST_URI} !^/server-status/.*$
  RewriteCond %{REQUEST_URI} !^/maintenance.html$
  RewriteCond %{REQUEST_URI} !^/static_myapp/.*$
  RewriteCond %{REQUEST_URI} !^/myapp_backend/.*$
  RewriteRule ^(.*)$ /myapp/ [R=302,L,NE]

</VirtualHost>

My location file:

<Location "/myapp_backend/">    Options None    AllowOverride None    Order deny,allow    Allow from all          # mod_deflate    #AddOutputFilterByType DEFLATE text/html text/plain text/xml       SetOutputFilter DEFLATE      # Netscape 4.x has some problems...      BrowserMatch ^Mozilla/4 gzip-only-text/html      # Netscape 4.06-4.08 have some more problems      BrowserMatch ^Mozilla/4\.0[678] no-gzip      # MSIE masquerades as Netscape, but it is fine      # BrowserMatch \bMSIE !no-gzip !gzip-only-text/html      # NOTE: Due to a bug in mod_setenvif up to Apache 2.0.48      # the above regex won't work. You can use the following      # workaround to get the desired effect:      BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html      # Don't compress images      SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary       SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|rar)$ no-gzip dont-vary      SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary      # Make sure proxies don't deliver the wrong content      Header append Vary User-Agent env=!dont-vary  </Location>    DeflateFilterNote Ratio ratio  DeflateCompressionLevel 7  

Do I need to add a ProxyPass? Thank you very much for your help.
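For what it's worth, a hedged sketch of one way to get there (host and port taken from the question; needs mod_proxy and mod_proxy_http): map /myapp onto the backend context with a reverse proxy, so the browser-visible URL never shows /myapp_backend:

# Redirect the bare host to the friendly path...
RewriteRule ^/?$ /myapp/ [R=302,L,NE]

# ...and serve /myapp by proxying to the real backend context,
# without changing the URL the browser sees.
ProxyPass        /myapp/ http://server0123456.com:8080/myapp_backend/
ProxyPassReverse /myapp/ http://server0123456.com:8080/myapp_backend/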

Why does Ubuntu 20.04 not support MySQL 5.7.33?

Posted: 09 Apr 2021 02:43 PM PDT

Today I tried to upgrade Ubuntu from 18.04 LTS to 20.04 LTS with do-release-upgrade.

After the upgrade completed, I tried to start MySQL with service mysql start.

The result is

Job for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details.

And then I tried to check with systemctl status mysql.service

The result is

mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2021-04-09 17:45:14 +07; 1min 52s ago
Process: 13824 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=1/FAILURE)

Apr 09 17:45:14 systemd[1]: Failed to start MySQL Community Server.
Apr 09 17:45:14 systemd[1]: mysql.service: Scheduled restart job, restart counter is at 5.
Apr 09 17:45:14 systemd[1]: Stopped MySQL Community Server.
Apr 09 17:45:14 systemd[1]: mysql.service: Start request repeated too quickly.
Apr 09 17:45:14 systemd[1]: mysql.service: Failed with result 'exit-code'.
Apr 09 17:45:14 systemd[1]: Failed to start MySQL Community Server.

I cannot access MySQL anymore because the service cannot start.

I do not want to lose any databases, and I do not have any backup either.

Is backing up /var/lib/mysql enough? (Is there anything else I should back up?)
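As a precaution before experimenting further, a minimal cold-backup sketch (stock Debian/Ubuntu paths; the data directory plus the config under /etc/mysql covers the usual cases):

# Take a cold copy of the data directory while mysqld is stopped.
systemctl stop mysql
tar czf /root/mysql-backup-$(date +%F).tar.gz /var/lib/mysql /etc/mysql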

I am very confused. What should I do next?

  1. Downgrade to Ubuntu 18.04, or
  2. Find a way to reinstall MySQL 5.7 on Ubuntu 20.04, or
  3. Try upgrading MySQL to 8 (but I cannot start the MySQL service this time).

Why does Ubuntu 20.04 not support MySQL 5.7.33?

Thanks for any help.

Unable to ssh to Google Compute Engine

Posted: 09 Apr 2021 01:49 PM PDT

I used to log in to the instance through GCE web UI by clicking "Open in web browser".

Recently I tried to log in to the instance the same way, but the window just keeps showing "connecting" and doesn't do anything.

I tried SSH from Google Cloud Shell, and what I get is:

USERNAME@cloudshell:~ (voltaic-phalanx-786)$ gcloud compute ssh --zone "asia-east1-c" "newforum" --project "voltaic-phalanx-786" --ssh-flag="-vvv"
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1d  10 Sep 2019
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolve_canonicalize: hostname X.X.X.X is address
debug2: ssh_connect_direct
debug1: Connecting to X.X.X.X [X.X.X.X] port 22.
debug1: Connection established.
debug1: identity file /home/USERNAME/.ssh/google_compute_engine type 0
debug1: identity file /home/USERNAME/.ssh/google_compute_engine-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u2
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
debug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to X.X.X.X:22 as 'USERNAME'
debug1: using hostkeyalias: compute.xxx
debug3: hostkeys_foreach: reading file "/home/USERNAME/.ssh/google_compute_known_hosts"
debug3: record_hostkey: found key type ECDSA in file /home/USERNAME/.ssh/google_compute_known_hosts:1
debug3: load_hostkeys: loaded 1 keys from compute.xxx
debug3: order_hostkeyalgs: prefer hostkeyalgs: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
debug3: receive packet: type 20
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c
debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com,zlib
debug2: compression stoc: none,zlib@openssh.com,zlib
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1
debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com
debug2: compression stoc: none,zlib@openssh.com
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
USERNAME@X.X.X.X: Permission denied (publickey).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].

I tried restarting the instance. The last few lines of the serial port output are:

Apr  6 14:14:45 newforum systemd[1]: Starting Google Compute Engine Startup Scripts...
Apr  6 14:14:46 newforum GCEMetadataScripts[1575]: 2021/04/06 14:14:46 GCEMetadataScripts: Starting startup scripts (version 20201217.02-0ubuntu1~18.04.0).
Apr  6 14:14:46 newforum GCEMetadataScripts[1575]: 2021/04/06 14:14:46 GCEMetadataScripts: Found startup-script in metadata.
Apr  6 14:14:46 newforum GCEMetadataScripts[1575]: 2021/04/06 14:14:46 GCEMetadataScripts: startup-script: Skipping adding existing rule
Apr  6 14:14:46 newforum systemd[1]: Started Google Compute Engine Startup Scripts.
Apr  6 14:14:46 newforum systemd[1]: Startup finished in 6.585s (kernel) + 17.727s (userspace) = 24.313s.

Ubuntu 18.04.5 LTS newforum ttyS0

newforum login: Apr  6 14:15:14 newforum snapd[1048]: stateengine.go:150: state ensure error: Get https://api.snapcraft.io/api/v1/snaps/sections: dial tcp: lookup api.snapcraft.io on [::1]:53: read udp [::1]:60506->[::1]:53: read: connection refused
Apr  6 14:15:16 newforum snapd[1048]: daemon.go:589: gracefully waiting for running hooks
Apr  6 14:15:16 newforum snapd[1048]: daemon.go:591: done waiting for running hooks
Apr  6 14:15:16 newforum snapd[1048]: daemon stop requested to wait for socket activation
Apr  6 14:29:37 newforum systemd[1]: Starting Cleanup of Temporary Directories...
Apr  6 14:29:37 newforum systemd[1]: Started Cleanup of Temporary Directories.

And I still can't log in to the instance. What can I do?
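Given the "Permission denied (publickey)" above, a couple of hedged things to try from Cloud Shell (instance, zone, and project taken from the question): force gcloud to regenerate and re-push its key, and inspect whether the instance metadata actually carries the expected keys:

# Regenerate the gcloud-managed key and push it to instance metadata.
gcloud compute ssh newforum --zone asia-east1-c \
  --project voltaic-phalanx-786 --force-key-file-overwrite

# Inspect instance metadata for ssh-keys / OS Login settings.
gcloud compute instances describe newforum --zone asia-east1-c \
  --project voltaic-phalanx-786 --format='value(metadata.items)'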

How to create backup/snapshot of Unmanaged OS disk in azure?

Posted: 09 Apr 2021 06:07 PM PDT

I have a virtual machine with an associated unmanaged OS disk.

If my OS disk were of the managed type, the portal would give me the option to take a snapshot of it. But for an unmanaged disk it does not give me any option to take a snapshot.

Could anyone please tell me how to take a backup of my unmanaged OS disk?

Also, why do two storage accounts get created when I have an unmanaged disk associated with my VM?
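For reference, a hedged sketch: an unmanaged OS disk is a page blob (a .vhd) in a storage account, so a blob snapshot is one way to capture it. The account, container, and blob names below are placeholders:

# Snapshot the VHD page blob backing the unmanaged OS disk. The real
# names can be read from the VM's storage account (often a "vhds"
# container).
az storage blob snapshot \
  --account-name mystorageacct \
  --container-name vhds \
  --name myvm-osdisk.vhd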

How to create docker ingress network with ipv6 support

Posted: 09 Apr 2021 05:46 PM PDT

I am trying to figure out an issue with my docker network setup (docker containers give out a blank IPv6 address).

I am not able to reach the service over IPv6 on localhost; I have to use curl -4 http://localhost:8080 instead of curl http://localhost:8080.

While investigating I found that the docker ingress network did not have IPv6 enabled, so I removed the old one and created a new ingress network with an IPv6 address:

sunils@sunils-pc ~ $ docker network inspect ingress
[
    {
        "Name": "ingress",
        "Id": "8sn7034q646ayadix9nmsmv50",
        "Created": "2018-09-29T04:42:10.857389865Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.11.0.0/24",
                    "Gateway": "172.11.0.1"
                },
                {
                    "Subnet": "2002:ac0b:0000::/48",
                    "Gateway": "2002:ac0b::1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4106,4107"
        },
        "Labels": null
    }
]

I also created a new overlay network, ipv6_overlay, that I would use from my containers:

sunils@sunils-pc ~ $ docker network inspect ipv6_overlay
[
    {
        "Name": "ipv6_overlay",
        "Id": "n7fv85sqhm0wd1ekpo8evnit2",
        "Created": "2018-09-29T06:47:17.363996665Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.10.0.0/24",
                    "Gateway": "172.10.0.1"
                },
                {
                    "Subnet": "2002:ac0a:0000::/48",
                    "Gateway": "2002:ac0a::1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4113,4114"
        },
        "Labels": null
    }
]

I configured my containers to use the overlay network ipv6_overlay with predefined IP addresses:

sunils@sunils-pc /mnt/share/sunils/repos/github/ec2-sample-docker $ cat docker-compose.yml
version: '3.2'

services:
    sessions:
        image: redis:4
        ports:
            - 6379:6379
        networks:
            web:
                ipv4_address: 172.10.0.10
                ipv6_address: 2002:ac0a:0000::10
                aliases:
                    - redis

    cowsay-service:
        image: spsarolkar/cowsay
        ports:
            - 8000
        environment:
            - SERVICE_PORTS=8000
        deploy:
            replicas: 5
            restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 120s
        networks:
            web:
                ipv4_address: 172.10.0.9
                ipv6_address: 2002:ac0a:0000::9

    cowsay-proxy:
        image: dockercloud/haproxy
        depends_on:
            - cowsay-service
        environment:
            - BALANCE=leastconn
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - "8000:80"
        networks:
            web:
                ipv4_address: 172.10.0.8
                ipv6_address: 2002:ac0a:0000::8
                aliases:
                    - cowsay

    cowsay-ui:
        image: spsarolkar/cowsay-ui
        ports:
            - "[::1]:8080:8080"
        depends_on:
            - redis
            - cowsay
        networks:
            web:
                ipv4_address: 172.10.0.7
                ipv6_address: 2002:ac0a:0000::7

networks:
    web:
        external:
            name: ipv6_overlay

But when I start my swarm services I get this error in /var/log/docker.log:

time="2018-09-29T12:09:19.693307864+05:30" level=error msg="fatal task error" error="Invalid address 2002:ac0b::2: It does not belong to any of this network's subnets" module=node/agent/taskmanager node.id=luqw5to6dike43h88h25xj7tg service.id=tfttw36jqmsq3ew6wzn61gyku task.id=7jo89apxj585pdtacmr2d7jpe  

I am not sure why, even when I specify my own overlay network in the compose file, it falls back to the ingress network. I get the same error when I do not specify any IP addresses.

Can someone please help me with this?
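One hedged observation: the rejected address 2002:ac0b::2 is from the ingress subnet, and publishing ports attaches a service to ingress regardless of its app-level networks. A sketch of recreating ingress with explicit, matching subnets (values mirror the inspect output above; no services may be using ingress at the time):

# Recreate the swarm ingress network with explicit IPv4/IPv6 subnets.
docker network rm ingress
docker network create --driver overlay --ingress --ipv6 \
  --subnet 172.11.0.0/24 --gateway 172.11.0.1 \
  --subnet 2002:ac0b::/48 --gateway 2002:ac0b::1 \
  ingress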

How to change default of max open files per process?

Posted: 09 Apr 2021 07:03 PM PDT

I changed the max open files to 20000. However, I'm still running into limits, and I found that there is a per-process limit. How do I change the default per-process limit too?

ubuntu@ip-172-16-137-139:~$ cat /proc/1237/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             31538                31538                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       31538                31538                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

ubuntu@ip-172-16-137-139:~$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31538
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 20000               <- changed this
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 31538
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
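A hedged sketch of the usual knobs: ulimit -n only changes the current shell, while running processes keep the limits they inherited from their parent (often systemd, hence the 1024/4096 pair still visible in /proc/1237/limits):

# Per-user defaults for PAM login sessions:
echo '* soft nofile 20000' >> /etc/security/limits.conf
echo '* hard nofile 20000' >> /etc/security/limits.conf

# For systemd-managed services, raise the default instead, e.g. in
# /etc/systemd/system.conf:
#     DefaultLimitNOFILE=20000
# then re-execute systemd and restart the affected service:
systemctl daemon-reexec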

How can I have fail2ban log to an external syslog server?

Posted: 09 Apr 2021 05:07 PM PDT

I'm running an Ubuntu 16.04 web server (running Webmin). I also have Graylog running on a separate server on my LAN.

I'd like to have fail2ban log to /var/log/fail2ban.log as well as to my external syslog server, but I'm not sure how to do this.

In the fail2ban config file there's an option to change logtarget = /var/log/fail2ban.log to logtarget = SYSLOG, but I'm not sure how to get that output over to the other server.
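A hedged sketch of one common arrangement: switch fail2ban's logtarget to SYSLOG, then let rsyslog both keep the local file and forward matching messages to the Graylog host (the hostname below is a placeholder):

# Keep a local copy and forward fail2ban's messages to the remote
# collector (UDP 514 here; use @@host:port for TCP instead).
cat > /etc/rsyslog.d/60-fail2ban.conf <<'EOF'
:programname, isequal, "fail2ban" /var/log/fail2ban.log
:programname, isequal, "fail2ban" @graylog.example.lan:514
EOF
systemctl restart rsyslog fail2ban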

Terraform + VMWare - can't provision a server

Posted: 09 Apr 2021 09:05 PM PDT

I'm trying to do some really basic stuff with Terraform on VMware. I'm pretty sure I must be doing something obviously wrong, as this is a really simple use case.

I have a test.tf file like this:

# Configure the VMware vSphere Provider
provider "vsphere" {
  user           = "${var.vsphere_user}"
  password       = "${var.vsphere_password}"
  vsphere_server = "${var.vsphere_server}"

  # if you have a self-signed cert
  allow_unverified_ssl = true
}

# Create a folder
resource "vsphere_folder" "test_folder" {
  path = "test_folder"
  datacenter = "Datacenter"
}

# Create a virtual machine within the folder
resource "vsphere_virtual_machine" "web" {
  datacenter = "Datacenter"
  name   = "terraform-web"
  folder = "${vsphere_folder.test_folder.path}"
  vcpu   = 2
  memory = 4096

  network_interface {
    label = "VM Network"
  }

  disk {
    datastore = "datastore1"
    template = "my-template/my-template.vmdk"
  }
}

I have a variable file with the user, password, and VMware server in it.

When I run terraform plan, it executes cleanly.

When I run terraform apply, I get:

* vsphere_virtual_machine.web: vm 'my-template/my-template.vmdk' not found  

I've tried leaving the my-template.vmdk off (so pointing at the template directory), and I've tried pointing it at the .vmx file.

What should I be putting for the disk location? Does anyone have a working example please?
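A hedged hunch: the provider of that era expected template to name the template as it appears in the vSphere inventory, not a datastore path to a .vmdk. Listing the inventory can confirm the exact name to use (a sketch with govc, assuming it is installed and pointed at the vCenter):

# List registered VMs/templates under the datacenter's vm folder and
# use the matching inventory name in `template = ...`.
govc ls /Datacenter/vm | grep -i template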

Postgres stopping on Windows

Posted: 09 Apr 2021 04:06 PM PDT

I have Postgres 9.2 running on Windows 7 as a service. Every day the database seems to stop; sometimes it starts running again on its own and sometimes I need to restart the service. There is not much log information. I don't have an antivirus or firewall installed on this computer. I tried to increase the log verbosity, but it didn't help much.

I could not find the same problem on the internet or here on Server Fault.

Here is some log information:

2016-09-08 06:25:27 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 06:25:27 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 06:25:27 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 06:25:27 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 06:55:39 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 06:55:39 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 06:55:39 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 06:55:39 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 07:15:25 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 07:15:25 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 07:15:25 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 07:15:25 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 10:59:20 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 10:59:20 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 10:59:20 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 10:59:20 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 11:14:55 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 11:14:55 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 11:14:55 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 11:14:55 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 11:16:08 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 11:16:08 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831
2016-09-08 11:16:08 BRT [unknown] LOG:  XX000: could not receive data from client: unrecognized winsock error 10061
2016-09-08 11:16:08 BRT [unknown] LOCATION:  pq_recvbuf, src\backend\libpq\pqcomm.c:831

If I leave pgAdmin III open with the server status window, I get these messages too.

Any tips?

Thank you

Windows Server 2012 R2 sometimes fails to boot (or shut down?) - stuck at spinner

Posted: 09 Apr 2021 08:07 PM PDT

Our Windows Server 2012 R2 VMware virtual machines have scheduled tasks defined to reboot them weekly using shutdown.exe /r, and some of these servers fail to restart some of the time. When I connect to a server via VMRC I see a screen like this:

boot (or shutdown?) screen

I have to power-cycle the server to get it to work again. The System event log doesn't contain any errors. The last messages before shutdown are:

The IKE and AuthIP IPsec Keying Modules service entered the stopped state.

The kernel power manager has initiated a shutdown transition.

After power-cycling there is a message like this:

The last shutdown's success status was false. The last boot's success status was false.

I tried enabling boot logging, but there are no logs at all for the failed boot in %SystemRoot%\ntbtlog.txt. So either the problem occurs before the first log entry is written or it's actually failing to shut down.

If I manually reboot the server, including by running shutdown /r from a command prompt, it works. What else can I do to troubleshoot this?

Nginx request_time slower with HTTP/2

Posted: 09 Apr 2021 04:06 PM PDT

We are running nginx/1.9.10 as a frontend server with multiple application servers as upstreams. We use plain HTTP, mostly HTTPS, and switched to HTTP/2 in the last week.

We are logging like this:

 log_format custom '$host $server_port $request_time '
                   '$upstream_response_time $remote_addr '
                   '"$http2" $upstream_addr $time_iso8601 '
                   '"$request" $status $body_bytes_sent '
                   '"$http_referer" "$http_user_agent"';

We suddenly see a large difference between $request_time and $upstream_response_time. A difference here is quite natural, as $request_time depends on the user's network while $upstream_response_time does not.

So usually you should not care too much about $request_time as long as $upstream_response_time is stable.

But I still wanted to check what is happening, because it started getting worse with HTTP/2. So I compared average response times for HTTP/1.1 and HTTP/2.0.

First I checked all HTTP/1.1 requests and calculated the average response time and average upstream time:

grep ' 443 ' access.log|grep 'HTTP/1.1'|\
 cut -d ' ' -f 3,4 | awk '{r+=$1; u+=$2} END {print r/NR; print u/NR}'

0.0139158  # average response time for https/1.1
0.00691421 # average upstream time for https/1.1

Now I did the same with https/2.0:

grep ' 443 ' access.log|grep 'HTTP/2.0'| \
cut -d ' ' -f 3,4 | awk '{r+=$1; u+=$2} END {print r/NR; print u/NR}'

0.0828755  # average response time for https/2.0
0.00606643 # average upstream time for https/2.0

As you can see, the upstream time is nearly identical, but the request time is slower for HTTP/2 by a factor of 7! Wow! Isn't HTTP/2 expected to be faster?

Now I checked all requests which have a huge difference between these two values, and nearly all of the top 500 had a status code of 302:

grep ' 443 ' access.log | grep 'HTTP/1.1' | \
awk '{ if ( $3 != $4 && $4 != "-" ) { \
  gsub(/\./,"",$3);gsub(/\./,"",$4); \
  print $3-$4,$4,$6,$9,$11,$12 }}' | \
sort -n | tail -n 10000 | grep 'POST HTTP/1.1" 302' | wc -l

9008   # of 10000 (or 90%) requests ordered by difference between
       # response and request time have status code 302

So 90% of all requests with the highest difference between response and upstream time have status code 302. This is strange.

With HTTP/2 it is even worse:

grep ' 443 ' access.log | grep 'HTTP/2.0' | \
awk '{ if ( $3 != $4 && $4 != "-" ) { \
  gsub(/\./,"",$3);gsub(/\./,"",$4); \
  print $3-$4,$4,$6,$9,$11,$12 }}' | \
sort -n | tail -n 10000 | grep 'POST HTTP/2.0" 302' | wc -l

9790  # of 10000 (or 98%) requests ordered by difference between
      # response and request time have status code 302

So here, 98% of these requests have a 302 status.

Why does HTTP/2 appear to be slower than HTTP/1.1? And why are so many 302 status codes involved in requests where upstream and response time differ so extremely (in both HTTP/1.1 and HTTP/2.0)?

Apache: A lot of connections gracefully finishing?

Posted: 09 Apr 2021 02:07 PM PDT

At the moment I'm noticing that a lot of connections are "gracefully finishing" when I visit my Apache status page.

What does that mean, and why are they gracefully finishing? Some of them stay in this state for 5-10 minutes.

The server uses Apache 2.4 with the event MPM and PHP-FPM as the PHP handler.

Gracefully finishing

Can Samba4 create files with a GID other than AD's "Primary Group"?

Posted: 09 Apr 2021 05:07 PM PDT

I have a Windows 2012 R2 AD. All users are members of the group "domain users", and each user is also a member of another group, depending on department. All users have "domain users" as their "primary group" in AD. So it looks something like this:

uid=40001(user1) gid=123456(domain users) groups=123456(domain users),50001(dep1),301(BUILTIN\users)
uid=40002(user2) gid=123456(domain users) groups=123456(domain users),50002(dep2),301(BUILTIN\users)
uid=40003(user3) gid=123456(domain users) groups=123456(domain users),50002(dep2),301(BUILTIN\users)

Now, on the Samba share, their home directories shall be owned by user:dep. I create the directories, chown them to the appropriate group, and set the setgid bit:

drwxrws---+ 4 user1   dep2 41 Jan 13 17:30 user1
drwxrws---+ 3 user2   dep2 22 Jan 13 17:30 user2
drwxrws---+ 3 user3   dep2 22 Jan 13 17:30 user3

In /etc/samba/smb.conf I have configured the home directories as follows:

[homes]
        comment = Home Directories
        browseable = no
        writable = yes
        directory mask=2770
        force directory mode=2770
        create mask=2770
        force create mode=2770
        force security mode=2770
        force directory security mode=2770

But whenever I mount the share in Windows and create a file or directory inside, it is owned by the default group:

-rwxrwx---+ 1 user1   domain users  0 Jan 14 13:16 Test.txt  

How can I get samba to use the depX groups for new files and directories?

Make SSH_ORIGINAL_COMMAND available in AuthorizedKeysCommand context

Posted: 09 Apr 2021 09:05 PM PDT

Using SSH_ORIGINAL_COMMAND in authorized_keys is so helpful that I'd like to know how to access it in the AuthorizedKeysCommand context (via the environment?). Is this possible? Can anybody give me advice on this?

If possible, I'll use this SSH_ORIGINAL_COMMAND to send client-specific information to the AuthorizedKeysCommand script. Currently, the only alternative is to use the login itself (we have around 30k+ different 'hosts' that might want to connect to our servers) to identify the client, but that comes with the need for a custom NSS endpoint configuration (we use libnss-pgsql2) to support dynamic user lookup, and more work (to manage uid & co).

Using 'one' standard user file was enough (and a lot simpler), as those hosts don't need to do anything but set up a reverse port-forwarding rule, and they are bound to a very limited shell.

My (now useless) /home/host_controler/.ssh/authorized_keys file (built on a cron run) was like:

command="limited_shell.sh --host_id=XXX1 $SSH_ORIGINAL_COMMAND" ssh-rsa <pubkey of host 1>
command="limited_shell.sh --host_id=XXX2 $SSH_ORIGINAL_COMMAND" ssh-rsa <pubkey of host 2>
command="limited_shell.sh --host_id=XXX3 $SSH_ORIGINAL_COMMAND" ssh-rsa <pubkey of host 3>
...

Thank you very much for your help !
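For what it's worth, a hedged sketch of getting the same effect without SSH_ORIGINAL_COMMAND ever reaching AuthorizedKeysCommand (sshd only sets that variable once the session starts, after key lookup): have the command emit a forced command with the host_id baked in, keyed off the fingerprint token. %u and %f are real sshd_config tokens in reasonably recent OpenSSH; lookup_host_id and lookup_pubkey are hypothetical helpers:

#!/bin/sh
# Hypothetical AuthorizedKeysCommand, wired up in sshd_config as:
#   AuthorizedKeysCommand /usr/local/bin/keys.sh %u %f
#   AuthorizedKeysCommandUser nobody
user="$1" fingerprint="$2"
host_id=$(lookup_host_id "$fingerprint")   # hypothetical DB lookup
pubkey=$(lookup_pubkey "$fingerprint")     # hypothetical DB lookup
printf 'command="limited_shell.sh --host_id=%s $SSH_ORIGINAL_COMMAND" %s\n' \
  "$host_id" "$pubkey"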

SSL connection to Thin through vagrant

Posted: 09 Apr 2021 08:07 PM PDT

I'm running a Rails app that forces SSL and uses HTTP basic authentication inside a Vagrant VM. If I try to make a curl request to the app from my host machine I get:

curl -k --verbose https://[user]:[password]@localhost:3001/
* About to connect() to localhost port 3001 (#0)
*   Trying ::1...
* Connection refused
*   Trying fe80::1...
* Connection refused
*   Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 3001 (#0)
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to localhost:3001
* Closing connection #0
curl: (35) Unknown SSL protocol error in connection to localhost:3001

If I make the same curl request from inside the VM, it works:

curl -k --verbose https://[user]:[password]@localhost:3001/
* About to connect() to localhost port 3001 (#0)
*  Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3001 (#0)
* successfully set certificate verify locations:
*  CAfile: none
   CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
* subject: C=AU; ST=Some-State; O=Internet Widgits Pty Ltd
* start date: 2013-10-15 19:38:54 GMT
* expire date: 2023-10-13 19:38:54 GMT
* issuer: C=AU; ST=Some-State; O=Internet Widgits Pty Ltd
* SSL certificate verify result: self signed certificate (18), continuing anyway.
* Server auth using Basic with user 'admin'
> GET / HTTP/1.1
> Authorization: Basic [stuff]
> User-Agent: curl/7.29.0
> Host: localhost:3001
> Accept: */*
>
< HTTP/1.1 200 OK
< Strict-Transport-Security: max-age=31536000
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< X-UA-Compatible: chrome=1
< Content-Type: application/json; charset=utf-8
< ETag: "80961ae530b068989bbd4463b2fb6308"
< Cache-Control: max-age=0, private, must-revalidate
< Set-Cookie: request_method=GET; path=/; secure
< X-Request-Id: 9d0426ec-eca2-469a-9e36-14c79d69596a
< X-Runtime: 0.894703
< Connection: close
< Server: thin 1.5.1 codename Straight Razor
<
* Closing connection 0
* SSLv3, TLS alert, Client hello (1):
[page]

I'm starting Thin in the VM using a self-signed certificate I created and added to my Mac keychain:

bundle exec thin start --ssl --ssl-key-file /etc/ssl/server.key --ssl-cert-file /etc/ssl/server.crt -p 3001  

I see that the problem is "Unknown SSL protocol error", but nothing I've found online has been helpful. From the output you can see that they're both using SSLv3. The key isn't expired. I've got nothing; help.
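A hedged angle worth checking: from the host the handshake dies immediately, which often means the forwarded port is not actually reaching Thin's TLS listener. Confirming the forwarding and probing the socket directly can narrow it down (the Vagrantfile line is an assumption about the setup):

# One would expect a forwarding like this in the Vagrantfile:
#     config.vm.network "forwarded_port", guest: 3001, host: 3001
# Probe what actually answers TLS on the host side:
openssl s_client -connect localhost:3001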

Nginx: How do I forward an HTTP request to another port?

Posted: 09 Apr 2021 03:26 PM PDT

What I want to do is: when someone visits http://localhost/route/abc, the server should respond exactly the same as http://localhost:9000/abc.

Now I configure my Nginx server like this:

location /route {
    proxy_pass  http://127.0.0.1:9000;
}

The HTTP request is dispatched to port 9000 correctly, but the path it receives is http://localhost:9000/route/abc, not http://localhost:9000/abc.

Any suggestions?
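A minimal sketch of the usual fix: when proxy_pass carries a URI part (here the trailing slash), nginx replaces the matched location prefix with that URI instead of passing the original path through verbatim:

location /route/ {
    # the trailing slash on proxy_pass strips the /route/ prefix,
    # so /route/abc arrives upstream as /abc
    proxy_pass http://127.0.0.1:9000/;
}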

Exception Enumerating SQL Server Instances with SMO WMI ManagedComputer

Posted: 09 Apr 2021 07:03 PM PDT

I'm trying to use the SMO WMI API/objects in PowerShell 2.0 on Windows 7 with SQL Server 2008 R2 installed to get a list of SQL Server instances on the local computer using the ManagedComputer object. However, I'm getting exceptions when I try to access any data on the objects after instantiating them. I'm running PowerShell as an administrator.

$computer = New-Object Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer ($env:computername)
$computer.ServerInstances

Results in this error:

The following exception was thrown when trying to enumerate the collection: "An exception occurred in SMO while trying to manage a service.".
At line:1 char:89
+ (New-Object Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer ($env:computername)). <<<< ServerInstances
+ CategoryInfo : NotSpecified: (:) [], ExtendedTypeSystemException
+ FullyQualifiedErrorId : ExceptionInGetEnumerator

Is there some service I have to enable to get this to work? The WMI service is running. Is there some other setting I need? Why can't I enumerate SQL Server instances?

ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

Posted: 09 Apr 2021 06:07 PM PDT

I have an Ubuntu box with a ProFTPD 1.3.4a server. When I try to log in via my FTP client I cannot do anything, as it does not allow me to list directories. I have tried logging in as root and as a regular user, and tried accessing different paths within the FTP server.

The error I get in my FTP Client is:

Status:     Retrieving directory listing...
Command:    CDUP
Response:   250 CDUP command successful
Command:    PWD
Response:   257 "/var" is the current directory
Command:    PASV
Response:   227 Entering Passive Mode (172,16,4,22,237,205).
Command:    MLSD
Response:   550 Access is denied.
Error:      Failed to retrieve directory listing

Any idea? Here is my proftpd config:

#
# /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
# To really apply changes, reload proftpd after modifications, if
# it runs in daemon mode. It is not required in inetd/xinetd mode.
#

# Includes DSO modules
Include /etc/proftpd/modules.conf

# Set off to disable IPv6 support which is annoying on IPv4 only boxes.
UseIPv6 off
# If set on you can experience a longer connection delay in many cases.
IdentLookups off

ServerName "Drupal Intranet"
ServerType standalone
ServerIdent on "FTP Server ready"
DeferWelcome on

# Set the user and group that the server runs as
User nobody
Group nogroup

MultilineRFC2228 on
DefaultServer on
ShowSymlinks on

TimeoutNoTransfer 600
TimeoutStalled 600
TimeoutIdle 1200

DisplayLogin welcome.msg
DisplayChdir .message true
ListOptions "-l"

DenyFilter \*.*/

# Use this to jail all users in their homes
# DefaultRoot ~

# Users require a valid shell listed in /etc/shells to login.
# Use this directive to release that constrain.
# RequireValidShell off

# Port 21 is the standard FTP port.
Port 21

# In some cases you have to specify passive ports range to by-pass
# firewall limitations. Ephemeral ports can be used for that, but
# feel free to use a more narrow range.
# PassivePorts 49152 65534

# If your host was NATted, this option is useful in order to
# allow passive tranfers to work. You have to use your public
# address and opening the passive ports used on your firewall as well.
# MasqueradeAddress 1.2.3.4

# This is useful for masquerading address with dynamic IPs:
# refresh any configured MasqueradeAddress directives every 8 hours
<IfModule mod_dynmasq.c>
# DynMasqRefresh 28800
</IfModule>

# To prevent DoS attacks, set the maximum number of child processes
# to 30.  If you need to allow more than 30 concurrent connections
# at once, simply increase this value.  Note that this ONLY works
# in standalone mode, in inetd mode you should use an inetd server
# that allows you to limit maximum number of processes per service
# (such as xinetd)
MaxInstances 30

# Set the user and group that the server normally runs at.

# Umask 022 is a good standard umask to prevent new files and dirs
# (second parm) from being group and world writable.
Umask 022 022
# Normally, we want files to be overwriteable.
AllowOverwrite on

# Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
# PersistentPasswd off

# This is required to use both PAM-based authentication and local passwords
AuthPAMConfig proftpd
AuthOrder mod_auth_pam.c* mod_auth_unix.c

# Be warned: use of this directive impacts CPU average load!
# Uncomment this if you like to see progress and transfer rate with ftpwho
# in downloads. That is not needed for uploads rates.
#
UseSendFile off

TransferLog /var/log/proftpd/xferlog
SystemLog /var/log/proftpd/proftpd.log

# Logging onto /var/log/lastlog is enabled but set to off by default
#UseLastlog on

# In order to keep log file dates consistent after chroot, use timezone info
# from /etc/localtime.  If this is not set, and proftpd is configured to
# chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
# savings timezone regardless of whether DST is in effect.
#SetEnv TZ :/etc/localtime

<IfModule mod_quotatab.c>
QuotaEngine off
</IfModule>

<IfModule mod_ratio.c>
Ratios off
</IfModule>

# Delay engine reduces impact of the so-called Timing Attack described in
# http://www.securityfocus.com/bid/11430/discuss
# It is on by default.
<IfModule mod_delay.c>
DelayEngine on
</IfModule>

<IfModule mod_ctrls.c>
ControlsEngine        off
ControlsMaxClients    2
ControlsLog           /var/log/proftpd/controls.log
ControlsInterval      5
ControlsSocket        /var/run/proftpd/proftpd.sock
</IfModule>

<IfModule mod_ctrls_admin.c>
AdminControlsEngine off
</IfModule>

#
# Alternative authentication frameworks
#
#Include /etc/proftpd/ldap.conf
#Include /etc/proftpd/sql.conf

#
# This is used for FTPS connections
#
#Include /etc/proftpd/tls.conf

#
# Useful to keep VirtualHost/VirtualRoot directives separated
#
#Include /etc/proftpd/virtuals.con

# A basic anonymous configuration, no upload directories.

# <Anonymous ~ftp>
#   User                ftp
#   Group               nogroup
#   # We want clients to be able to login with "anonymous" as well as "ftp"
#   UserAlias           anonymous ftp
#   # Cosmetic changes, all files belongs to ftp user
#   DirFakeUser on ftp
#   DirFakeGroup on ftp
#
#   RequireValidShell       off
#
#   # Limit the maximum number of anonymous logins
#   MaxClients          10
#
#   # We want 'welcome.msg' displayed at login, and '.message' displayed
#   # in each newly chdired directory.
#   DisplayLogin            welcome.msg
#   DisplayChdir        .message
#
#   # Limit WRITE everywhere in the anonymous chroot
#   <Directory *>
#     <Limit WRITE>
#       DenyAll
#     </Limit>
#   </Directory>
#
#   # Uncomment this if you're brave.
#   # <Directory incoming>
#   #   # Umask 022 is a good standard umask to prevent new files and dirs
#   #   # (second parm) from being group and world writable.
#   #   Umask               022  022
#   #            <Limit READ WRITE>
#   #            DenyAll
#   #            </Limit>
#   #            <Limit STOR>
#   #            AllowAll
#   #            </Limit>
#   # </Directory>
#
# </Anonymous>

# Include other custom configuration files
Include /etc/proftpd/conf.d/
UseReverseDNS off
<Global>
RootLogin on
UseFtpUsers on
ServerIdent on
DefaultChdir /var/www
DeleteAbortedStores on
LoginPasswordPrompt on
AccessGrantMsg "You have been authenticated successfully."
</Global>

Any idea what could be wrong?

tproxy squid bridge very slow when cache is full

Posted: 09 Apr 2021 02:07 PM PDT

I have installed a tproxy bridge proxy on a fast server with 8 GB of RAM. The traffic is around 60 Mb/s.

When I start the proxy for the first time (with the cache empty) it works very well, but when the cache becomes full (a few hours later) the bridge gets very slow, the traffic drops below 10 Mb/s, and the proxy server becomes unusable.

Any hints of what may be happening?

I'm using:

  • linux-2.6.30.10
  • iptables-1.4.3.2
  • squid-3.1.1

compiled with these options:

./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --localstatedir=/var/lib --sysconfdir=/etc/squid --libexecdir=/usr/libexec/squid --localstatedir=/var --datadir=/usr/share/squid --enable-removal-policies=lru,heap --enable-icmp --disable-ident-lookups --enable-cache-digests --enable-delay-pools --enable-arp-acl --with-pthreads --with-large-files --enable-htcp --enable-carp --enable-follow-x-forwarded-for --enable-snmp --enable-ssl --enable-async-io=32 --enable-linux-netfilter --enable-epoll --disable-poll --with-maxfd=16384 --enable-err-languages=Spanish --enable-default-err-language=Spanish

My squid.conf:

cache_mem 100 MB
memory_pools off

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localhost src ::1/128
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl to_localhost dst ::1/128

acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl net-g1 src xxx.xxx.xxx.xxx/24

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow net-g1    # from where browsing should be allowed
http_access allow localnet
http_access allow localhost

http_access deny all

http_port 3128
http_port 3129 tproxy

hierarchy_stoplist cgi-bin ?

cache_dir ufs /var/spool/squid 8000 16 256

access_log none

cache_log /var/log/squid/cache.log

coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .

I have this issue when the cache is full, but I do not really know if that is the cause.

Thanks in advance, and sorry for my English.

Are networks now faster than disks?

Posted: 09 Apr 2021 01:51 PM PDT

This is a software design question.

I used to work from the following rule of thumb for speed:

cache memory > memory > disk > network  

With each step being 5-10 times slower than the one before it (e.g. cache memory is 10 times faster than main memory).

Now, it seems that gigabit Ethernet has lower latency than local disk. So maybe operations that read out of a large remote in-memory DB are faster than local disk reads. This feels like heresy to an old-timer like me. (I just spent some time building a local cache on disk to avoid having to do network round trips, hence my question.)

Does anybody have any experience / numbers / advice in this area?

And yes I know that the only real way to find out is to build and measure, but I was wondering about the general rule.

edit:

This is the interesting data from the top answer:

  • Round trip within same datacenter 500,000 ns

  • Disk seek 10,000,000 ns

This is a shock to me; my mental model was that a network round trip is inherently slow. And it's not: it's 20x faster than a disk 'round trip'.

Jeff Atwood posted this very good blog post on the topic: http://blog.codinghorror.com/the-infinite-space-between-words/
