Friday, April 1, 2022

Recent Questions - Server Fault



Errors mounting Windows share (cifs) with pam_mount

Posted: 31 Mar 2022 11:02 PM PDT

I have an Ubuntu 21.10 PC joined to a domain with a Samba AD domain controller. Everything is working absolutely fine - Kerberos is working (I can get tickets with kinit), winbind is working (I can get info about users and groups), and I am able to log in to the system with domain credentials.

Mounting shares manually also works, with both Kerberos and ntlmssp authentication:

sudo mount -t cifs //server/path /mount/point -o username=USER,domain=DOMAIN,sec=ntlmssp  
sudo mount -t cifs //server/path /mount/point -o username=USER,domain=DOMAIN,sec=krb5  

Setting username like username=USER@DOMAIN works too.

The problem is I can't get pam_mount to work when a user logs in via gnome!

Using krb5 in pam_mount.conf.xml like this

<volume
    fstype="cifs"
    server="server"
    path="path"
    mountpoint="mount/point"
    options="sec=krb5"
/>

Gives an error in auth.log:

(mount.c:72): mount error(126): Required key not available  

Using ntlmssp in pam_mount.conf.xml like this

<volume
    fstype="cifs"
    server="server"
    path="path"
    mountpoint="mount/point"
    options="sec=ntlmssp"
/>

Gives a different error in auth.log:

(pam_mount.c:173): conv->conv(...): Conversation error   

After enabling debugging in pam_mount, I can also see in auth.log the exact mount command it executes, and it is identical to the commands above, which work when I run them manually.

I've tried the following:

  • played with mount options in different combinations: vers=3.0, _netdev, user, sec
  • forced Kerberos to store tickets as files in /tmp/krb5cc_%u via the pam_winbind config (see the sketch below)
  • read a ton of forums
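For reference, the pam_winbind piece of that attempt looks roughly like this (a minimal sketch, assuming the usual /etc/security/pam_winbind.conf location; adjust to your setup):

# /etc/security/pam_winbind.conf (sketch)
[global]
# obtain a Kerberos TGT at login and keep it in a FILE ccache,
# so that mount.cifs with sec=krb5 can find /tmp/krb5cc_<uid>
krb5_auth = yes
krb5_ccache_type = FILE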

Any ideas?

Reduce ansible task boilerplate with some kind of template?

Posted: 31 Mar 2022 11:10 PM PDT

I'm looking for ways to reduce the amount of boilerplate config I have to put into some of my Ansible tasks.

For instance I have many tasks using the docker_container module, and each one has the same ~10 identical options set. I'd like to have these standard options defined somewhere centrally, and each task simply defines only the unique options it needs.

(The problem researching this is that 99.9% of search results on this subject are about the copy/template module itself).

I guess I could write a custom module in python which extends the docker_container module, but that seems really overkill.
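A lighter-weight mechanism that may fit is Ansible's module_defaults, which lets a play define shared options once; a minimal sketch, assuming a reasonably recent Ansible and the community.docker collection (the names and options here are made up for illustration):

- hosts: docker_hosts
  module_defaults:
    community.docker.docker_container:
      state: started
      restart_policy: unless-stopped
      network_mode: bridge
  tasks:
    # each task now only lists its unique options
    - name: Run the app container
      community.docker.docker_container:
        name: app
        image: example/app:latest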

Any ideas on reducing boilerplate config?

Docker isn't forwarding port to redis

Posted: 31 Mar 2022 11:16 PM PDT

I'm trying to run redis in a Docker container on Amazon Linux, and I can't for the life of me get it to forward the port. It starts as it should and appears to be working, but there is no process listening on port 6379 on the host box, as one would expect there to be. What should I do?

Here is uname -a:

Linux <host name omitted>.internal 5.4.176-91.338.amzn2.x86_64 #1 SMP <start time omitted> x86_64 x86_64 x86_64 GNU/Linux  

Output of docker run -p6379:6379 redis:

1:C 01 Apr 2022 06:09:18.018 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 01 Apr 2022 06:09:18.018 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 01 Apr 2022 06:09:18.018 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 01 Apr 2022 06:09:18.019 * monotonic clock: POSIX clock_gettime
1:M 01 Apr 2022 06:09:18.020 * Running mode=standalone, port=6379.
1:M 01 Apr 2022 06:09:18.020 # Server initialized
1:M 01 Apr 2022 06:09:18.020 * Ready to accept connections

Logs from sudo dockerd -D daemon:

DEBU[2022-04-01T06:09:17.711242117Z] Calling HEAD /_ping  DEBU[2022-04-01T06:09:17.712201263Z] Calling POST /v1.40/containers/create  DEBU[2022-04-01T06:09:17.712439060Z] form data: {"AttachStderr":true,"AttachStdin":false,"AttachStdout":true,"Cmd":null,"Domainname":"","Entrypoint":null,"Env":[],"ExposedPorts":{"6379/tcp":{}},"HostConfig":{"AutoRemove":false,"Binds":null,"BlkioDeviceReadBps":null,"BlkioDeviceReadIOps":null,"BlkioDeviceWriteBps":null,"BlkioDeviceWriteIOps":null,"BlkioWeight":0,"BlkioWeightDevice":[],"CapAdd":null,"CapDrop":null,"Capabilities":null,"Cgroup":"","CgroupParent":"","ConsoleSize":[0,0],"ContainerIDFile":"","CpuCount":0,"CpuPercent":0,"CpuPeriod":0,"CpuQuota":0,"CpuRealtimePeriod":0,"CpuRealtimeRuntime":0,"CpuShares":0,"CpusetCpus":"","CpusetMems":"","DeviceCgroupRules":null,"DeviceRequests":null,"Devices":[],"Dns":[],"DnsOptions":[],"DnsSearch":[],"ExtraHosts":null,"GroupAdd":null,"IOMaximumBandwidth":0,"IOMaximumIOps":0,"IpcMode":"","Isolation":"","KernelMemory":0,"KernelMemoryTCP":0,"Links":null,"LogConfig":{"Config":{},"Type":""},"MaskedPaths":null,"Memory":0,"MemoryReservation":0,"MemorySwap":0,"MemorySwappiness":-1,"NanoCpus":0,"NetworkMode":"default","OomKillDisable":false,"OomScoreAdj":0,"PidMode":"","PidsLimit":0,"PortBindings":{"6379/tcp":[{"HostIp":"","HostPort":"6379"}]},"Privileged":false,"PublishAllPorts":false,"ReadonlyPaths":null,"ReadonlyRootfs":false,"RestartPolicy":{"MaximumRetryCount":0,"Name":"no"},"SecurityOpt":null,"ShmSize":0,"UTSMode":"","Ulimits":null,"UsernsMode":"","VolumeDriver":"","VolumesFrom":null},"Hostname":"","Image":"redis","Labels":{},"NetworkingConfig":{"EndpointsConfig":{}},"OnBuild":null,"OpenStdin":false,"StdinOnce":false,"Tty":false,"User":"","Volumes":{},"WorkingDir":""}  DEBU[2022-04-01T06:09:17.726732730Z] container mounted via layerStore: &{/var/lib/docker/overlay2/e99c57f3e0a991850fabb2b3dae7a66963bee38235d99b4792e24dc018dd0b0a/merged 0x5590d6c33c40 0x5590d6c33c40}  DEBU[2022-04-01T06:09:17.726817190Z] Probing all drivers for volume with name: 6e4cc5e5d43a1e7d3ca4144de24bd9c00733a78083fbba4fe2245a75b3d56440  DEBU[2022-04-01T06:09:17.727783987Z] Registering new volume reference: driver "local", name "6e4cc5e5d43a1e7d3ca4144de24bd9c00733a78083fbba4fe2245a75b3d56440"  DEBU[2022-04-01T06:09:17.729422536Z] copying image data from b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d:/data, to 6e4cc5e5d43a1e7d3ca4144de24bd9c00733a78083fbba4fe2245a75b3d56440  DEBU[2022-04-01T06:09:17.738464370Z] Calling POST /v1.40/containers/b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d/attach?stderr=1&stdout=1&stream=1  DEBU[2022-04-01T06:09:17.738575127Z] attach: stdout: begin  DEBU[2022-04-01T06:09:17.738611364Z] attach: stderr: begin  DEBU[2022-04-01T06:09:17.738904225Z] Calling POST /v1.40/containers/b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d/wait?condition=next-exit  DEBU[2022-04-01T06:09:17.739341603Z] Calling POST /v1.40/containers/b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d/start  DEBU[2022-04-01T06:09:17.740139082Z] container mounted via layerStore: &{/var/lib/docker/overlay2/e99c57f3e0a991850fabb2b3dae7a66963bee38235d99b4792e24dc018dd0b0a/merged 0x5590d6c33c40 0x5590d6c33c40}  DEBU[2022-04-01T06:09:17.744258295Z] bundle dir created                            bundle=/var/run/docker/containerd/b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d module=libcontainerd namespace=moby 
root=/var/lib/docker/overlay2/e99c57f3e0a991850fabb2b3dae7a66963bee38235d99b4792e24dc018dd0b0a/merged  DEBU[2022-04-01T06:09:17.988922333Z] event                                         module=libcontainerd namespace=moby topic=/tasks/create  DEBU[2022-04-01T06:09:18.001833824Z] event                                         module=libcontainerd namespace=moby topic=/tasks/start  DEBU[2022-04-01T06:09:47.990640595Z] Calling HEAD /_ping  DEBU[2022-04-01T06:09:47.991013563Z] Calling GET /v1.40/containers/json?all=1  

Even after starting, output of sudo netstat -tulpn looks like:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      4541/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      4323/master
tcp        0      0 127.0.0.1:39297         0.0.0.0:*               LISTEN      52578/docker-proxy
tcp        0      0 127.0.0.1:41327         0.0.0.0:*               LISTEN      52531/docker-proxy
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      3532/rpcbind
tcp        0      0 127.0.0.1:39635         0.0.0.0:*               LISTEN      52568/docker-proxy
tcp        0      0 127.0.0.1:40853         0.0.0.0:*               LISTEN      4158/containerd
tcp6       0      0 :::22                   :::*                    LISTEN      4541/sshd
tcp6       0      0 :::50051                :::*                    LISTEN      52556/docker-proxy
tcp6       0      0 :::111                  :::*                    LISTEN      3532/rpcbind
tcp6       0      0 :::30001                :::*                    LISTEN      52543/docker-proxy
udp        0      0 0.0.0.0:68              0.0.0.0:*                           4047/dhclient
udp        0      0 0.0.0.0:111             0.0.0.0:*                           3532/rpcbind
udp        0      0 127.0.0.1:323           0.0.0.0:*                           3656/chronyd
udp        0      0 0.0.0.0:677             0.0.0.0:*                           3532/rpcbind
udp6       0      0 :::111                  :::*                                3532/rpcbind
udp6       0      0 ::1:323                 :::*                                3656/chronyd
udp6       0      0 fe80::10cc:ecff:fe9:546 :::*                                4084/dhclient
udp6       0      0 :::677                  :::*                                3532/rpcbind

Result of docker image inspect redis:

[      {          "Id": "sha256:f1b6973564e91aecb808142499829a15798fdc783a30de902bb0c4133fee19ad",          "RepoTags": [              "redis:latest"          ],          "RepoDigests": [              "redis@sha256:0d9c9aed1eb385336db0bc9b976b6b49774aee3d2b9c2788a0d0d9e239986cb3"          ],          "Parent": "",          "Comment": "",          "Created": "2022-01-26T22:42:40.969131359Z",          "Container": "2552e57869499f961c051f933f396e9a108a328aa50f0527e7b709c8453e2e5d",          "ContainerConfig": {              "Hostname": "2552e5786949",              "Domainname": "",              "User": "",              "AttachStdin": false,              "AttachStdout": false,              "AttachStderr": false,              "ExposedPorts": {                  "6379/tcp": {}              },              "Tty": false,              "OpenStdin": false,              "StdinOnce": false,              "Env": [                  "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",                  "GOSU_VERSION=1.12",                  "REDIS_VERSION=6.2.6",                  "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-6.2.6.tar.gz",                  "REDIS_DOWNLOAD_SHA=5b2b8b7a50111ef395bf1c1d5be11e6e167ac018125055daa8b5c2317ae131ab"              ],              "Cmd": [                  "/bin/sh",                  "-c",                  "#(nop) ",                  "CMD [\"redis-server\"]"              ],              "Image": "sha256:53fbfdd5f8b83eec2c846e7c3f88a4b796ec25ea6c5d6732c3bafaa7e2e8e14a",              "Volumes": {                  "/data": {}              },              "WorkingDir": "/data",              "Entrypoint": [                  "docker-entrypoint.sh"              ],              "OnBuild": null,              "Labels": {}          },          "DockerVersion": "20.10.7",          "Author": "",          "Config": {              "Hostname": "",              "Domainname": "",              "User": "",              "AttachStdin": false,              "AttachStdout": false,              "AttachStderr": false,              "ExposedPorts": {                  "6379/tcp": {}              },              "Tty": false,              "OpenStdin": false,              "StdinOnce": false,              "Env": [                  "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",                  "GOSU_VERSION=1.12",                  "REDIS_VERSION=6.2.6",                  "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-6.2.6.tar.gz",                  "REDIS_DOWNLOAD_SHA=5b2b8b7a50111ef395bf1c1d5be11e6e167ac018125055daa8b5c2317ae131ab"              ],              "Cmd": [                  "redis-server"              ],              "Image": "sha256:53fbfdd5f8b83eec2c846e7c3f88a4b796ec25ea6c5d6732c3bafaa7e2e8e14a",              "Volumes": {                  "/data": {}              },              "WorkingDir": "/data",              "Entrypoint": [                  "docker-entrypoint.sh"              ],              "OnBuild": null,              "Labels": null          },          "Architecture": "amd64",          "Os": "linux",          "Size": 112712915,          "VirtualSize": 112712915,          "GraphDriver": {              "Data": {                  "LowerDir": 
"/var/lib/docker/overlay2/a12ee374312449d74fca6ef38a854445bf841c53ef4947ebff3cb75361072d68/diff:/var/lib/docker/overlay2/547ac2f21a71cf3db5354bc1b09edf86115f6432e098de29ee4adf223d10911c/diff:/var/lib/docker/overlay2/5b76e0751542d640e474043902100b31b6b5bd681027999cadc72b63530eebc6/diff:/var/lib/docker/overlay2/817c7b0fc802642a6ca3bfce75ebbfa7967e72d40701d7cf97d284adabd88ffd/diff:/var/lib/docker/overlay2/8f75bc8a98ce6ef6ed4c3aa49cf02085a6ed54136daf2ff0bbb4b6305b1c236e/diff",                  "MergedDir": "/var/lib/docker/overlay2/eb8ae711a8095ef1c6947d7cdfb5bac9212bebb27a49cac37927eb9d50e6c6e6/merged",                  "UpperDir": "/var/lib/docker/overlay2/eb8ae711a8095ef1c6947d7cdfb5bac9212bebb27a49cac37927eb9d50e6c6e6/diff",                  "WorkDir": "/var/lib/docker/overlay2/eb8ae711a8095ef1c6947d7cdfb5bac9212bebb27a49cac37927eb9d50e6c6e6/work"              },              "Name": "overlay2"          },          "RootFS": {              "Type": "layers",              "Layers": [                  "sha256:7d0ebbe3f5d26c1b5ec4d5dbb6fe3205d7061f9735080b0162d550530328abd6",                  "sha256:92b6c42121d80f330a80c20afa928e19c31ab3a5fe7cf9c91517fa8cc468b33f",                  "sha256:65845b69eb5c3291dd610ddf2f61f524ab206f9754900d9f3512fcbc2d38604f",                  "sha256:7048818d16571a765e2b0cf82c20d627abebccec73ac3d7b7973501000e6e05d",                  "sha256:c61d5cbf862134aad34822e96d9efc009cca19ad604419cb3f8cf8857eb18372",                  "sha256:ff503dae4eb68eb7a71095e5b1b1b123f42d37e923222038b64fba5a80b13307"              ]          },          "Metadata": {              "LastTagTime": "0001-01-01T00:00:00Z"          }      }  ]  

Output of docker ps -a:

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                            PORTS               NAMES  b169b17dd219        redis                  "docker-entrypoint.s…"   31 seconds ago      Up 29 seconds                                         practical_driscoll  0564d11aeae4        redis                  "docker-entrypoint.s…"   33 minutes ago      Exited (0) About a minute ago                         recursing_clarke  900b852ec798        redis                  "docker-entrypoint.s…"   34 minutes ago      Exited (0) 33 minutes ago                             charming_wright  44d5f3956d9e        redis                  "docker-entrypoint.s…"   39 minutes ago      Exited (0) 34 minutes ago                             tender_ellis  a75ff92a9e6a        redis                  "docker-entrypoint.s…"   39 minutes ago      Exited (0) 39 minutes ago                             amazing_allen  0d18733b003f        redis                  "docker-entrypoint.s…"   41 minutes ago      Exited (0) 41 minutes ago                             clever_shaw  f97c93ca0d51        redis                  "docker-entrypoint.s…"   42 minutes ago      Exited (0) 41 minutes ago                             silly_tereshkova  b10d05112f4e        redis                  "docker-entrypoint.s…"   43 minutes ago      Exited (0) 42 minutes ago                             elegant_cartwright  4200e4305f99        redis                  "docker-entrypoint.s…"   46 minutes ago      Exited (0) About a minute ago                         lucid_meitner  6701dc80f692        redis                  "docker-entrypoint.s…"   48 minutes ago      Exited (0) 46 minutes ago                             hungry_leakey  dab5c8ac4dba        redis                  "docker-entrypoint.s…"   48 minutes ago      Exited (0) 48 minutes ago                             optimistic_mcnulty  01cbbe59cb4a        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) 50 minutes ago                             agitated_wilson  f8061cc7f8d3        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) About an hour ago                          happy_grothendieck  dadda17513ec        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) About an hour ago                          focused_ardinghelli  998a9daf5262        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) About an hour ago                          romantic_banach  05739cc6d02c        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) About an hour ago                          dazzling_dhawan  68985a25899a        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) About an hour ago                          clever_euler  be975ad41a79        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) About an hour ago                          goofy_wescoff  9fd9712c5fc8        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) About an hour ago                          keen_curie  37d7c4cf2926        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) About an hour ago                          cranky_fermat  ccf184edccf4        redis                  "docker-entrypoint.s…"   About an hour ago   Exited (0) About an hour ago                          friendly_tereshkova  b884651bcf3b        redis                  "docker-entrypoint.s…"   
35 hours ago        Exited (0) About an hour ago                          brave_swanson  af4c2654b9bf        kindest/node:v1.21.1   "/usr/local/bin/entr…"   5 weeks ago         Exited (137) 2 minutes ago                            quickstart-armada-server-worker  a6fc51a4c718        kindest/node:v1.21.1   "/usr/local/bin/entr…"   5 weeks ago         Exited (137) 2 minutes ago                            quickstart-armada-server-control-plane  d2c3e07142fd        redis                  "docker-entrypoint.s…"   5 weeks ago         Exited (255) 4 weeks ago                              angry_napier  dc66ce3ed03b        kindest/node:v1.21.1   "/usr/local/bin/entr…"   5 weeks ago         Exited (137) About a minute ago                       quickstart-armada-executor-1-control-plane  b1898eb8ebc3        kindest/node:v1.21.1   "/usr/local/bin/entr…"   5 weeks ago         Exited (137) 2 minutes ago                            quickstart-armada-executor-1-worker  c49aad627c97        kindest/node:v1.21.1   "/usr/local/bin/entr…"   5 weeks ago         Exited (137) 2 minutes ago                            quickstart-armada-executor-0-control-plane  273ee5313543        kindest/node:v1.21.1   "/usr/local/bin/entr…"   5 weeks ago         Exited (137) 2 minutes ago                            quickstart-armada-executor-0-worker  

Output of docker inspect practical_driscoll:

[      {          "Id": "b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d",          "Created": "2022-04-01T06:09:17.714237408Z",          "Path": "docker-entrypoint.sh",          "Args": [              "redis-server"          ],          "State": {              "Status": "running",              "Running": true,              "Paused": false,              "Restarting": false,              "OOMKilled": false,              "Dead": false,              "Pid": 295188,              "ExitCode": 0,              "Error": "",              "StartedAt": "2022-04-01T06:09:18.001996349Z",              "FinishedAt": "0001-01-01T00:00:00Z"          },          "Image": "sha256:f1b6973564e91aecb808142499829a15798fdc783a30de902bb0c4133fee19ad",          "ResolvConfPath": "/var/lib/docker/containers/b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d/resolv.conf",          "HostnamePath": "/var/lib/docker/containers/b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d/hostname",          "HostsPath": "/var/lib/docker/containers/b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d/hosts",          "LogPath": "/var/lib/docker/containers/b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d/b169b17dd219e1833add0244a3780900810a2c43c4f2be63e68d04c3e6163f4d-json.log",          "Name": "/practical_driscoll",          "RestartCount": 0,          "Driver": "overlay2",          "Platform": "linux",          "MountLabel": "",          "ProcessLabel": "",          "AppArmorProfile": "",          "ExecIDs": null,          "HostConfig": {              "Binds": null,              "ContainerIDFile": "",              "LogConfig": {                  "Type": "json-file",                  "Config": {                      "max-file": "10",                      "max-size": "10m"                  }              },              "NetworkMode": "default",              "PortBindings": {                  "6379/tcp": [                      {                          "HostIp": "",                          "HostPort": "6379"                      }                  ]              },              "RestartPolicy": {                  "Name": "no",                  "MaximumRetryCount": 0              },              "AutoRemove": false,              "VolumeDriver": "",              "VolumesFrom": null,              "CapAdd": null,              "CapDrop": null,              "Capabilities": null,              "Dns": [],              "DnsOptions": [],              "DnsSearch": [],              "ExtraHosts": null,              "GroupAdd": null,              "IpcMode": "private",              "Cgroup": "",              "Links": null,              "OomScoreAdj": 0,              "PidMode": "",              "Privileged": false,              "PublishAllPorts": false,              "ReadonlyRootfs": false,  "SecurityOpt": null,              "UTSMode": "",              "UsernsMode": "",              "ShmSize": 67108864,              "Runtime": "runc",              "ConsoleSize": [                  0,                  0              ],              "Isolation": "",              "CpuShares": 0,              "Memory": 0,              "NanoCpus": 0,              "CgroupParent": "",              "BlkioWeight": 0,              "BlkioWeightDevice": [],              "BlkioDeviceReadBps": null,              "BlkioDeviceWriteBps": null,              "BlkioDeviceReadIOps": null,              "BlkioDeviceWriteIOps": null,              "CpuPeriod": 0,              "CpuQuota": 0,              
"CpuRealtimePeriod": 0,              "CpuRealtimeRuntime": 0,              "CpusetCpus": "",              "CpusetMems": "",              "Devices": [],              "DeviceCgroupRules": null,              "DeviceRequests": null,              "KernelMemory": 0,              "KernelMemoryTCP": 0,              "MemoryReservation": 0,              "MemorySwap": 0,              "MemorySwappiness": null,              "OomKillDisable": false,              "PidsLimit": null,              "Ulimits": [                  {                      "Name": "memlock",                      "Hard": -1,                      "Soft": -1                  }              ],              "CpuCount": 0,              "CpuPercent": 0,              "IOMaximumIOps": 0,              "IOMaximumBandwidth": 0,              "MaskedPaths": [                  "/proc/asound",                  "/proc/acpi",                  "/proc/kcore",                  "/proc/keys",                  "/proc/latency_stats",                  "/proc/timer_list",                  "/proc/timer_stats",                  "/proc/sched_debug",                  "/proc/scsi",                  "/sys/firmware"              ],              "ReadonlyPaths": [                  "/proc/bus",                  "/proc/fs",                  "/proc/irq",                  "/proc/sys",                  "/proc/sysrq-trigger"              ]          },          "GraphDriver": {              "Data": {                  "LowerDir": "/var/lib/docker/overlay2/e99c57f3e0a991850fabb2b3dae7a66963bee38235d99b4792e24dc018dd0b0a-init/diff:/var/lib/docker/overlay2/eb8ae711a8095ef1c6947d7cdfb5bac9212bebb27a49cac37927eb9d50e6c6e6/diff:/var/lib/docker/overlay2/a12ee374312449d74fca6ef38a854445bf841c53ef4947ebff3cb75361072d68/diff:/var/lib/docker/overlay2/547ac2f21a71cf3db5354bc1b09edf86115f6432e098de29ee4adf223d10911c/diff:/var/lib/docker/overlay2/5b76e0751542d640e474043902100b31b6b5bd681027999cadc72b63530eebc6/diff:/var/lib/docker/overlay2/817c7b0fc802642a6ca3bfce75ebbfa7967e72d40701d7cf97d284adabd88ffd/diff:/var/lib/docker/overlay2/8f75bc8a98ce6ef6ed4c3aa49cf02085a6ed54136daf2ff0bbb4b6305b1c236e/diff",                  "MergedDir": "/var/lib/docker/overlay2/e99c57f3e0a991850fabb2b3dae7a66963bee38235d99b4792e24dc018dd0b0a/merged",                  "UpperDir": "/var/lib/docker/overlay2/e99c57f3e0a991850fabb2b3dae7a66963bee38235d99b4792e24dc018dd0b0a/diff",                  "WorkDir": "/var/lib/docker/overlay2/e99c57f3e0a991850fabb2b3dae7a66963bee38235d99b4792e24dc018dd0b0a/work"              },              "Name": "overlay2"          },          "Mounts": [              {                  "Type": "volume",                  "Name": "6e4cc5e5d43a1e7d3ca4144de24bd9c00733a78083fbba4fe2245a75b3d56440",                  "Source": "/var/lib/docker/volumes/6e4cc5e5d43a1e7d3ca4144de24bd9c00733a78083fbba4fe2245a75b3d56440/_data",                  "Destination": "/data",                  "Driver": "local",                  "Mode": "",                  "RW": true,                  "Propagation": ""              }          ],          "Config": {              "Hostname": "b169b17dd219",              "Domainname": "",              "User": "",              "AttachStdin": false,              "AttachStdout": true,              "AttachStderr": true,              "ExposedPorts": {                  "6379/tcp": {}              },              "Tty": false,              "OpenStdin": false,              "StdinOnce": false,              "Env": [                  
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",                  "GOSU_VERSION=1.12",                  "REDIS_VERSION=6.2.6",                  "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-6.2.6.tar.gz",                  "REDIS_DOWNLOAD_SHA=5b2b8b7a50111ef395bf1c1d5be11e6e167ac018125055daa8b5c2317ae131ab"              ],              "Cmd": [                  "redis-server"              ],              "Image": "redis",              "Volumes": {                  "/data": {}              },              "WorkingDir": "/data",              "Entrypoint": [                  "docker-entrypoint.sh"              ],              "NetworkDisabled": true,              "OnBuild": null,              "Labels": {}          },          "NetworkSettings": {              "Bridge": "",              "SandboxID": "da9cad9507c9374138c65b67b60d1bce45202c09006de923c655f2c26abec96b",              "HairpinMode": false,              "LinkLocalIPv6Address": "",              "LinkLocalIPv6PrefixLen": 0,              "Ports": {},              "SandboxKey": "/var/run/docker/netns/da9cad9507c9",              "SecondaryIPAddresses": null,              "SecondaryIPv6Addresses": null,              "EndpointID": "",              "Gateway": "",              "GlobalIPv6Address": "",              "GlobalIPv6PrefixLen": 0,              "IPAddress": "",              "IPPrefixLen": 0,              "IPv6Gateway": "",              "MacAddress": "",              "Networks": {                  "bridge": {                      "IPAMConfig": null,                      "Links": null,                      "Aliases": null,                      "NetworkID": "",                      "EndpointID": "",                      "Gateway": "",                      "IPAddress": "",                      "IPPrefixLen": 0,                      "IPv6Gateway": "",                      "GlobalIPv6Address": "",                      "GlobalIPv6PrefixLen": 0,                      "MacAddress": "",                      "DriverOpts": null                  }              }          }      }  ]  

As a manager of a Google Shared Drive why can't I add folders?

Posted: 31 Mar 2022 09:33 PM PDT

I have the "manager" role on a google shared drive, yet I am unable to add files or folders to it.

All options are greyed out on the (right click) context menu. If I try to drag and drop - it gives the message, "you need to be a manager on {shared drive} to move to this folder".

I already have the manager role, and can confirm this by checking "manage members".

What could it be?

On Apache how to switch off DirectorySlash only for requests to a specific subdomain?

Posted: 31 Mar 2022 09:08 PM PDT

On one subdomain site of mine served by Apache (say sub.mydomain.com), I'd like URLs without trailing slashes to point directly (without an external redirect) to the index file in the underlying folder. The subdomain requests are internally redirected to a sub-folder. All other URLs should work in the normal Apache way, with an external redirect to the slashed version.

All the directives have to go in my .htaccess file. For this to work I am planning to do the following:

  1. Switch off DirectorySlash for requests to sub.mydomain.com/...
  2. Rewrite the sub.mydomain.com/… requests to /sub/...
  3. Rewrite slashless directory URLs with /sub/... to fetch the index.html inside the underlying directory

I have a good idea how to do 2. and 3., but how can I apply DirectorySlash Off only for requests to sub.mydomain.com, and not to www.mydomain.com or other.mydomain.com?
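For illustration, DirectorySlash can also be scoped per directory rather than per host; since every sub.mydomain.com request is internally rewritten into /sub/..., a per-directory file along these lines would only affect the subdomain (hedged sketch, path assumed):

# /sub/.htaccess (sketch)
# only requests rewritten into /sub/... are served from this directory,
# so this effectively disables DirectorySlash just for the subdomain
DirectorySlash Off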

Trimming the path and redirect in Nginx

Posted: 31 Mar 2022 08:53 PM PDT

I have a WordPress site at www.mydomain.com/A/B. The Nginx config is:

server {
    listen 80 default;

    root /var/www/html;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location /A/B {
        try_files $uri $uri/ /A/B/index.php?$args;
    }
...
}

This is working fine.

What I want to do now is to redirect a legacy path to the new path.

Basically I want www.mydomain.com/A/B/C/XXX/YYY/ZZZ --> www.mydomain.com/A/B/XXX/YYY/ZZZ. Removing /C.

I believe I could do it with:

location /A/B/C {
    try_files $uri $uri/ /A/B/index.php?$args;
}

But it didn't work. Then I tried

location /A/B/C {
    proxy_pass http://localhost/A/B;  # note the trailing slash here, it matters!
}

I think I may need another approach, since I need to keep the /XXX/YYY part that comes after /C.
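For illustration, a rewrite-based variant of what I'm after might look like this (untested sketch):

location /A/B/C/ {
    # drop the legacy /C segment and redirect to the new path,
    # keeping whatever follows (e.g. /XXX/YYY/ZZZ)
    rewrite ^/A/B/C/(.*)$ /A/B/$1 permanent;
}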

Any help appreciated. Thank you.

Different ping feedback from seemingly identical LAN machines

Posted: 31 Mar 2022 07:22 PM PDT

A LAN of several Windows 10 machines is set up as follows:

[Image: network diagram of the LAN setup]

The Internet modem is deliberately powered down to eliminate Internet access. The router 192.168.1.1 is powered up. Two machines are wired by Ethernet cables directly into the router's Ethernet ports and set up with static IPs. An extra machine is connected to the router through WiFi with IP issued by the router's DHCP.

When I do

ping 8.8.8.8 -n 1  

from the first machine (192.168.1.2) I get the following

Pinging 8.8.8.8 with 32 bytes of data:
Request timed out.

Ping statistics for 8.8.8.8:
    Packets: Sent = 1, Received = 0, Lost = 1 (100% loss),

which is what I'd expect. Also, %errorlevel% is set to 1 (failure) after this command.

However, when I run the same command from the other two machines I get

Pinging 8.8.8.8 with 32 bytes of data:
Reply from 192.168.1.1: Destination net unreachable.

Ping statistics for 8.8.8.8:
    Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),

and %errorlevel% remains at 0 (success).
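For reference, the errorlevel check being described is essentially this (batch sketch):

ping 8.8.8.8 -n 1 >nul
if %errorlevel% equ 0 (echo ping reported success) else (echo ping reported failure)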

What exactly is going on with these two machines? They seem to report some sort of successful ping. "Sent = 1, Received = 1"? Received from where? Is this somehow normal for Windows ping to report "0% loss" in such situations? How does that agree with the "Destination net unreachable" report? What exactly is it referring to?

And what could be the possible difference between the first machine ("Request timed out") and the remaining ones ("Destination net unreachable")? What should I look for? I don't see any differences to speak of in their ipconfig /all reports.

Testing Regular Expressions

Posted: 31 Mar 2022 10:02 PM PDT

I am trying to learn regular expressions and came across some examples online. Trying to put things together, I typed this into bash:

^(([a-j][a-j]?)|(3[a-j][a-j])$  

It returns the following error

bash: !!: event not found.

Why do you think I am getting that? Should I create files a to j? Or should I create one file with a-j in it? Why is it returning that? Thank you for your help.
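For context, a pattern is normally tested against some input through a tool such as grep rather than typed directly at the prompt; a hedged sketch with a made-up test string (and the apparently missing closing parenthesis added):

# quote the pattern so bash does not interpret it
echo "ab" | grep -E '^(([a-j][a-j]?)|(3[a-j][a-j]))$'

# or use bash's built-in regex operator
[[ "ab" =~ ^([a-j][a-j]?|3[a-j][a-j])$ ]] && echo "match"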

Conditional directives based on User-Agent with Apache 2.2.x?

Posted: 31 Mar 2022 05:23 PM PDT

I want to implement something like the following in our Apache httpd configuration:

    <If "%{HTTP_USER_AGENT} !~ /something/">          RemoveEncoding .gz .tgz          AddType application/x-gzip .gz      </If>  

but my understanding is that this conditional <If> syntax only works with Apache 2.4.x. Unfortunately, I'm stuck with Apache 2.2.x for the time being. Is there a way to do this with Apache 2.2.x? Perhaps using BrowserMatch and an environment variable? Thanks!
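For reference, the BrowserMatch part on its own only sets an environment variable; a minimal sketch of that piece (whether directives like RemoveEncoding/AddType can then be made conditional on it is exactly the open question):

# Apache 2.2: set env var "ua_is_something" when the User-Agent matches
BrowserMatch "something" ua_is_something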

Using environment file in haproxy container

Posted: 31 Mar 2022 05:09 PM PDT

I'm trying, unsuccessfully, to run the official haproxy container (https://hub.docker.com/_/haproxy) with an environment file (something like this: https://www.loadbalancer.org/blog/how-to-install-haproxy-rhel/) to allow me to substitute vars in my haproxy.cfg. Example:

## env.txt
node1=www1.domain.com
node2=www2.domain.com
node_port=80

## haproxy.cfg
global
...

defaults
...

frontend somefrontend
   default_backend somebackend

backend somebackend
   mode http
   balance roundrobin
   server node1 ${node1}:${node_port}
   server node2 ${node2}:${node_port}

I can't seem to figure out how haproxy is even run in that container, so I can't work out where I would even put the environment file. I found /etc/environment, overrode it with an env file and reloaded the config, but those vars didn't take.

What I'm trying to accomplish is a Docker environment where, just by editing the environment file, I can point a node at our dev server instead of a local container. It would also be useful because I could then use the same haproxy.cfg in production as well as locally, with the env file being the only difference.
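For concreteness, the kind of invocation I'm aiming for is roughly this (hedged sketch; the config path is the one documented for the official image, and --env-file puts the variables into the haproxy process environment so the ${node1}-style substitution in haproxy.cfg can see them):

docker run -d --name haproxy \
  --env-file ./env.txt \
  -v "$(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  -p 80:80 \
  haproxy:latest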

If I want to use dig/nslookup to query about machines in a VLAN, how can I find which name server to use?

Posted: 31 Mar 2022 05:51 PM PDT

If I want to use dig/nslookup to query machines in, e.g., 38.102.145.0/24, how can I find a name server that could resolve machines in that VLAN?
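For concreteness, the kind of lookups I mean (reverse zone name derived from the /24; the example host address is made up):

# who is authoritative for the reverse zone of 38.102.145.0/24?
dig NS 145.102.38.in-addr.arpa +short

# or trace a reverse lookup for one host in the range
dig -x 38.102.145.10 +trace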

How can I remove an accept-encoding request header in nginx?

Posted: 31 Mar 2022 09:41 PM PDT

The recent update to zlib due to a security hole appears to cause a major problem when serving PHP-FPM 8.0 via nginx on Ubuntu focal. Any requests with a gzip encoding fail right at the start of the response, though nginx logs the requests as successful and the correct size. If I make requests without an Accept-Encoding header, it works perfectly. As a workaround, I'm trying to disable all gzip support, but it seems to be remarkably persistent... So far I have tried these settings in nginx:

gzip off;
fastcgi_buffering off;
add_header Accept-Encoding "";
proxy_set_header Accept-Encoding "";

and I've also checked that there are no other directives that turn these back on again by grepping nginx -T output.

However, if I dump the request headers from PHP (i.e. after it's been through nginx), I still see this accept header:

Accept-Encoding: deflate, gzip, br, zstd  

so nginx is not stripping it from the request before it's passed through to PHP-FPM. I've tried setting these directives at the server and location levels, with the same results.
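For reference, PHP-FPM receives request headers as FastCGI parameters rather than via proxy headers, so the corresponding knob would look something like this (untested sketch; the socket path is an assumption):

location ~ \.php$ {
    include fastcgi_params;
    # override the parameter that carries the Accept-Encoding request header to PHP-FPM
    fastcgi_param HTTP_ACCEPT_ENCODING "";
    fastcgi_pass unix:/run/php/php8.0-fpm.sock;
}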

In PHP I've disabled all output buffering, but it doesn't appear to be possible to disable zlib without a recompile.

How can I get nginx to strip this request header so that neither nginx nor PHP will compress responses?

nagios-nrpe-server output different vs running locally

Posted: 01 Apr 2022 12:08 AM PDT

To be sure I don't have a duplicate definition of the command, I created a new debug command name in the NRPE config:

/etc/nagios/nrpe.d # grep -R debug
debug.cfg:command[debug_check_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 5% -C -w 10000 -c 5000 -p /home -p /

Executing it via the check_nrpe plugin gives me a warning:

/usr/lib/nagios/plugins/check_nrpe -H 127.0.0.1 -c debug_check_disks
DISK WARNING - free space: / 3190413 MB (11% inode=99%); /dev 15889 MB (100% inode=99%); /dev/shm 15921 MB (100% inode=99%); /run 3183 MB (99% inode=99%); /run/lock 5 MB (100% inode=99%); /run/user/0 3184 MB (100% inode=99%); /sys/fs/cgroup 15921 MB (100% inode=99%); /boot 306 MB (66% inode=99%); /tmp 3190413 MB (11% inode=99%); /var/tmp 3190413 MB (11% inode=99%);| /=23828329MB;28436835;28441835;0;28446835 /dev=0MB;12711;15094;0;15889 /dev/shm=0MB;12736;15124;0;15921 /run=0MB;2547;3024;0;3184 /run/lock=0MB;4;4;0;5 /run/user/0=0MB;2547;3024;0;3184 /sys/fs/cgroup=0MB;12736;15124;0;15921 /boot=154MB;388;461;0;486 /tmp=23828329MB;22757468;27024493;0;28446835 /var/tmp=23828329MB;22757468;27024493;0;28446835

But running it locally reports OK.

sudo -u nagios /usr/lib/nagios/plugins/check_disk -w 20% -c 5% -C -w 10000 -c 5000 -p /home -p /
DISK OK - free space: /dev 15889 MB (100% inode=99%); /run 3183 MB (99% inode=99%); / 3190413 MB (11% inode=99%); /dev/shm 15921 MB (100% inode=99%); /run/lock 5 MB (100% inode=99%); /sys/fs/cgroup 15921 MB (100% inode=99%); /boot 306 MB (66% inode=99%); /run/user/0 3184 MB (100% inode=99%);| /dev=0MB;12711;15094;0;15889 /run=0MB;2547;3024;0;3184 /=23828329MB;28436835;28441835;0;28446835 /dev/shm=0MB;12736;15124;0;15921 /run/lock=0MB;4;4;0;5 /sys/fs/cgroup=0MB;12736;15124;0;15921 /boot=154MB;388;461;0;486 /run/user/0=0MB;2547;3024;0;3184

nagios-nrpe-server is running under the user nagios (as per default)

ps -ef | grep nagios
nagios     75200       1  0 16:39 ?        00:00:00 /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -f
root       75389   71365  0 16:45 pts/0    00:00:00 grep --color=auto nagios
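For reference, one thing worth comparing is the environment and sandboxing the daemon gives the plugin versus an interactive shell; a hedged way to check (PrivateTmp is just an example of a unit setting that changes which filesystems a process sees):

# does the NRPE unit run with filesystem sandboxing?
systemctl show nagios-nrpe-server.service -p PrivateTmp -p ProtectSystem

# run the plugin with a stripped-down environment, closer to what NRPE provides
sudo -u nagios env -i /usr/lib/nagios/plugins/check_disk -w 20% -c 5% -C -w 10000 -c 5000 -p /home -p /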

Any ideas why this discrepancy? Thanks!

The server is running Ubuntu 20.04.4; it was originally installed as Ubuntu 16.04 and has been dist-upgraded twice.

Service Account Permissions for Task Scheduler READ

Posted: 31 Mar 2022 09:28 PM PDT

I have a PowerShell script I've written to do a comparison of Scheduled Tasks between two nodes of our application server cluster. It uses this code to query the tasks from a given server...

function getTasks($server) {
    return Get-ScheduledTask -CimSession $server |
        Where-Object TaskPath -like '*OurFolder*' |
        ForEach-Object {
            [pscustomobject]@{
                Server = $server
                Path = $_.TaskPath
                Name = $_.TaskName
                Disabled = ($_.State -eq 'Disabled')
                Command = $_.Actions.Execute
                Arguments = $_.Actions.Arguments
                Interval = $_.Triggers.RepetitionInterval
                HashId = "$($_.Actions.Execute)|$($_.Actions.Arguments)"
                HashFull = "$($_.TaskPath)|$($_.TaskName)|$($_.Actions.Execute)|$($_.Actions.Arguments)|$(($_.State -eq 'Disabled'))"
            }
        }
}

It works perfectly when run under my domain admin account.

However when I try to run it under our service account as a scheduled task, it gets this error when trying to query the scheduled tasks on the other node ...

Get-ScheduledTask : MTG-P-APP1.mtg.local: Cannot connect to CIM server. Access is denied.
At F:\Applications\TaskSchedulerNodeCompare\compare-nodes.ps1:9 char:12
+     return Get-ScheduledTask -CimSession $server |
+            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ResourceUnavailable: (MSFT_ScheduledTask:String) [Get-ScheduledTask], CimJobException
    + FullyQualifiedErrorId : CimJob_BrokenCimSession,Get-ScheduledTask
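(For isolating whether it is the CIM session itself that fails, a minimal repro run as the service account is just this; the server name is taken from the error above:)

# if this fails, the problem is WinRM/CIM access, not the task query itself
$session = New-CimSession -ComputerName 'MTG-P-APP1.mtg.local'
Get-ScheduledTask -CimSession $session | Select-Object -First 1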

Googling and looking around, it LOOKS like the only way to allow an account to access this list would be to add it to the local Administrators group on the server in question? But it really doesn't feel right to have to make our service account a local admin, and obviously we don't want the task to run under my domain admin account.

I've tried solution no. 3 here, which sounds like it would be it...

1.  As an Administrator of the server, go to Server Manager -> Tools -> Computer Management. On the left expand "Services and Applications" and right click "WMI Control". Go to "Properties".
2.  In the newly opened window, click on the Security tab.
3.  Expand the Root tree, then click on the node CIMV2 and click the Security button.
4.  In the newly opened window, click the Advanced button.
5.  In the newly opened window, click the Add button under the permission tab.
6.  In the newly opened window, click on "Select a principal", then search for the user that was having the problem.
7.  In the "Applies to" box, choose "This namespace and subnamespaces".
8.  For the permissions, check "Execute Methods", "Enable Account" and "Remote Enable".
9.  Click accept on all the open dialogue boxes.
10. Restart the WMI service: as an Administrator of the server, go to Server Manager -> Tools -> Computer Management, on the left expand "Services and Applications" and click on "Services", go to "Windows Management Instrumentation", right click it and choose "Restart".
11. Try the command again. The above directions were adapted from this StackOverflow posting.

but even after doing all those steps, it still won't work.

How can I allow our service account to query (read-only) the scheduled tasks from our servers, while being as security conscious as possible?

Git over ssh on remote machine [closed]

Posted: 31 Mar 2022 05:53 PM PDT

I have a desktop computer on which I usually write programs. I've set up a few git repos on it and it works well. I am able to git pull/push when I'm physically at the computer (I launch a terminal, go into a repo folder and run my git command).

For a few weeks I won't be physically available, so I want to use ssh (with the same user as when I log in physically) to use my computer remotely. However, I'm not able to use git properly via ssh. I run the following commands to connect to the remote computer:

ssh user@remote-server
cd repository

When running a git command, I keep getting the following output/error:

git pull
Enter passphrase for key '/home/remote_user/.ssh/id_rsa':
git@github.com: Permission denied (publickey).

The key "id_rsa" is a key on the remote computer, that I used with git when I was on the computer physically. Any Idea which parameters do I have to set for git command to work?

Thank you!

A TLS fatal alert has been received with exim4 in debian 9

Posted: 31 Mar 2022 10:06 PM PDT

I am trying to configure my server to send mail, and I receive a "TLS fatal alert" error every time I try to send mail.

I have followed the steps indicated in this post related to my problem to try to overcome it, but it finally gives me the error that I describe below:

apt install gnutls-bin
cd /etc/exim4/
certtool --generate-privkey --outfile exim.key
certtool --generate-request --load-privkey exim.key --outfile exim.csr
  • Common name: gestiondecorreos.es

  • the rest I leave blank (Enter)

  • url: http://www.cacert.org/

  • login to CACert => click on "Server Certificates" => New

  • It will ask you to paste in the certificate request: I paste the content of the exim.csr file.

  • CACert will ask you to confirm the hostname.

  • After that it will show a certificate in the resulting web page. Put the certificate in a new file named exim.crt

    cd /etc/exim4/
    chgrp Debian-exim exim.key
    chmod g+r exim.key
    vim /etc/exim4/conf.d/main/000_local   (new file)

  • and insert inside:

    MAIN_LOG_SELECTOR=+tls_cipher +tls_peerdn
    MAIN_TLS_ENABLE=t

    update-exim4.conf
    /etc/init.d/exim4 restart
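For reference, the Debian exim4 macros that would point TLS at the generated key and certificate look roughly like this (a sketch; the file names are the ones from the steps above):

# /etc/exim4/conf.d/main/000_local (sketch)
MAIN_TLS_ENABLE = yes
MAIN_TLS_CERTIFICATE = /etc/exim4/exim.crt
MAIN_TLS_PRIVATEKEY = /etc/exim4/exim.key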

I try to connect to my mail server via TLS:

gnutls-cli -s -p 587 gestiondecorreos.es
ehlo gestiondecorreos.es
starttls
^D (ctrl+d)
  • the error result:

*** Starting TLS handshake
- Certificate type: X.509
- Got a certificate list of 1 certificates.
- Certificate[0] info:
 - subject `EMAIL=eguz*****@gmail.com,CN=server.example.com,OU=IT,O=Vesta Control Panel,L=San Francisco,ST=California,C=US', issuer `EMAIL=eguz*****@gmail.com,CN=server.example.com,OU=IT,O=Vesta Control Panel,L=San Francisco,ST=California,C=US', serial 0x0086e738bec1714309, RSA key 4096 bits, signed using RSA-SHA256, activated `2020-02-04 15:42:00 UTC', expires `2021-02-03 15:42:00 UTC', key-ID `sha256:6095e39dc286060d74d300f494814744d803ad2f5c55587ca38a2d7ed2b58194'
   Public Key ID:
      sha1:5f4b******************
      sha256:6095****************
   Public key's random art:
      +--[ RSA 4096]----+
      |        ..o    .o|
      |       .   o   +.|
      *******************
      |             .oo.|
      +-----------------+

- Status: The certificate is NOT trusted. The certificate issuer is unknown. The name in the certificate does not match the expected.
*** PKI verification of server certificate failed...
*** Fatal error: Error in the certificate.
*** Handshake has failed

I don't know why CN=server.example.com appears as the subject.

The /var/log/exim4/mainlog file said:

TLS error on connection from lixxxxxx.members.linode.com ([127.0.0.1]) [xxxxxxxxxxx] (gnutls_handshake): A TLS fatal alert has been received.  

In my Linode VPS the main domain is gestiondecorreos.es, and orbelanet.com is another domain I am running SMTP tests on.

Thanks in advance! Mikel

How to disable TLS 1.0 in Windows Server 2012R2

Posted: 01 Apr 2022 12:06 AM PDT

I have disabled SSL 2.0 and SSL 3.0 in Windows 2012R2 server by going into HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\ and adding entries as shown in the attachment. It is working perfectly fine.

However, that is not the case when I am trying to disable TLS 1.0. If I add entries similar to what I did for SSL 2.0 and SSL 3.0, it blocks port 443. I am not able to get my head around this.

Pictures: [TLS 1.0 - Client Key settings] [TLS 1.0 - Server Key settings]
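For comparison, the registry layout usually quoted for disabling TLS 1.0 (i.e. what the screenshots are expected to contain) is:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"Enabled"=dword:00000000
"DisabledByDefault"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server]
"Enabled"=dword:00000000
"DisabledByDefault"=dword:00000001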

Nmap result with TLS 1.0 in the registry:

nmap -p 443 --script ssl-enum-ciphers operational-assessment.int.net.xyz.com

Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-02 23:08 India Standard Time
Nmap scan report for operational-assessment.int.net.xyz.com (10.x.x.x)
Host is up (0.040s latency).

PORT    STATE  SERVICE
443/tcp closed https
MAC Address: 00:11:22:33:44:55 (Cimsys)

Nmap done: 1 IP address (1 host up) scanned in 2.23 seconds

But when I delete the TLS 1.0 entry from the registry, it works fine and the scan shows that TLS 1.0 is enabled.

NMAP result without TLS1.0 in the registry:

nmap -p 443 --script ssl-enum-ciphers operational-assessment.int.net.xyz.com

Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-02 22:40 India Standard Time
Nmap scan report for operational-assessment.int.net.xyz.com (10.x.x.x)
Host is up (0.041s latency).

PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.0:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 1024) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 1024) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
|       TLS_RSA_WITH_RC4_128_SHA (rsa 2048) - C
|       TLS_RSA_WITH_RC4_128_MD5 (rsa 2048) - C
|     compressors:
|       NULL
|     cipher preference: server
|     warnings:
|       64-bit block cipher 3DES vulnerable to SWEET32 attack
|       Broken cipher RC4 is deprecated by RFC 7465
|       Ciphersuite uses MD5 for message integrity
|       Key exchange (dh 1024) of lower strength than certificate key

Please let me know if I am doing anything wrong. I have followed a handful of links and all of them suggest the way I have been following already.

Is it possible to switch between AWS accounts without signing out first?

Posted: 31 Mar 2022 07:12 PM PDT

My organisation uses AWS Federation to handle multiple AWS accounts. However, every time I try to log into another account, I get the following error:

You must first log out before logging into a different AWS account.

This requires me to click "Sign out", and sign into the account again. This can become very tedious when often switching between multiple accounts.

Is it possible to switch between accounts without having to sign out first?

How do you restart the network service on Fedora 30?

Posted: 31 Mar 2022 11:48 PM PDT

On previous versions of RHEL/Fedora, the network service could be controlled via init scripts and (later) via systemctl. After updating DNS settings, I want to restart the network service to bounce the interface and pick up the new DNS settings (and force NetworkManager to rewrite /etc/resolv.conf).

Using systemctl, I'm getting:

# systemctl restart network
Failed to restart network.service: Unit network.service not found.

Where'd the network service go, and how do I restart the interface to pick up changes?
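For reference, on systems where the legacy network.service is gone, the usual equivalents are along these lines (hedged sketch; the connection name is an assumption):

# restart NetworkManager itself
systemctl restart NetworkManager

# or bounce a single connection so it re-applies its DNS settings
nmcli connection down "eth0" && nmcli connection up "eth0"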

Create Google Cloud Managed SSL Certificate for a subdomain

Posted: 31 Mar 2022 08:09 PM PDT

I have my main domain www.example.com hosted on Route 53 on AWS.

I've created the custom domain on Google Cloud sub.example.com and set the appropriate NS records.

What I want to do now is create a new managed SSL certificate for this subdomain as shown below:

[Image: screenshot of the Google Cloud managed SSL certificate creation form]
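For reference, the CLI form of that screen is roughly the following (hedged sketch; the certificate name is made up):

gcloud compute ssl-certificates create sub-example-cert \
    --domains=sub.example.com \
    --global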

Is this possible? Is it good practice, given that I want to continue adding more subdomains like sub1.example.com and creating a certificate for each one? Since I am keeping example.com hosted at Route 53, I don't think I can create a single managed SSL certificate covering all of the possible subdomains I may have on Google Cloud, can I?

Add LimitNOFILE on haproxy init script

Posted: 31 Mar 2022 06:06 PM PDT

I want to raise the open files limit for my HAProxy 1.8 processes to 1024576. But since I run version 1.8 from an init script rather than a systemd unit file, I cannot simply add LimitNOFILE. How can I apply an equivalent of LimitNOFILE to those processes?

*P.S: I have already changed the open files limit in /etc/security/limits.conf and /etc/security/limits.d/20-nbproc.conf, set ulimit -n to 1024576, and set fs.file-max=1024576 via sysctl. But when I do "cat /proc/{pid}/limits", the open files limit is still 4096.
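For context, HAProxy can also manage the limit itself from the global section of haproxy.cfg; a hedged sketch of that route (values made up):

global
    # HAProxy normally derives its fd limit from maxconn;
    # an explicit override also exists (use with care)
    maxconn 500000
    ulimit-n 1024576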

Windows Update bypassing server as download source

Posted: 31 Mar 2022 11:01 PM PDT

I have a Windows Server 2008 R2 SP1 machine that is isolated in a DMZ. Historically it has not had issues, but everything works before it breaks. Port 8530 is open on the firewall appliance, and I can telnet from the client to the server, which proves the site is reachable and open.

This machine is not attached to the domain, so the WSUS server is set in the registry. Under HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate I have:

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://kanwsus2k16:8530"
"WUStatusServer"="http://kanwsus2k16:8530"
"DoNotConnectToWindowsUpdateInternetLocations"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"UseWUServer"=dword:00000001

The windowsupdate.log corroborates this. I would like to include only what is required, to keep the post length down. The client reaches out to the server and sees that it has X available updates. However, it fails to download them. The log shows entries like this:

2018-05-07  11:05:19:960     668    47c DnldMgr BITS job {7835096F-E02C-4B66-AD0F-3D71EF17C73B} hit a transient error, updateId = {3FD57624-1808-41C7-979D-8606CA1229B6}.202, error = 0x80072EE2
... output truncated ....
2018-05-07  11:05:40:963     668    47c Misc    WARNING: SendRequest failed with hr = 80072ee2. Proxy List used: <(null)> Bypass List used : <(null)> Auth Schemes used : <>
2018-05-07  11:05:40:963     668    47c Misc    WARNING: WinHttp: SendRequestUsingProxy failed for <http://wsus.ds.download.windowsupdate.com/d/msdownload/update/software/secu/2018/04/windows6.1-kb4093118-x64-express_c1473ce4b149cf34239c364a9787030447e376ca.cab>. error 0x80072ee2

With regards to SendRequestUsingProxy failing, that should fail: the server does not have access to Microsoft websites, so it will be blocked from going there. What I can't figure out is why it isn't getting the updates from the WSUS server directly. We do not use a proxy, nor is one configured.

On the WSUS server side of things, I see that it gets a "download failed" status for each of the updates. So in short, the communication is there, but the client is trying to download the updates externally. It is a 2k16 server, and reading the logs with Get-WindowsUpdateLog has not proven useful.
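For what it's worth, one server-side setting that determines whether clients are sent to Microsoft's CDN for the actual files is whether WSUS stores update content locally; a PowerShell sketch to check it on the WSUS server:

# $true means clients are expected to download binaries from Microsoft Update,
# $false means the WSUS server hosts them itself
(Get-WsusServer).GetConfiguration() | Select-Object HostBinariesOnMicrosoftUpdate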

This is the only external server I have on the network, so I do not have any comparison systems to gauge exactly where the problem is.

In an attempt to test connectivity to the server, I tried to browse to http://kanwsus2k16:8530/selfupdate/wuident.cab, which is met with "page cannot be displayed" on the client server. (That link works fine on the internal network.)

Why is my Windows Update client not honoring the WSUS path for updates and instead attempting to go externally for Microsoft?


Other things I have tried:

  • System Update Readiness Tool for Windows Server 2008 R2 x64 Edition
  • Clearing BITS Queue
  • Renaming SoftwareDistribution folder
  • Verified nothing is being blocked from the networking side going to WSUS server on port 8530
  • Added DoNotConnectToWindowsUpdateInternetLocations equal to 1 in HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate

multiple ipv6 routers on the same physical network, how to get it working?

Posted: 31 Mar 2022 10:06 PM PDT

I have multiple Internet routers from the same provider (I'll call them "boxes"), all linked to the same hardware network switch, forming a physical network I'll call "NML" (for "No Man's Land"). Also connected to that network are two routers (I'll call them "routers") doing routing and firewall duty for a private LAN:

_____________________________________________ WAN (Internet)
    |           |           |           |
    |           |           |           |
 ___|___     ___|___     ___|___     ___|___
[       ]   [       ]   [       ]   [       ]
[ Box 1 ]   [ Box 2 ]   [ Box 3 ]   [ Box 4 ]
[ 0:A:1 ]   [ 0:B:1 ]   [ 0:C:1 ]   [ 0:D:1 ]
[_______]   [_______]   [_______]   [_______]
    |           |           |           |
    |           |           |           |
____|___________|___________|___________|____ NML (A physical network
          |                       |            for boxes and routers)
          |                       |
      ____|____               ____|____
     [         ]             [         ]
     [ Router1 ]             [ Router2 ]
     [  0:A:2  ]             [  0:A:3  ]
     [  0:B:2  ]             [  0:B:3  ]
     [  0:C:2  ]             [  0:C:3  ]
     [  0:D:2  ]             [  0:D:3  ]
     [_________]             [_________]
          |                       |
          |                       |
__________|_______________________|__________ LAN (Where people are working)

On the IPv4 side, NML is a local network using a private IPv4 range, and it works.

On the IPv6 side, the boxes all sit under the same provider prefix, each advertising its own /64, and each router gets an auto-configured IPv6 address from each box (each router gets four IPv6 addresses, one per box).

To keep it simple, you can imagine the IPv6 addresses as 3 characters:

  • 0:A:1: 0 prefix from the provider network prefix, A prefix from the first box's network prefix, 1 suffix for the first box's own address;
  • 0:A:2: 0 prefix from the provider network prefix, A prefix from the first box's network prefix, 2 suffix for the first router's address obtained from the first box;
  • 0:C:1: 0 prefix from the provider network prefix, C prefix from the third box's network prefix, 1 suffix for the third box's own address;
  • 0:C:3: 0 prefix from the provider network prefix, C prefix from the third box's network prefix, 3 suffix for the second router's address obtained from the third box.

So each router gets four IPv6 addresses, one per box.

  • From router 1 I can ping6 every box using its IPv6 address (0:A:1, 0:B:1, 0:C:1, 0:D:1);
  • From router 1 I can ping6 every router 2 IPv6 address (0:A:3, 0:B:3, 0:C:3, 0:D:3), and the opposite is true;
  • From both router 1 and router 2 I can't ping6 any external IPv6 address, even when I add an explicit route through one box for that address (ip -6 route add something via 0:A:1), whether I use the box's IPv6 address from the provider prefix or the box's link-local address as the gateway.

At first only one box had IPv6 activated, and at that time I was able to reach the Internet over IPv6; but since I activated IPv6 on all the boxes I can no longer reach the Internet over IPv6. If I do a traceroute6 to an Internet address from a router, it never gets beyond the box.

Note that I don't need IPv6 from the LAN through the routers at this time; I only need the routers themselves to reach the Internet over IPv6 (mainly to build some VPNs over IPv6).

The Internet routers (the "boxes") are the property of the ISP, and the only option I have is an "enable IPv6" checkbox on the customer page; I have no access to the router configuration itself and no option other than enabling or disabling IPv6. The routers between LAN and NML (the "routers") are standard Debian systems running on x86-based networking hardware. From a Debian point of view you can treat each one like a PC: I can do whatever a standard Debian system can do.

So, two questions:

  • Is the setup correct and expected to work? If yes, where should I look to find the problem?
  • Is the setup incorrect and not expected to work at all? If yes, what can I do to fix it?

I removed some "expires" information and similar detail to reduce verbosity, sometimes added leading zeros to align addresses for easier reading, and replaced some bits so the addresses fall in the IPv6 documentation prefix.

# my boxes' ipv6 addresses
box0    2001:db8:ee84:2180::1
box1    2001:db8:ee84:21c0::1
box2    2001:db8:2f13:1ea0::1
box3    2001:db8:399a:08f0::1
box4    2001:db8:399a:39e0::1

# my router1's ipv6 addresses
# ip -6 addr show dev eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:db8:ee84:2180:200:24ff:fed1:3d9e/64 scope global mngtmpaddr dynamic
    inet6 2001:db8:ee84:21c0:200:24ff:fed1:3d9e/64 scope global mngtmpaddr dynamic
    inet6 2001:db8:2f13:1ea0:200:24ff:fed1:3d9e/64 scope global mngtmpaddr dynamic
    inet6 2001:db8:399a:08f0:200:24ff:fed1:3d9e/64 scope global mngtmpaddr dynamic
    inet6 2001:db8:399a:39e0:200:24ff:fed1:3d9e/64 scope global mngtmpaddr dynamic
    inet6 fe80::200:24ff:fed1:3d9e/64 scope link

# my router2's ipv6 addresses
ip -6 addr show dev eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:db8:ee84:2180:200:24ff:fed1:6336/64 scope global mngtmpaddr dynamic
    inet6 2001:db8:ee84:21c0:200:24ff:fed1:6336/64 scope global mngtmpaddr dynamic
    inet6 2001:db8:2f13:1ea0:200:24ff:fed1:6336/64 scope global mngtmpaddr dynamic
    inet6 2001:db8:399a:08f0:200:24ff:fed1:6336/64 scope global mngtmpaddr dynamic
    inet6 2001:db8:399a:39e0:200:24ff:fed1:6336/64 scope global mngtmpaddr dynamic
    inet6 fe80::200:24ff:fed1:6336/64 scope link

# default ipv6 routes on router1
# ip -6 route
2001:db8:ee84:2180::/64 dev eth2  proto kernel  metric 256
2001:db8:ee84:21c0::/64 dev eth2  proto kernel  metric 256
2001:db8:2f13:1ea0::/64 dev eth2  proto kernel  metric 256
2001:db8:399a:08f0::/64 dev eth2  proto kernel  metric 256
2001:db8:399a:39e0::/64 dev eth2  proto kernel  metric 256
fe80::/64 dev eth2  proto kernel  metric 256
default via fe80::e69e:12ff:fe04:286f dev eth2  proto ra  metric 1024 hoplimit 64
default via fe80::e69e:12ff:fe03:8b35 dev eth2  proto ra  metric 1024 hoplimit 64
default via fe80::e69e:12ff:fe02:10de dev eth2  proto ra  metric 1024 hoplimit 64
default via fe80::0224:d4ff:fea7:f258 dev eth2  proto ra  metric 1024 hoplimit 64
default via fe80::0224:d4ff:febb:af9e dev eth2  proto ra  metric 1024 hoplimit 64

# default ipv6 routes on router2
# ip -6 route
2001:db8:ee84:2180::/64 dev eth2  proto kernel  metric 256
2001:db8:ee84:21c0::/64 dev eth2  proto kernel  metric 256
2001:db8:2f13:1ea0::/64 dev eth2  proto kernel  metric 256
2001:db8:399a:08f0::/64 dev eth2  proto kernel  metric 256
2001:db8:399a:39e0::/64 dev eth2  proto kernel  metric 256
fe80::/64 dev eth2  proto kernel  metric 256
default via fe80::e69e:12ff:fe03:8b35 dev eth2  proto ra  metric 1024 mtu 1480 hoplimit 64
default via fe80::0224:d4ff:febb:af9e dev eth2  proto ra  metric 1024 hoplimit 64
default via fe80::e69e:12ff:fe02:10de dev eth2  proto ra  metric 1024 hoplimit 64
default via fe80::e69e:12ff:fe04:286f dev eth2  proto ra  metric 1024 hoplimit 64
default via fe80::0224:d4ff:fea7:f258 dev eth2  proto ra  metric 1024 hoplimit 64

# neigh from router1
# ip -6 neigh | grep eth2
fe80::224:d4ff:febb:af9e dev eth2 lladdr 00:24:d4:bb:af:9e router STALE
fe80::200:24ff:fed1:6336 dev eth2 lladdr 00:00:24:d1:63:36 STALE
fe80::e69e:12ff:fe04:286f dev eth2 lladdr e4:9e:12:04:28:6f router REACHABLE
2001:db8:2f13:1ea0::1 dev eth2 lladdr 00:24:d4:bb:af:9e router STALE
fe80::e69e:12ff:fe02:10de dev eth2 lladdr e4:9e:12:02:10:de router STALE
2001:db8:399a:39e0:200:24ff:fed1:6336 dev eth2 lladdr 00:00:24:d1:63:36 STALE
2001:db8:399a:8f0:200:24ff:fed1:6336 dev eth2 lladdr 00:00:24:d1:63:36 STALE
2001:db8:399a:39e0::1 dev eth2 lladdr 00:24:d4:a7:f2:58 router STALE
2001:db8:399a:8f0::1 dev eth2 lladdr e4:9e:12:02:10:de router STALE
2001:db8:399a:39e0:: dev eth2  FAILED
fe80::213:46ff:fe8f:1e4a dev eth2 lladdr 00:13:46:8f:1e:4a STALE
2001:db8:399a:8f0:: dev eth2  FAILED
fe80::e69e:12ff:fe03:8b35 dev eth2 lladdr e4:9e:12:03:8b:35 router STALE
fe80::224:d4ff:fea7:f258 dev eth2 lladdr 00:24:d4:a7:f2:58 router STALE
fe80::8226:89ff:fe2d:b3d3 dev eth2 lladdr 80:26:89:2d:b3:d3 STALE
fe80::20a:f7ff:fe12:e77 dev eth2 lladdr 00:0a:f7:12:0e:77 STALE
2001:db8:ee84:2180::1 dev eth2 lladdr e4:9e:12:03:8b:35 router STALE
fe80::21d:9ff:fe2c:628d dev eth2 lladdr 00:1d:09:2c:62:8d STALE

# get from router1
# ip -6 route get 2001:4860:4860::8888
2001:4860:4860::8888 from :: via fe80::e69e:12ff:fe04:286f dev eth2  proto ra  src 2001:db8:399a:39e0:200:24ff:fed1:3d9e  metric 1024  hoplimit 64

# neigh from router2
# ip -6 neigh | grep eth2
2001:db8:399a:8f0:200:24ff:fed1:3d9e dev eth2 lladdr 00:00:24:d1:3d:9e STALE
2001:db8:399a:8f0::1 dev eth2 lladdr e4:9e:12:02:10:de router STALE
fe80::e69e:12ff:fe04:286f dev eth2 lladdr e4:9e:12:04:28:6f router STALE
fe80::224:d4ff:fea7:f258 dev eth2 lladdr 00:24:d4:a7:f2:58 router DELAY
fe80::e69e:12ff:fe02:10de dev eth2 lladdr e4:9e:12:02:10:de router REACHABLE
fe80::200:24ff:fed1:3d9e dev eth2 lladdr 00:00:24:d1:3d:9e STALE
fe80::224:d4ff:febb:af9e dev eth2 lladdr 00:24:d4:bb:af:9e router STALE
2001:db8:399a:39e0:200:24ff:fed1:3d9e dev eth2 lladdr 00:00:24:d1:3d:9e STALE
fe80::e69e:12ff:fe03:8b35 dev eth2 lladdr e4:9e:12:03:8b:35 router REACHABLE

# get from router2
ip -6 route get 2001:4860:4860::8888
2001:4860:4860::8888 from :: via 2001:db8:399a:8f0::1 dev eth2  src 2001:db8:399a:39e0:200:24ff:fed1:6336  metric 1024

A traceroute example:

# traceroute from router1
# traceroute 2001:4860:4860::8888
traceroute to 2001:4860:4860::8888 (2001:4860:4860::8888), 30 hops max, 80 byte packets
 1  2001:db8:ee84:21c0::1 (2001:db8:ee84:21c0::1)  26.146 ms  27.728 ms  28.507 ms
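
One possible reading of the ip -6 route get output above: the kernel picks one of the five RA default gateways and a source address independently, so a packet can leave through one box with a source address taken from another box's prefix, and the ISP boxes may silently drop such packets. Below is a minimal sketch to test that theory on router1, assuming eth2 is the NML-facing interface and reusing addresses from the question; the link-local gateway used is the one the neighbour table above pairs with 2001:db8:ee84:2180::1, so verify the mapping on your own routers first.

# Hedged sketch, not a confirmed fix: pin router1 to a single box so the default
# gateway and the source address come from the same /64.
sysctl -w net.ipv6.conf.eth2.accept_ra=0          # stop learning five RA default routes
ip -6 route flush proto ra dev eth2               # drop the RA routes already learned
ip -6 addr flush dev eth2 scope global            # drop the other boxes' addresses
ip -6 addr add 2001:db8:ee84:2180:200:24ff:fed1:3d9e/64 dev eth2
ip -6 route replace default via fe80::e69e:12ff:fe03:8b35 dev eth2

If that restores IPv6 connectivity, a longer-term option is to keep all the addresses and use source-based policy routing (ip -6 rule add from <prefix> lookup <table>) so each source prefix exits through its matching box.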

Robocopy - Copy a single file from a directory and overwrite a file in a destination directory if its newer

Posted: 31 Mar 2022 08:09 PM PDT

All, I've come across an issue with deploying a time sheet to users. I've researched robocopy a little bit and think it might be a solution.

I need to overwrite the copy of the time sheet located on the public desktop of each user every time I make changes to it. It has become a hassle navigating to each user's public desktop, especially for users connected through VPN on a poor connection.

Is there a way to copy the time sheet from a directory on a server, overwrite the old copy on each user's machine, and attach that to a scheduled task so I don't have to reach out to each user every time I update the time sheet?
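
Robocopy can do this. A minimal sketch, assuming a hypothetical share \\fileserver\share\timesheets and file name Timesheet.xlsx (both placeholders), run on each client as a scheduled task:

:: Hedged sketch; the share path and file name are placeholders.
:: Copy only Timesheet.xlsx, skipping the copy when the server's file is not newer
:: (/XO excludes source files that are older than the destination copy).
robocopy \\fileserver\share\timesheets C:\Users\Public\Desktop Timesheet.xlsx /XO /R:2 /W:5

:: Register it to run hourly as SYSTEM on the client (e.g. deployed via GPO).
schtasks /Create /TN CopyTimesheet /SC HOURLY /RU SYSTEM /TR "robocopy \\fileserver\share\timesheets C:\Users\Public\Desktop Timesheet.xlsx /XO /R:2 /W:5"

A Group Policy Preferences "Files" item pointed at the same share is another way to push the copy without creating a task on every machine.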

Access remote VLAN over IPsec VPN using Zyxel routers

Posted: 31 Mar 2022 06:06 PM PDT

I have a central site with a Zyxel Zywall 310 and a remote site with a Zyxel USG 20w. I also have a working IPsec VPN between the two sites.

PCs on LAN1 of the remote site can access Server1 on LAN1 of the central site, but not Server2 on VLAN4 of the central site.

What rules would I need to add to allow PCs at the remote site (behind the USG 20w) to access Server2 on VLAN4 at the central site (behind the Zywall 310)?

Here's what the network looks like:

[network diagram image from the original post]

I suspect the solution may involve Policy Routes or Static Routes (I currently have none set; I've tinkered with them a bit but was unable to get anything working).

Processes spawning randomly and sucking CPU

Posted: 31 Mar 2022 11:01 PM PDT

I am currently on Ubuntu 16.04, and I have noticed slowdowns across the server in general. Viewing htop, I noticed that processes with random commands are spawning and taking CPU with them; here is the image that shows an offending process. When trying to view where the process was started from, the pts (TTY) column shows '?', as below:

# ps -feww | grep netstat
root      7444     1 91 01:29 ?        00:01:37 netstat -antop
root     13051     1  0 01:31 ?        00:00:00 netstat -antop
root     13063     1  0 01:31 ?        00:00:00 netstat -antop

I successfully killed the process with signal 9, but after a few seconds another process with a completely different command popped up and ran until I killed it. Rebooting the server did not fix this.
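
A minimal diagnostic sketch, reusing PID 7444 from the ps output above (any of the spawned PIDs works); nothing here is specific to this box beyond that PID:

# Check what binary the suspicious process is really running (malware often fakes the name)
ls -l /proc/7444/exe                      # symlink to the real executable, may show "(deleted)"
tr '\0' ' ' < /proc/7444/cmdline; echo    # full command line as the process sees it

# Look for persistence mechanisms that respawn it after a kill
crontab -l; ls -l /etc/cron* /var/spool/cron/crontabs 2>/dev/null
systemctl list-units --type=service --state=running

If the target of /proc/<pid>/exe is not the system netstat binary, treating the box as compromised and rebuilding it is generally safer than trying to clean it in place.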

Would appreciate some advice on this, thanks!

Change TMP/TEMP variables for a Domain Service Account

Posted: 01 Apr 2022 12:06 AM PDT

I want to change the TMP and TEMP variables for a Domain Service Account.

Normally, for local users I can change these variables via regedit > HKEY_USERS > (SID of the account).

But for the Domain Service Account I can't find the SID in HKEY_USERS.

How can I change these variables for such an account?
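
A hedged PowerShell sketch of one approach: resolve the account's SID first, then edit the Environment key under HKEY_USERS\<SID>, which only exists while the account's profile is loaded. DOMAIN\svc-account and D:\Temp are placeholders.

# Hedged sketch; account name and paths are placeholders.
$acct = New-Object System.Security.Principal.NTAccount('DOMAIN\svc-account')
$sid  = $acct.Translate([System.Security.Principal.SecurityIdentifier]).Value
$sid   # compare against the hives listed under HKEY_USERS

# Works only while the account's profile hive is loaded (e.g. the service runs with a
# loaded profile); otherwise the hive must be loaded manually from NTUSER.DAT with reg load.
Set-ItemProperty -Path "Registry::HKEY_USERS\$sid\Environment" -Name TMP  -Value 'D:\Temp'
Set-ItemProperty -Path "Registry::HKEY_USERS\$sid\Environment" -Name TEMP -Value 'D:\Temp'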

postfix deliveries per connection

Posted: 31 Mar 2022 10:16 PM PDT

I hope you can assist me with this case.

I am administering a Postfix server that is used for newsletters. Recently one of the major recipient domains changed its policy to accept only one email per SMTP session/connection. To adhere to that policy I tried the following main.cf settings, which relate to delivery concurrency, but so far they haven't helped.

(I've tested with values as low as 1)

  • initial_destination_concurrency
  • default_destination_concurrency_limit
  • smtp_destination_concurrency_limit

The error I am facing is: dsn=4.4.2, status=deferred, along with a link telling me to send a single email per SMTP connection.

Postfix version: 2.9.6

Any suggestions will be appreciated!
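
The three parameters above limit how many parallel deliveries Postfix makes to a destination; they do not control how many messages are sent over one connection. What usually matters for "one message per connection" is SMTP connection caching/reuse. A hedged sketch of disabling reuse for just that domain via a dedicated transport (example.com and the transport name "slow" are placeholders; the parameters are standard Postfix ones available in 2.9):

# /etc/postfix/main.cf -- route the strict domain through its own transport
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport (then run: postmap /etc/postfix/transport)
example.com    slow:

# /etc/postfix/master.cf -- a clone of the smtp client with connection reuse disabled
slow      unix  -       -       n       -       -       smtp
    -o smtp_connection_cache_on_demand=no
    -o smtp_connection_cache_destinations=

Reload Postfix afterwards; whether disabling connection reuse alone satisfies this provider is not guaranteed, but it is the setting that actually governs messages per connection.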

How to configure CentOS Iptables without getting locked out

Posted: 31 Mar 2022 07:40 PM PDT

I am trying to apply these firewall rules:

/sbin/iptables -F
/sbin/iptables -X
/sbin/iptables -Z
/sbin/iptables -P INPUT DROP
/sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
/sbin/iptables -A INPUT -p tcp ! --syn -j REJECT --reject-with tcp-reset
/sbin/iptables -A INPUT -m state --state INVALID -j DROP
/sbin/iptables -P OUTPUT DROP
/sbin/iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp ! --syn -j REJECT --reject-with tcp-reset
/sbin/iptables -A OUTPUT -m state --state INVALID -j DROP
/sbin/iptables -P FORWARD DROP
/sbin/iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
/sbin/iptables -A FORWARD -p tcp ! --syn -j REJECT --reject-with tcp-reset
/sbin/iptables -A FORWARD -m state --state INVALID -j DROP
/sbin/iptables -A INPUT -i lo -j ACCEPT
/sbin/iptables -A OUTPUT -o lo -j ACCEPT
/sbin/iptables -A FORWARD -i lo -o lo -j ACCEPT
/sbin/iptables -t mangle -F
/sbin/iptables -t mangle -X
/sbin/iptables -t mangle -Z
/sbin/iptables -t mangle -P PREROUTING ACCEPT
/sbin/iptables -t mangle -P OUTPUT ACCEPT
/sbin/iptables -t mangle -P INPUT ACCEPT
/sbin/iptables -t mangle -P FORWARD ACCEPT
/sbin/iptables -t mangle -P POSTROUTING ACCEPT
/sbin/iptables -t nat -F
/sbin/iptables -t nat -X
/sbin/iptables -t nat -Z
/sbin/iptables -t nat -P PREROUTING ACCEPT
/sbin/iptables -t nat -P OUTPUT ACCEPT
/sbin/iptables -t nat -P POSTROUTING ACCEPT

/sbin/iptables -A INPUT -p tcp --dport 12443 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 11443 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 11444 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 8447 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 8443 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 8880 -j DROP

/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 443 -j ACCEPT

/sbin/iptables -A INPUT -p tcp --dport 21 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 22 -j DROP

/sbin/iptables -A INPUT -p tcp --dport 587 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 25 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 465 -j ACCEPT

/sbin/iptables -A INPUT -p tcp --dport 110 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 995 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 143 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 993 -j ACCEPT

/sbin/iptables -A INPUT -p tcp --dport 106 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 3306 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 5432 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 9008 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 9080 -j DROP

/sbin/iptables -A INPUT -p udp --dport 137 -j DROP
/sbin/iptables -A INPUT -p udp --dport 138 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 139 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 445 -j DROP

/sbin/iptables -A INPUT -p udp --dport 1194 -j DROP

/sbin/iptables -A INPUT -p tcp --dport 26 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 53 -j ACCEPT
/sbin/iptables -A INPUT -p udp --dport 53 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 2095 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 2096 -j ACCEPT
/sbin/iptables -A INPUT -p udp --dport 465 -j ACCEPT

/sbin/iptables -A OUTPUT -p tcp --dport 25 -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp --dport 26 -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp --dport 37 -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp --dport 43 -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp --dport 113 -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp --dport 465 -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp --dport 873 -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp --dport 2089 -j ACCEPT
/sbin/iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
/sbin/iptables -A OUTPUT -p udp --dport 465 -j ACCEPT
/sbin/iptables -A OUTPUT -p udp --dport 873 -j ACCEPT

/sbin/iptables -A INPUT -p icmp --icmp-type 8/0 -j DROP
/sbin/iptables -A INPUT -j DROP
/sbin/iptables -A OUTPUT -j ACCEPT
/sbin/iptables -A FORWARD -j DROP

However, when I copy and paste them into the command line I get locked out of the server (of course, since the rules are applied line by line). How do I apply these rules all at once, given that /sbin/iptables -P INPUT DROP is executed first while the line that allows my shell access only comes later? (I have removed that line above to protect my IP.)
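
Two common ways around this are sketched below: run the rules from a script file so they are applied within milliseconds instead of at paste speed, or better, feed a complete ruleset to iptables-restore, which loads it atomically. The file paths are placeholders, and the at job is only a safety net against locking yourself out.

# Hedged sketch; paths are placeholders.

# Safety net first: schedule an automatic "open up" in 5 minutes in case the new rules
# lock you out (cancel it with atrm once you have confirmed you still have access).
echo '/sbin/iptables -P INPUT ACCEPT; /sbin/iptables -F INPUT' | at now + 5 minutes

# Option 1: put the commands in a script and run it, so the ESTABLISHED,RELATED rule
# follows the DROP policy within the same instant and the SSH session usually survives.
bash /root/firewall.sh

# Option 2 (preferred): write the rules in iptables-save format and load them atomically.
/sbin/iptables-restore < /root/iptables.rules
# With the iptables init service (package iptables-services on newer CentOS),
# "service iptables save" then persists the running rules to /etc/sysconfig/iptables.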
