Thursday, May 19, 2022

Recent Questions - Server Fault



Backup /var/lib/docker without the images?

Posted: 19 May 2022 07:51 AM PDT

I want to make a backup of all my containers and volumes, so the easiest way would be to copy /var/lib/docker to another location.

However this directory also includes all the images, and I don't want to include them since they all can easily be re-downloaded from public sources.

So how can I copy this directory while excluding the images?
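For what it's worth, here is a rough sketch of one approach in Python, assuming the image data lives under top-level directories such as image/ and overlay2/ (an assumption to verify first: overlay2/ also holds container writable layers, so excluding it may lose container filesystem changes):

```python
import os
import tarfile

# Hypothetical top-level directories holding image data -- verify against
# your storage driver before relying on this list.
DEFAULT_EXCLUDES = ("image", "overlay2")

def backup_docker_dir(src, dest, excludes=DEFAULT_EXCLUDES):
    """Archive `src` into the gzipped tar file `dest`, skipping the
    top-level directories named in `excludes`."""
    def _filter(tarinfo):
        # Member names are relative to `src` because of arcname below.
        top = tarinfo.name.split("/", 1)[0]
        return None if top in excludes else tarinfo

    with tarfile.open(dest, "w:gz") as tar:
        for entry in os.listdir(src):
            if entry in excludes:
                continue
            tar.add(os.path.join(src, entry), arcname=entry, filter=_filter)
```

Whatever tool you use, stop the Docker daemon before copying so the files on disk are in a consistent state.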

Expose pfsense port on windows hyper-v

Posted: 19 May 2022 07:49 AM PDT

I need advice. I have a pfSense server running on a Hyper-V host with this topology:

  • The Hyper-V host has 2 network interfaces:

    1. Interface with a public IP

    2. Interface with a private IP that communicates with the pfSense VMs

Internet --> Hyper-V Host --> Pfsense VMs --> VM A  

Is there a way to expose the web app ports (80/443) on VM A so I can access it directly via the public IP of the Hyper-V host?

I appreciate all answers.

mariadb-client cannot connect to db throwing "RSA Encryption not supported"

Posted: 19 May 2022 07:37 AM PDT

I have a docker-compose setup with 'db' and 'web' containers. The db is a mysql:8.0 image, and the web is python:3.9-slim.

If I try to connect to the MySQL server inside the db container, it works. But not if I try it inside the web container, from where I get the following error:

root@c08888899ca9:/local/app# mysql -h db -u root -p123qwe
ERROR 2061 (HY000): RSA Encryption not supported - caching_sha2_password plugin was built with GnuTLS support

The mysql clients differ between containers: the db client uses the community-mysql client:

mysql  Ver 8.0.28 for Linux on x86_64 (MySQL Community Server - GPL)  

while the web container client uses a mariadb-client:

mysql  Ver 15.1 Distrib 10.3.34-MariaDB, for debian-linux-gnu (x86_64)  

And, the server version is:

mysql> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 8.0.28    |
+-----------+

Any ideas on how to solve the "caching_sha2_password plugin" error?

Many thanks in advance.

configure inactivity timeout while handling keepalive probes

Posted: 19 May 2022 06:46 AM PDT

I am developing a TCP echo server using python and the socket library.

I'd like to have a timeout configured for each incoming connection, so that it is dropped and closed after SOCK_TIMEOUT seconds of inactivity.

This is achieved with the setting: client_sock.settimeout(SOCK_TIMEOUT)

At the same time, I would like to keep connections that use the keepalive mechanism active. So, if the server receives a keepalive probe packet from a given client, I'd like the timeout not to close that particular client/connection.

Q1 >> Does this make sense?

I'd say this should be feasible. However, the socket server does not handle the keepalive probes as I intended: even though the server acknowledges them (an ACK is returned for each probe), the connection is still closed at the timeout.

Q2 >> Should I change the timeout implementation?
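For concreteness, a minimal sketch (the function name is my own) of how the two settings are typically applied together. Note that keepalive probes and their ACKs are generated and answered by the kernel and never reach recv(), so they do not reset the settimeout() clock: the two mechanisms are independent.

```python
import socket

SOCK_TIMEOUT = 5.0  # hypothetical inactivity limit, in seconds

def configure_client_socket(client_sock):
    """Apply an application-level inactivity timeout plus TCP keepalive.

    settimeout() bounds how long recv()/send() may block waiting for
    application data; SO_KEEPALIVE makes the kernel probe idle peers.
    The kernel exchanges the probes and their ACKs itself, so they are
    invisible to recv() and do not count as "activity" for the timeout.
    """
    client_sock.settimeout(SOCK_TIMEOUT)
    client_sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    return client_sock
```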

How to validate variables contents in Ansible?

Posted: 19 May 2022 07:44 AM PDT

ansible-lint only checks the tasks/handlers and doesn't iterate over the variables (e.g. if you're using with_items, it won't iterate over all the items), and yamllint only checks cosmetic issues and is hard to customize with custom rules.

Is there a tool that can validate the actual data in the variables in YAML files before they are fed into Ansible?

Examples:

  • A given variable cannot contain a specific string
  • Variable user_ssh_key fed to authorized_keys cannot have a comment
  • Variable ssh_enabled fed to service module cannot be True
  • and so on...
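Absent a dedicated tool, one fallback is a small custom check script run before Ansible. Here is a sketch of the example rules above, assuming the variables have already been parsed into a dict (e.g. with a YAML loader); the function name and rules are illustrative only:

```python
def validate_vars(vars_dict):
    """Check parsed variable data (e.g. loaded from a YAML vars file)
    against custom rules; returns a list of violation messages."""
    errors = []

    # Rule: user_ssh_key fed to authorized_keys cannot have a comment.
    # An authorized_keys entry is "key-type base64-blob [comment]",
    # so a third whitespace-separated field is a comment.
    key = str(vars_dict.get("user_ssh_key", ""))
    if len(key.split()) > 2:
        errors.append("user_ssh_key must not contain a comment")

    # Rule: ssh_enabled fed to the service module cannot be True.
    if vars_dict.get("ssh_enabled") is True:
        errors.append("ssh_enabled must not be True")

    return errors
```

Run in CI, a non-empty return value would fail the pipeline before the vars ever reach Ansible.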

Nginx - React app makes requests with the wrong host

Posted: 19 May 2022 06:21 AM PDT

I am new to Docker and nginx, so excuse me if I am saying something silly.

I am using nginx with Docker, and my React app is trying to make a request to my Node backend app. But something goes wrong: the React app is making the request to http://localhost:3335/getFileLogs

The correct URL would be https://localhost:8443/api-adminportal/getFileLogs, where 8443 is the port that the API server is listening on.

How can I make nginx send requests to https://localhost:8443/api-adminportal/ instead of http://localhost:3335/?

My nginx default conf file:

server {
    listen 80;
    server_name localhost 127.0.0.1 0.0.0.0;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name localhost 127.0.0.1 0.0.0.0;

    ssl_certificate /etc/nginx/cert/cert.pem;
    ssl_certificate_key /etc/nginx/cert/key.pem;

    location / {
        proxy_pass http://fpr-frontend:3000/;
    }

    # include /etc/nginx/conf.d/koma/*.conf;

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

server {
    listen 8443 ssl;
    server_name localhost 127.0.0.1 0.0.0.0;

    ssl_certificate /etc/nginx/cert/cert.pem;
    ssl_certificate_key /etc/nginx/cert/key.pem;

    location /adminportal/ {
        proxy_pass http://fpr-adminportal:3001/adminportal/;
    }

    location /api-adminportal/ {
        proxy_pass https://fpr-backend:3335/;
    }

    location /api-portal/ {
        proxy_pass https://fpr-backend:3333/;
    }

    # include /etc/nginx/conf.d/koma/*.conf;

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

My docker compose file:

version: "3.2"

volumes:
  mongodata:

services:
  fpr-backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
    image: fpr-backend
    env_file: ./config/fpr-backend.env
    environment:
      FRONTEND_URL: https://localhost
      FRONTEND_ADMIN_PORTAL_URL: https://localhost:8442/adminportal
    depends_on:
      - mongo
    expose:
      - "3333"
      - "3335"
    ports:
      - "3333:3333/tcp"
      - "3335:3335/tcp"
    command: ["yarn", "start"]

  fpr-frontend:
    build:
      context: .
      dockerfile: Dockerfile.frontend
      args:
        VITE_BACKEND_URL: https://localhost:8443/api-portal/
    image: fpr-frontend
    env_file: ./config/fpr-frontend.env
    depends_on:
      - fpr-backend
    expose:
      - "3000"
    command: ["yarn", "serve", "--host=fpr-frontend", "--port=3000"]

  fpr-adminportal:
    stdin_open: true # docker run -i
    tty: true        # docker run -t
    build:
      context: .
      dockerfile: Dockerfile.adminportal
      args:
        PORT: 3001
        REACT_APP_API_URL: https://localhost:8443/api-adminportal/
    image: fpr-adminportal
    env_file: ./config/fpr-adminportal.env
    environment:
      - PUBLIC_URL=https://localhost:8442/adminportal/
    depends_on:
      - fpr-backend
    expose:
      - "3001"
    command: ["yarn", "dev"]

  mongo:
    image: mongo
    env_file: ./config/mongo.env
    volumes:
      - mongodata:/etc/mongo

  nginx:
    image: nginx
    depends_on:
      - fpr-backend
      - fpr-frontend
      - fpr-adminportal
    volumes:
      - ./config/nginx/conf.d/:/etc/nginx/conf.d/
      - ./modules/fpr-backend/certificates/UserPortal:/etc/nginx/cert
    expose:
      - "443"
      - "8443"
    ports:
      - "8000:80/tcp"
      - "443:443/tcp"
      - "8443:8443/tcp"

Windows DEL command behavior wrt junction points

Posted: 19 May 2022 07:26 AM PDT

In my installer script I want to delete known files from known locations on the local PC using the DEL command. The command should purge the file from a certain folder and all subfolders below that. I therefore use:

cd /d "C:\MyFolder"
del /f /s /q MyFile.xyz

However, if a junction is mapped somewhere below "C:\MyFolder" (say, at "C:\MyFolder\Junction", pointing to another folder on the same drive), DEL doesn't seem to traverse into it at all. So all "MyFile.xyz" files under there will not be deleted. If DEL also cannot find the file anywhere else under the root folder, it'll also happily report "Could Not Find C:\MyFolder\MyFile.xyz".

There don't seem to be any switches that control this behavior, nor do command extensions help -- is this a known limitation of DEL?

Are there any workarounds using either commands or standard apps installed by default on fresh contemporary Windows machines, or should I write my own DEL-like executable for this / perform the same action using a script in my installer?
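One possible workaround, as a sketch: Python's os.walk can be told to descend into linked directories with followlinks=True. Whether that covers NTFS junctions depends on the Python version, so verify on a test machine before shipping this in an installer:

```python
import os

def del_recursive(root, filename):
    """Delete every file named `filename` under `root`, descending into
    linked directories as well (followlinks=True). Beware: with cyclic
    links this can recurse forever, so only use it on trees you control.
    Returns the list of paths that were deleted."""
    deleted = []
    for dirpath, _dirnames, filenames in os.walk(root, followlinks=True):
        if filename in filenames:
            path = os.path.join(dirpath, filename)
            os.remove(path)
            deleted.append(path)
    return deleted
```

Note that deleting through the junction removes the file at the junction's target, which may be surprising if the target is shared with other software.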

What does "Signal lost for 15 minutes on 'Low Application Throughput'" mean?

Posted: 19 May 2022 05:14 AM PDT

I've recently implemented the New Relic service in my website, and I keep getting this error

Signal lost for 15 minutes on 'Low Application Throughput'


And here is more information when clicking on "Issue Payload"

{    "timestamp": 1652961056996,    "title": "[\"Signal lost for 15 minutes on 'Low Application Throughput'\"]",    "mergeReason": "",    "status": "OPEN",    "unAcknowledgedBy": null,    "labels.originalAccountIds": "[\"302995\"]",    "totalIncidents": 1,    "realTimestamp": 1652961056996,    "entities": "[{\"id\":\"MzAyOTk1fEFQTXxBUFBMSUNBVElPTnwxNDk5NDMzNTQ2\",\"name\":\"PHP Application\",\"type\":\"Query\"}]",    "priority": "CRITICAL",    "parentMergeId": null,    "acknowledgedAt": null,    "correlationRuleDescriptions": null,    "unAcknowledgedAt": null,    "labels.policyIds": "[\"2766024\"]",    "closedAt": null,    "updatedAt": 1652961056996,    "correlatedBy": null,    "nrAccountId": 302995,    "accumulations": "{\"source\":[\"newrelic\"],\"origin\":[\"newrelic\"],\"conditionName\":[\"Low Application Throughput\"],\"policyName\":[\"Golden Signals\"],\"conditionFamilyId\":[\"25523832\"],\"policy.rollupStrategy\":[\"PER_CONDITION_AND_TARGET\"],\"conditionProduct\":[\"NRQL\"],\"evaluation.thresholdDurationSeconds\":[\"900000\"],\"tag.appName\":[\"PHP Application\"],\"tag.entity.guid\":[\"MzAyOTk1fEFQTXxBUFBMSUNBVElPTnwxNDk5NDMzNTQ2\"],\"tag.accountId\":[\"302995\"],\"tag.nr.dt.enabled\":[\"true\"],\"tag.language\":[\"php\"],\"tag.instrumentation.name\":[\"apm\"],\"tag.trustedAccountId\":[\"302995\"],\"tag.account\":[\"ligadelconsorcista_1\"],\"tag.instrumentation.provider\":[\"newRelic\"]}",    "labels.accountIds": "[\"302995\"]",    "closedBy": null,    "mutingState": "NOT_MUTED",    "createdAt": 1652946637188,    "activatedAt": 1652946656941,    "labels.priority": "[\"3\"]",    "isCorrelated": false,    "labels.conditionId": "[\"26351474\"]",    "labels.aggregationKeys": "[\"17ae5b94b1dcfe316a5278a7e4ee44d7b2f302f4\"]",    "isIdle": true,    "issueId": "3373da7f-eb66-4539-abea-6d164c3181a9",    "description": "[\"Policy: 'Golden Signals'. 
Condition: 'Low Application Throughput'\"]",    "incidentIdsEventId": "0bb59808-fb78-4a95-84c8-ce3010114acc",    "correlationRuleNames": null,    "acknowledgedBy": null,    "dataMLModules": "{\"components\":[\"application\"],\"golden-signals\":[\"traffic\"]}",    "triggerEvent": "DEACTIVATED",    "sources": "[\"newrelic\"]",    "realIssueCount": 1,    "correlationRuleIds": null,    "annotations": "{\"description\":[\"Policy: 'Golden Signals'. Condition: 'Low Application Throughput'\"],\"title\":[\"Signal lost for 15 minutes on 'Low Application Throughput'\"],\"degradationStartTime\":[\"1652945725161\"],\"recoveryStartTime\":[\"-1\"],\"wildcard\":[\"MzAyOTk1fEFQTXxBUFBMSUNBVElPTnwxNDk5NDMzNTQ2_PHP Application\"]}",    "labels": "{\"priority\":[\"3\"],\"accountId\":[\"302995\"],\"originalAccountId\":[\"302995\"],\"policyId\":[\"2766024\"],\"conditionId\":[\"26351474\"],\"aggregationKey\":[\"17ae5b94b1dcfe316a5278a7e4ee44d7b2f302f4\"],\"entityId\":[\"MzAyOTk1fEFQTXxBUFBMSUNBVElPTnwxNDk5NDMzNTQ2\"],\"entityName\":[\"PHP Application\"],\"entityType\":[\"Query\"],\"conditionFamilyId\":[\"25523832\"],\"violationId\":[\"2858161508\"],\"nrIncidentId\":[\"701468387\"]}",    "state": "ACTIVATED",    "incidentIds": "[\"2eba1b7b-36e3-4df5-9790-58d5f6c23908\"]"  }  

Is this an issue with my hosting or my php app? Any pointer or advice is going to be greatly appreciated, as I'm completely lost here.

Why "vlan: 3 parent interface: en0"

Posted: 19 May 2022 05:07 AM PDT

I have created a VLAN:

Linux

vconfig add en0 3
ip addr add 192.168.126.5/24 dev en0.3
ip link set up en0.3

This can be translated to macOS as follows.

MacOS

ifconfig vlan0 create
ifconfig vlan0 vlan 3 vlandev en0
ifconfig vlan0 inet 192.168.126.5 netmask 255.255.255.0

And I can now see, from ip link show, that I have a VLAN saying:

vlan: 3 parent interface: en0  

Why 3, when I have only one physical interface - also stated as "en0"?

List "hardware" Network Interface Controllers

Posted: 19 May 2022 08:16 AM PDT

When using the ifconfig or ip link show commands, they list not only hardware interfaces but also software interfaces - I would expect only the physical interfaces?

ip link show

lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384      options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>      nd6 options=201<PERFORMNUD,DAD>  gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280  stf0: flags=0<> mtu 1280  anpi2: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether ...      nd6 options=201<PERFORMNUD,DAD>      media: none      status: inactive  anpi1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether ...      nd6 options=201<PERFORMNUD,DAD>      media: none      status: inactive  anpi0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether ...      nd6 options=201<PERFORMNUD,DAD>      media: none      status: inactive  en4: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether ...      nd6 options=201<PERFORMNUD,DAD>      media: none      status: inactive  en5: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether ...      nd6 options=201<PERFORMNUD,DAD>      media: none      status: inactive  en7: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether ...      nd6 options=201<PERFORMNUD,DAD>      media: none      status: inactive  en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500      options=460<TSO4,TSO6,CHANNEL_IO>      ether ...      media: autoselect <full-duplex>      status: inactive  en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500      options=460<TSO4,TSO6,CHANNEL_IO>      ether ...      media: autoselect <full-duplex>      status: inactive  en3: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500      options=460<TSO4,TSO6,CHANNEL_IO>      ether ...      
media: autoselect <full-duplex>      status: inactive  ap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether ...      nd6 options=201<PERFORMNUD,DAD>      media: autoselect      status: inactive  en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=6463<RXCSUM,TXCSUM,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>      ether <...>      nd6 options=201<PERFORMNUD,DAD>      media: autoselect      status: active  awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether <...>      nd6 options=201<PERFORMNUD,DAD>      media: autoselect      status: active  llw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether <...>      nd6 options=201<PERFORMNUD,DAD>      media: autoselect      status: active  bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=63<RXCSUM,TXCSUM,TSO4,TSO6>      ether <...>      Configuration:          id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0          maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200          root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0          ipfilter disabled flags 0x0      member: en1 flags=3<LEARNING,DISCOVER>              ifmaxaddr 0 port 10 priority 0 path cost 0      member: en2 flags=3<LEARNING,DISCOVER>              ifmaxaddr 0 port 11 priority 0 path cost 0      member: en3 flags=3<LEARNING,DISCOVER>              ifmaxaddr 0 port 12 priority 0 path cost 0      nd6 options=201<PERFORMNUD,DAD>      media: <unknown type>      status: inactive  utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380      nd6 options=201<PERFORMNUD,DAD>  utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000      nd6 options=201<PERFORMNUD,DAD>  utun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1000      nd6 options=201<PERFORMNUD,DAD>  utun3: 
flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380      nd6 options=201<PERFORMNUD,DAD>  utun4: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380      nd6 options=201<PERFORMNUD,DAD>  utun5: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380      nd6 options=201<PERFORMNUD,DAD>  utun6: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380      nd6 options=201<PERFORMNUD,DAD>  vlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1496      options=6063<RXCSUM,TXCSUM,TSO4,TSO6,PARTIAL_CSUM,ZEROINVERT_CSUM>      ether <...>      vlan: 3 parent interface: en0      media: autoselect      status: active  en6: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=6467<RXCSUM,TXCSUM,VLAN_MTU,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>      ether <...>      nd6 options=201<PERFORMNUD,DAD>      media: autoselect (1000baseT <full-duplex>)      status: active  en8: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether <...>      nd6 options=201<PERFORMNUD,DAD>      media: autoselect (100baseTX <full-duplex>)      status: active  en10: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500      options=400<CHANNEL_IO>      ether <...>      nd6 options=201<PERFORMNUD,DAD>      media: autoselect      status: active  

Is a VLAN "running/managed" on a computer accessible from outside?

Posted: 19 May 2022 05:38 AM PDT

VLAN

I (a VLAN newbie) am trying to separate my local home network, and I thought that VLANs were handled by the router. My router is a Sagemcom, and I don't think VLANs are supported.

I found that I can create a VLAN on a computer. I am wondering if I can port-forward to this computer running a VLAN on my local network - to keep users from breaking out to other computers - or does this only create a virtual interface able to connect to a VLAN?

I found the terminal commands for creating VLANs on a computer, both for Linux and macOS:

Linux

vconfig add en0 3
ip addr add 192.168.126.5/24 dev en0.3
ip link set up en0.3

This can be translated to macOS as follows.

MacOS

ifconfig vlan0 create
ifconfig vlan0 vlan 3 vlandev en0
ifconfig vlan0 inet 192.168.126.5 netmask 255.255.255.0

This gives me ifconfig vlan0:

vlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1496
    options=6063<RXCSUM,TXCSUM,TSO4,TSO6,PARTIAL_CSUM,ZEROINVERT_CSUM>
    ether xx:xx:xx:xx:xx:xx
    inet 192.168.126.5 netmask 0xffffff00 broadcast 192.168.126.255
    vlan: 3 parent interface: en0
    media: autoselect
    status: active

My understanding of the properties (please comment):

  • Flags and options: ?
  • ether: the MAC address of my computer, and not a virtual MAC address?
  • inet: the gateway address of this VLAN?
  • netmask: the subnet (range) of available IP addresses?
  • broadcast: sending to this IP address will broadcast to all on the VLAN?
  • vlan: 3 parent interface: en0: the physical network interface
  • media: ?
  • status: active: whether it is up or down?

Questions (please comment on any of my assumptions):

  1. Will this computer's VLAN "vlan0" be accessible from other computers on my router?
  2. How do I access this network?
  3. If I port-forward my router to a web server at the "inet" IP address, will it then be safe from users breaking out of this VLAN?
  4. Obviously the computer's VLAN will only be available as long as the computer is running, unless this "manager" can be distributed?

What is a good indicator that a server does not have enough RAM?

Posted: 19 May 2022 07:40 AM PDT

I am the "application owner" of a server (i.e. I am responsible for running and maintaining the application running on the server).

The VM has 8GB of RAM, as was recommended when the application was first installed, but in the latest version recommendations, 32GB are indicated.

The ops team is reluctant to quadruple the RAM, as the server is only using 30 to 40% of its 8GB of RAM today.

Is used RAM the only useful metric to decide whether a server needs more RAM, or are there other parameters I should look at to make the call?
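As a concrete starting point: on Linux, a few indicators beyond raw "used RAM" can be read from /proc/meminfo (assuming a kernel new enough to report MemAvailable). Sustained swap usage and low MemAvailable are usually better signs of memory pressure than the used percentage, since the kernel deliberately fills spare RAM with reclaimable cache.

```python
def memory_pressure_snapshot(path="/proc/meminfo"):
    """Return a few memory-pressure indicators (values in kB, Linux only)."""
    meminfo = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            meminfo[key.strip()] = int(rest.split()[0])
    return {
        # estimate of memory available for new workloads without swapping
        "available_kb": meminfo["MemAvailable"],
        # swap actually in use; sustained growth suggests a RAM shortage
        "swap_used_kb": meminfo["SwapTotal"] - meminfo["SwapFree"],
        # page cache; large values are normal and reclaimable
        "cached_kb": meminfo["Cached"],
    }
```

Trending these over time (rather than a one-off reading) gives the ops team something more defensible than a single utilization percentage.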

Some of my emails do not get to clients without a warning

Posted: 19 May 2022 08:00 AM PDT

I send emails from the web interface of Gmail Workspace (business email). Most of my clients receive my emails just fine, but some only receive messages containing text and images.

Messages with links or PDF attachments do not get through. They neither bounce back nor end up in the client's spam folder; they silently go nowhere.

What could be the reason for this, and how can I increase the chance that my emails are delivered?

I have DMARC, SPF, and DKIM set up for my domain. According to the reports I receive, my emails pass the tests. I see no errors or failures for the non-delivered emails.

I tried to send all emails in plain text, but it looks like it does not affect the delivery.

EDIT:

According to Google Workspace Email Log Search, all my messages reach the clients' servers. They are blocked there before being dispatched to the recipients.

I checked my domain with Google Safe Browsing and Spamhaus, and they reported no issues.

All the clients that do not receive some of my emails are using outlook.com / MS Exchange.

I sent a copy of a blocked message to my own account at Office 365 and it got to Quarantine. So, the issue is 100% reproducible now.

I checked the headers of the quarantined email and see this

X-Forefront-Antispam-Report: CIP:209.85.166.47;CTRY:US;LANG:en;SCL:9;SRV:;IPV:NLI;SFV:SPM;H:mail-io1-f47.google.com;PTR:mail-io1-f47.google.com;CAT:AMP;SFS:(13230001)(356005)(9686003)(336012)(86362001)(55446002)(83380400001)(42186006)(7636003)(5660300002)(7596003)(6666004)(224303003)(6916009)(1096003)(26005)(966005);DIR:INB;
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: /* skipped */

I spent some time decoding these, and it looks like spam filtering marked the message as high-confidence spam (SCL:9). The category of the protection policy applied to the message is Anti-malware (CAT:AMP).

So it is now clear why the messages do not reach the clients. But I would still like to know how to make them get there.

Dynamically register hostnames on DNS server (via DHCP)

Posted: 19 May 2022 08:18 AM PDT

I want to set up a small network, where a central DHCP server leases IPv4 addresses to the clients. The clients already have their hostnames set and should advertise those to the central DNS server, so both the server and all clients can find each other with that hostname. The DNS server will resolve LAN addresses of the domain "my.domain" and point towards an external DNS server for all other domains (internet).

In my current setup, I have two boxes: 10.0.100.1 is the server (Ubuntu 22.04), where DHCP and DNS are hosted. 10.0.100.2 is configured as a client (Fedora 35) (DHCP sends this fixed IP during my test phase).

This is the client (10.0.100.2) configuration:

$ cat /etc/hostname
clienthost

$ cat /etc/systemd/network/20-wired.network
[Match]
Name=enp0s31f6

[Network]
LinkLocalAddressing=ipv4
DHCP=ipv4
SendHostname=true

[DHCPv4]
UseDomains=true

$ resolvectl
Global
         Protocols: LLMNR=resolve -mDNS -DNSoverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

Link 2 (enp0s31f6)
    Current Scopes: DNS LLMNR/IPv4
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSoverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.100.1
       DNS Servers: 10.0.100.1
        DNS Domain: my.domain

The IP 10.0.100.2 is correctly assigned. The client can ping the server (10.0.100.1) with its IP, hostname or FQDN. I can also see in tcpdump that the hostname is sent to the DHCP server (option 81 Client FQDN). So far so good.

The DHCP server config is supposed to be changed once the initial setup is working, towards handing out IPs from a range. So in the future I won't have fixed-assigned IP addresses for the clients. I will skip showing the rndc key files here. They are identical and placed in the configured locations. The server is configured as follows:

$ cat /etc/hostname
serverhost

$ cat /etc/systemd/network/20-wired.network
[Match]
Name=enp0s31f6

[Network]
LinkLocalAddressing=ipv4
Address=10.0.100.1/16
Gateway=10.0.1.1
DNS=10.0.100.1

[DHCPv4]
UseDomains=my.domain

$ cat /etc/default/isc-dhcp-server
INTERFACESv4="enp0s31f6"

$ cat /etc/dhcp/dhcpd.conf
include "/etc/dhcp/ddns-keys/my-domain.key";
default-lease-time 7200;
max-lease-time 28800;
ddns-updates on;
ddns-update-style standard;
ddns-domainname "my.domain.";
allow-unknown-clients;
authoritative;

zone my.domain. {
    primary 10.0.100.1;
    key ddns-mydomain;
}

zone 10.0.in-addr.arpa. {
    primary 10.0.100.1;
    key ddns-mydomain;
}

# only serve the single client box specifically during test phase
subnet 10.0.0.0 netmask 255.255.0.0 {}
host testhost {
  hardware ethernet 00:00:00:00:00:00;
  fixed-address 10.0.100.2;
  option subnet-mask 255.255.0.0;
  option routers 10.0.1.1;
  option domain-name-servers 10.0.100.1;
  option domain-name "my.domain";
  filename "pxelinux.0";
}

$ cat /etc/bind/named.conf
include "/etc/bind/keys/my.domain.key";
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";

$ cat /etc/bind/named.conf.options
acl "internal" {
    127.0.0.1;
    10.0.0.0/16;
};

options {
    directory "/var/cache/bind";

    recursion yes;
    allow-recursion { internal; };
    listen-on { 10.0.100.1; };
    allow-transfer { none; };

    allow-query { internal; };
    allow-query-cache { internal; };

    forwarders {
        1.1.1.1;
    };

    listen-on-v6 { any; };
};

$ cat /etc/bind/named.conf.local
zone "my.domain" {
    type master;
    file "/etc/bind/zones/db.my.domain";
    update-policy { grant ddns-mydomain name my.domain ANY; };
    allow-transfer { none; };
};

zone "0.10.in-addr.arpa" {
    type master;
    file "/etc/bind/zones/db.0.10";
    update-policy { grant ddns-mydomain name my.domain ANY; };
    allow-transfer { none; };
};

$ cat /etc/bind/zones/db.my.domain
$TTL    86400
@   IN  SOA serverhost.my.domain. admin.my.domain. (
              3     ; Serial
          28800     ; Refresh
           3600     ; Retry
          28800     ; Expire
          43200 )   ; Negative Cache TTL
;
; name servers - NS records
    IN  NS  serverhost.my.domain.

; A records
serverhost.my.domain.   IN  A   10.0.100.1

$ cat /etc/bind/zones/db.10.0
$TTL    86400
@   IN  SOA serverhost.my.domain. admin.my.domain. (
              3     ; Serial
          28800     ; Refresh
           3600     ; Retry
          28800     ; Expire
          43200 )   ; Negative Cache TTL
;
; name servers - NS records
    IN  NS  serverhost.my.domain.

; PTR records
100.1   IN  PTR serverhost.my.domain.   ; 10.0.100.1

I think that should be all relevant configuration. Please let me know if you need something else.

The issue here is that, from 10.0.100.1 (serverhost), I can only ping clienthost via its IP 10.0.100.2, but neither by its hostname nor by its FQDN. Unfortunately, I don't have a good idea where to start debugging to see whether the client hostname is sent to the DNS server and registered or not.

Maybe a potentially unrelated side note: running the command dhcp-list-lease on serverhost returns an empty list. The logs show a DHCPACK for 10.0.100.2, but it never shows up in this particular output (which would have been interesting, because there is a "hostname" column).

Edit: It looks like the key might be important after all. Originally I manually created a key with rndc-confgen -a -b 512, then copied that file to /etc/dhcp/rndc-keys/. Currently, I generated a new key with ddns-confgen -a -b 512 and placed the key both in /etc/bind/keys/my.domain.key and in /etc/dhcp/ddns-keys/my.domain.key (and updated the include statements in the respective configuration files). I still have the rndc key under /etc/bind/rndc.key which is also picked up by bind9 as the logs show.

Edit2: Manually running nsupdate looks like the following:

$ nsupdate -D -k /etc/bind/keys/my.domain.key
> update add clienthost.my.domain 7200 A 10.0.100.2
> send
[...]
Reply from update query:
;; ->>HEADER<<- opcode: UPDATE, status: REFUSED, id:  39064
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;my.domain.         IN  SOA

;; TSIG PSEUDOSECTION:
ddns-mydomain.      0   ANY TSIG    hmac-sha256. 1652972427 300 32 4e/XXXXXXXXXXXXXXXXXXXXXXXX/bmg= 39064 NOERROR 0

And during the manual update the logs show

client @0x7f61d8004cb8 10.0.100.1#39791/key ddns-mydomain: updating zone 'my.domain/IN': update failed: rejected by secure update (REFUSED)  

How to adjust SELinux to allow not so large file downloads in Apache?

Posted: 19 May 2022 07:52 AM PDT

I have a CentOS 7 server running Apache 2.4 that will happily allow users to download files until they reach a certain size. I've noticed the problem with mp4 video files; I host both low- and full-resolution files on the site. The low-res files are usually less than 5 MB, but the full-res files can exceed 30 MB. The same script processes and copies them to the website, and I can verify all the file permissions are the same. If I put SELinux in permissive mode (setenforce 0), the files download without issue. While SELinux is enforcing, Apache returns a Forbidden error instead.

Any thoughts on what SELinux policy I need to adjust?

Unchecking the "Register this connection's addresses in DNS" option does not remove DNS records

Posted: 19 May 2022 06:24 AM PDT

On Windows Server 2016 I have two NICs, one of which has multiple IP addresses (172.x) and the other that has just one (192.x). On the NIC that's 172.x, I've unchecked the "Register this connection's addresses in DNS" checkbox in the DNS settings (see picture below). However, when I go to DNS Manager and check the entries for my domain, that server has the IP addresses for both NICs appearing in the list. I would expect the ones for the NIC that has "Register this connection's addresses in DNS" unchecked to not appear there.

I followed the instructions on the MS support page here (restarting the DNS client service, since the server has static IPs) but that didn't work.

Has anyone else come across this issue and know of a working solution?

unchecked box screenshot

Openstack / Linux Networking - public network doesn't connect with physical interface

Posted: 19 May 2022 07:05 AM PDT

I'm currently setting up OpenStack with kolla-ansible Wallaby, version 12.3.1.dev95, as an all-in-one installation.

My setup in VMWare:
Workstation 14.x
VM-OS: Ubuntu Server 20.04 LTS
1 Bridge-Mode network interface
2 private Host-only networks (1 with DHCP network 203.1.2.0/25; range 203.1.2.1-203.1.2.126).
All networks are attached to the openstack-VM (ens33, ens34 and ens35).
ens34: 203.1.2.4/24
ens35: no IP assigned

The configuration for the globals.yml:
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
network_interface: ens34
neutron_external_interface: ens35
kolla_internal_vip_address: "203.1.2.4"
enable_haproxy: "no"
nova_console: "spice"

My Problem:
After setting up a public network with the following command

openstack network create --external --provider-physical-network physnet1 \
  --provider-network-type flat public

and a subnet

openstack subnet create --no-dhcp \
  --allocation-pool start=203.1.2.150,end=203.1.2.199 --network public \
  --subnet-range 203.1.2.128/25 public-subnet

The public network is connected to a router (203.1.2.176), which should be pingable but isn't, even though the default rules should allow that (from my point of view).


EDIT: I looked a little deeper and found that I could ping anything from inside the OpenStack network namespace. How do I connect my second physical interface with the namespace, so that this interface represents my public network?

Example: I have network namespace A with veth A1. A1 has all the networking attached to it (floating IPs, router, etc.). What I want is for my second physical interface to have this information attached to it.

If you need more information, I'm happy to provide it :)

PS: I hope this is somehow an acceptable description of my problem ^^'

All tasks in Task scheduler are going to queued state when triggered

Posted: 19 May 2022 05:03 AM PDT

Recently we have had a strange problem with scheduled tasks on Windows Server 2019 with the RDS role installed. Six servers were restored from a three-month-old backup, joined to the AD domain again, and are working correctly as session hosts, but none of the tasks in Task Scheduler (which ran previously, and are still running on the session hosts that weren't restored) works anymore.

When you run a task manually, everything works fine, but when you schedule it for a particular time, its state turns to Queued and it doesn't execute. We tried creating new tasks, and deleting all tasks and creating brand new ones, but nothing helped. It's not a problem with task settings, so please don't advise running a new instance in parallel or something similarly simple. The same settings work on the servers that weren't restored.

We looked in the registry: in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\State, ImageState has the value IMAGE_STATE_COMPLETE; in HKEY_LOCAL_MACHINE\System\Setup\ChildCompletion, audit.exe has the value 0 and oobebeldr.exe is set to 3.

Servers are configured and customers are working on them, so reinstall is the last option. Will sysprep without generalize help here? Or something else? Thank you.

Tape drive Connectivity - Fiber channel Vs SAS

Posted: 19 May 2022 05:19 AM PDT

We want to change our tape backup system and acquire a LTO8 tape drive.

Something I cannot easily find an answer to is: what are the pros and cons of connecting via Fibre Channel vs. SAS?

So far I have found that:

  • SAS 6Gb max // direct connection, lower latency // needs a SAS card in the server (or hypervisor)
  • Fiber 10Gb max // can be shared by several servers // a virtual server can be used

Our internal server network is 10Gb, so it's tempting to take the fibre option.

Am I forgetting something?


Edit: Our configuration is a dedicated 10 Gb network (no VLAN tags), an admin LAN mixing RJ45 and FC (which means only NAS servers // replication server // backup server // switches).

So the tape drive will be connected directly to a switch/server, not a SAN.

Traefik is getting "404 Page not found" in AWS

Posted: 19 May 2022 08:03 AM PDT

I installed my Traefik with default files from: https://docs.traefik.io/routing/providers/kubernetes-crd/#configuration-examples

My IngressRoute looks like this:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
  name: traefik-test-ingressroute
  namespace: default
spec:
  entryPoints:
  - traefik
  routes:
  - kind: Rule
    match: Host(`test.domain.com`)
    services:
    - name: whoami
      port: 80

In the dashboard the rule looks correct: it finds all endpoints and is flagged as "Success". But when I put the domain "test.domain.com" into my browser I get a 404. I'm using this domain in my /etc/hosts with the IP of the AWS load balancer created by the Traefik service.

Traffic is reaching Traefik, because in the logs I get an entry like this on every connection attempt:

172.20.59.64 - - [29/Mar/2020:22:19:47 +0000] "GET / HTTP/2.0" - - "-" "-" 190 "-" "-" 0ms
172.20.59.64 - - [29/Mar/2020:22:19:49 +0000] "GET / HTTP/2.0" - - "-" "-" 191 "-" "-" 0ms
172.20.59.64 - - [29/Mar/2020:22:19:49 +0000] "GET / HTTP/2.0" - - "-" "-" 192 "-" "-" 0ms
172.20.59.64 - - [29/Mar/2020:22:19:49 +0000] "GET / HTTP/2.0" - - "-" "-" 193 "-" "-" 0ms
172.20.59.64 - - [29/Mar/2020:22:19:49 +0000] "GET / HTTP/2.0" - - "-" "-" 194 "-" "-" 0ms
172.20.59.64 - - [29/Mar/2020:22:21:09 +0000] "GET / HTTP/2.0" - - "-" "-" 195 "-" "-" 0ms
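One thing worth noting: the IngressRoute above binds to the traefik entrypoint, which is normally the dashboard/API port rather than the port the load balancer forwards HTTP traffic to. A sketch of the same route bound to a conventional HTTP entrypoint (the name web is an assumption; it must match an entrypoint defined in your static configuration):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-test-ingressroute
  namespace: default
spec:
  entryPoints:
    - web            # assumption: an entrypoint named "web" listening on :80
  routes:
    - kind: Rule
      match: Host(`test.domain.com`)
      services:
        - name: whoami
          port: 80
```

Whether web (or websecure) exists, and which port it maps to, depends on how the Traefik static configuration and its Kubernetes Service were deployed.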

Change Registry Key Permissions Access Control List using only Command Prompt

Posted: 19 May 2022 06:02 AM PDT

I am trying to change the access control permissions on a specific registry key I'm generating with a batch file. I tried using regini.exe to pull the configuration from a .ini file and ran into issues.

I keep getting this error:

Z:\EM\Pre>regini.exe RegistryPermissions.ini
REGINI: CreateKey (\HKEY_CURRENT_CONFIG\Software\E) relative to handle (000000000) failed - 161
REGINI: Failed to load from file 'RegistryPermissions.ini' (161)

This is the contents of my .ini file RegistryPermissions.ini:

Computer\HKEY_CURRENT_CONFIG\Software\E [1 7]  

This is the batch script I'm writing to solve the problem:

@echo off
:: ==========================================
:: Set E Key
:: ==========================================
:: Date   : 11 October 2019
:: Author :
:: Modified Date:
:: Modified By:
::
:: Script Details:
:: --------------
::  This script will:
::  + add the E Registry key to HKCC\Software
::  + set the Key permissions to allow "Everyone" full control
::  + reboot PC
:: ===========================================

::***************************************************************
:: Add E Registry Key to HKCC\Software                          *
::***************************************************************
REG ADD HKCC\Software\E

::***************************************************************
:: Set the Key to permissions to allow Everyone full control    *
::***************************************************************
=====This is where I need help=====

::***************************************************************
:: Reboot PC                                                    *
::***************************************************************

goto end

:end

I have removed some unnecessary sections of the script. The important part is changing the permissions on a registry key, with cmd.
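For what it's worth, error 161 is ERROR_BAD_PATHNAME, and regini does not understand the Computer\ prefix that Registry Editor's address bar displays — it expects its own root names such as \Registry\Machine. A sketch of the .ini in that syntax (the mapping of HKEY_CURRENT_CONFIG to the Current hardware profile key is an assumption worth verifying on your system, as is the [1 7] ACE list meaning Administrators and Everyone full control):

```
\Registry\Machine\System\CurrentControlSet\Hardware Profiles\Current\Software\E [1 7]
```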

Office 365 Shared mailbox calendar ignoring explicit permissions, users see Default only

Posted: 19 May 2022 06:02 AM PDT

I have a shared mailbox in Office 365 with a shared calendar. Users are granted Publishing Author permission to the calendar folder using Exchange Online PowerShell and these permissions are confirmed using Outlook during troubleshooting.

The problem is that a new user we're setting up can only see the Default permissions, despite being granted the same permissions as everyone else. The user hasn't signed in yet so I'm able to log in to OWA using their account and it shows only the free/busy status for this calendar. If I update the Default permission to show full details, their OWA calendar view updates immediately to reflect the change. But changing their explicit permissions (Publishing Author, Editor, Publishing Editor) makes no difference at all. SharingPermissionFlags is $null for all users with access rights (including Default). So far no other users have reported any problems viewing or accessing the calendar, so this appears isolated to this one new user.

Based on my testing, I don't think this is an issue with folder permissions differing from calendar permissions, though it certainly looks like it. The behavior is exactly as though OWA/Exchange Online doesn't even recognize that the user has explicit permissions at all. I conclude this because changing the permissions on the Default user affects this user's view.

In the below (sanitized) screenshot, the first user after Anonymous is unable to view any calendar item details, they can only see availability. All other users have access as expected. Once I set the Default permission to "Reviewer", they can see all details and interact with the calendar as expected. These are Office 365 mailboxes and both the target calendar and the user have Office 365 E1 licenses.

powershell results

Something else that is extra weird is that when I set the Default permission to "AvailabilityOnly", this user cannot view or interact with the calendar beyond free/busy status. However, when I set the Default permission to Reviewer, this user can fully interact with the calendar with the explicit PublishingAuthor permission we've granted. If I set the Default permission back to AvailabilityOnly, the user again cannot view or interact with the calendar beyond seeing free/busy status.

Has anyone else experienced this and been able to resolve it?

listing parent interface of a vlan

Posted: 19 May 2022 05:26 AM PDT

I have a setup with a bunch of vlan interfaces on a physical interface.

Physical interface: eth1
VLANs on top of this: vlan1, vlan2, vlan3

Now, I want to know which is the parent interface of my vlan (for example, here eth1 is the parent interface of these vlans).

I can get this information by running "ip addr show <vlan-name>": the output shows vlan1@eth1. But that means parsing the output of this command, or looking at my network config file and parsing and interpreting it.

Is there another way by which I can get this information without any parsing logic? For example, for bonded interfaces, the information is present in /sys/class/net/ directory and one can simply read files there.

# cat /sys/class/net/bond0/bonding/slaves
eth0 eth1

Is there a similar path/file available for vlan tagged interfaces? I couldn't figure out if there is some file I can just read without any parsing and extract this information or any command/utility that just gives the parent interface name.
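For reference, kernels with the 8021q module expose a comparable file under /proc/net/vlan/: each VLAN interface gets its own file containing a "Device:" line that names the parent. A hedged sketch — the here-string below is a hypothetical sample of that file's contents; on a real system you would read /proc/net/vlan/vlan1 directly:

```shell
# Hypothetical excerpt of /proc/net/vlan/vlan1; the "Device:" line names the parent.
sample='vlan1  VID: 1  REORDER_HDR: 1  dev->priv_flags: 1
Device: eth1
INGRESS priority mappings: 0:0 1:0 2:0'

# Extract the parent interface from the "Device:" line:
printf '%s\n' "$sample" | awk '/^Device:/ {print $2}'   # prints: eth1
```

This is still a one-line read rather than interpreting "ip addr show" output, which may be close enough to the bonding-style /sys read you are after.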

Kindly do let me know if there are other alternatives to this.

Thanks.

Install memcache php ext on php 5.6

Posted: 19 May 2022 07:06 AM PDT

I have PHP 5.6.6 installed on Amazon Linux. I want to install the memcache extension (not the memcached server; we use ElastiCache). I try:

# yum install php-pecl-memcache.x86_64  

And get the following error:

Error: php56-common conflicts with php-common-5.3.29-1.7.amzn1.x86_64  

So, is there any way to install the memcache extension for my PHP 5.6? If not, what should I do? Downgrade to PHP 5.3? Thanks.

nfs problems: shares appear to be the wrong size. files created on share not visible on server

Posted: 19 May 2022 07:06 AM PDT

I have set up shares on an NFS server. I can mount the shares with no error, but the share sizes reported by "df" are much smaller than the share size on the server, e.g. the server reports 1 TB but the share looks like 3.8 GB from the clients.

I can create a test file on the NFS share from a client, and this test file is visible from all clients, but when I go to the shared directory on the server, the file is not there. Similarly, files that pre-exist on the server are not visible to any clients. On the server, I ran the command "updatedb" and searched for the newly created test file; it is not found anywhere on the server. So, I am accessing some share, and I can create files on it from the clients, but I can't see these files anywhere on the server.

I see no significant NFS-related errors in /var/log/messages. The server is CentOS 5.8; the clients are CentOS 6.4. Iptables is turned off on both server and clients for testing.

I don't see any issues with name resolution or DNS.

server:

[root@vmappp04 /]# cat /etc/exports
/data       192.168.1.0/24(fsid=0,rw,sync,no_root_squash)

[root@vmappp04 /]# rpm -qa |grep nfs-utils
nfs-utils-1.0.9-66.el5
nfs-utils-lib-1.0.8-7.9.el5
nfs-utils-lib-1.0.8-7.9.el5

[root@vmappp04 /]# rpm -qa |grep nfs4-acl-tools
nfs4-acl-tools-0.3.3-3.el5

[root@vmappp04 /]# rpm -qa |grep portmap
portmap-4.0-65.2.2.1

There is no hosts.allow or hosts.deny file existing on the server.

client:

cat /etc/fstab
vmappp04:/  /data/filer_01  nfs4    noauto,defaults 0 0

[root@vmappp11 ~]# rpm -qa |grep nfs-utils
nfs-utils-lib-1.1.5-6.el6.x86_64
nfs-utils-1.2.3-36.el6.x86_64

[root@vmappp11 ~]# rpm -qa |grep nfs4-acl-tools
nfs4-acl-tools-0.3.3-6.el6.x86_64

portmap is not installed on clients

output from mount command on client appears correct:

vmappp04:/ on /data/filer_01 type nfs4 (rw,addr=192.168.1.16,clientaddr=192.168.1.84)

[root@vmappp11 ~]# showmount -e vmappp04
Export list for vmappp04:
/data 192.168.1.0/24

[root@vmappp11 ~]# showmount -d vmappp04
Directories on vmappp04:

[root@vmappp11 ~]# showmount -a vmappp04
All mount points on vmappp04:

[root@vmappp11 ~]# showmount -a 192.168.1.16
All mount points on 192.168.1.16:

I've tried all kinds of permutations on the server and client side. Unsure how to proceed, please advise; much obliged for any assistance.

Mysql 5.5 is not installed in CentOS 5.7

Posted: 19 May 2022 05:03 AM PDT

I am using CentOS 5.7, 64-bit. My machine already has MySQL 5.0.88 installed, and I want to upgrade to MySQL 5.5. I followed this link to start my installation process. When I run "yum --enablerepo=remi,remi-test list mysql mysql-server", its output is:

-> yum --enablerepo=remi,remi-test list mysql mysql-server
Loaded plugins: dellsysid, fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.iitm.ac.in
 * epel: buaya.klas.or.id
 * extras: ftp.iitm.ac.in
 * remi: remirpm.mirror.gymkl.ch
 * remi-test: remirpm.mirror.gymkl.ch
 * rpmforge: mirror.oscc.org.my
 * updates: ftp.iitm.ac.in
remi                                     | 2.5 kB     00:00
remi-test                                | 2.5 kB     00:00
Available Packages
mysql.i386                5.0.95-5.el5_9       updates
mysql.x86_64              5.5.30-1.el5.remi    remi
mysql-server.x86_64       5.5.30-1.el5.remi    remi

When I run "yum --enablerepo=remi,remi-test install mysql mysql-server" command it returns

-> yum --enablerepo=remi,remi-test install mysql mysql-server
Loaded plugins: dellsysid, fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.iitm.ac.in
 * epel: ftp.jaist.ac.jp
 * extras: ftp.iitm.ac.in
 * remi: mirror5.layerjet.com
 * remi-test: mirror5.layerjet.com
 * rpmforge: kartolo.sby.datautama.net.id
 * updates: ftp.iitm.ac.in
Setting up Install Process
Package mysql is obsoleted by MySQL-server-community, trying to install MySQL-server-community-5.0.88-0.rhel5.x86_64 instead
Package MySQL-server-community-5.0.88-0.rhel5.x86_64 already installed and latest version
Package mysql is obsoleted by MySQL-server-community, trying to install MySQL-server-community-5.0.88-0.rhel5.x86_64 instead
Package MySQL-server-community-5.0.88-0.rhel5.x86_64 already installed and latest version
Package mysql-server is obsoleted by MySQL-server-community, trying to install MySQL-server-community-5.0.88-0.rhel5.x86_64 instead
Package MySQL-server-community-5.0.88-0.rhel5.x86_64 already installed and latest version
Nothing to do

It seems like 5.0 is the latest. Please help me upgrade MySQL 5.0 to 5.5.

IKE Phase 1 Aggressive Mode exchange does not complete

Posted: 19 May 2022 08:03 AM PDT

I've configured a 3G IP Gateway of mine to connect using IKE Phase 1 Aggressive Mode with PSK to my openswan installation running on Ubuntu server 12.04. I've configured openswan as follows:

/etc/ipsec.conf:

version 2.0

config setup
    nat_traversal=yes
    virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
    oe=off
    protostack=netkey

conn net-to-net
    authby=secret
    left=192.168.0.11
    leftid=@left.paxcoda.com
    leftsubnet=10.1.0.0/16
    leftsourceip=10.1.0.1
    right=%any
    rightid=@right.paxcoda.com
    rightsubnet=192.168.127.0/24
    rightsourceip=192.168.127.254
    aggrmode=yes
    ike=aes128-md5;modp1536
    auto=add

/etc/ipsec.secrets:

@left.paxcoda.com @right.paxcoda.com: PSK "testpassword"  

Note that both left and right are NAT'd, with dynamic public IPs. My left ISP gives my router a public IP, but my right ISP gives me a shared dynamic public IP and a dynamic private IP. I have dynamic DNS for the public IP on the left side. Here is what I see when I sniff the ISAKMP protocol:

21:17:31.228715 IP (tos 0x0, ttl 235, id 43639, offset 0, flags [none], proto UDP (17), length 437)
    74.198.87.93.49604 > 192.168.0.11.isakmp: [udp sum ok] isakmp 1.0 msgid 00000000 cookie da31a7896e2a1958->0000000000000000: phase 1 I agg:
    (sa: doi=ipsec situation=identity
        (p: #1 protoid=isakmp transform=1
            (t: #1 id=ike (type=enc value=aes)(type=keylen value=0080)(type=hash value=md5)(type=auth value=preshared)(type=group desc value=modp1536)(type=lifetype value=sec)(type=lifeduration len=4 value=00015180))))
    (ke: key len=192)
    (nonce: n len=16 data=(da31a7896e2a19582b33...0000001462b01880674b3739630ca7558cec8a89))
    (id: idtype=FQDN protoid=0 port=0 len=17 right.paxcoda.com)
    (vid: len=16)
    (vid: len=16)
    (vid: len=16)
    (vid: len=16)
21:17:31.236720 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 456)
    192.168.0.11.isakmp > 74.198.87.93.49604: [bad udp cksum 0x649c -> 0xcd2f!] isakmp 1.0 msgid 00000000 cookie da31a7896e2a1958->5b9776d4ea8b61b7: phase 1 R agg:
    (sa: doi=ipsec situation=identity
        (p: #1 protoid=isakmp transform=1
            (t: #1 id=ike (type=enc value=aes)(type=keylen value=0080)(type=hash value=md5)(type=auth value=preshared)(type=group desc value=modp1536)(type=lifetype value=sec)(type=lifeduration len=4 value=00015180))))
    (ke: key len=192)
    (nonce: n len=16 data=(32ccefcb793afb368975...000000144a131c81070358455c5728f20e95452f))
    (id: idtype=FQDN protoid=0 port=0 len=16 left.paxcoda.com)
    (hash: len=16)
    (vid: len=16)
    (pay20)
    (pay20)
    (vid: len=16)

However, my 3G Gateway (on the right) doesn't respond, and I don't know why. I think left's response is indeed getting through to my gateway, because in another question, I was trying to set up a similar scenario with Main Mode IKE, and in that case it looks as though at least one of the three 2-way main mode exchanges succeeded.

What other explanation for the failure is there?

(The 3G Gateway I'm using on the right is a Moxa G3150, by the way.)

How to return multiline from remote SSH command

Posted: 19 May 2022 07:14 AM PDT

I have a script that backs up remote systems, and I want it to display the disk space on the remote backup device before and after running the backup script.

Thanks to another post, I have learnt how to run remote commands via SSH, such as (SSH keys have been set up for auto-login):

echo `ssh -t user@host uname -a`  

However, how can I get a multi-line response from a command such as

echo `ssh -t user@host df`  

The response just shows the last line of output from df.
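What looks like a truncated response is most likely the unquoted command substitution: the shell word-splits the captured output, so the newlines (and, with -t, the carriage returns) never reach echo intact. A minimal local demo of the quoting difference (printf stands in for the remote df):

```shell
# Stand-in for the remote command's multi-line output:
out=$(printf 'Filesystem  Use%%\n/dev/sda1   42%%\n')

# Unquoted: the shell word-splits the value, so echo prints one flattened line.
echo $out        # prints: Filesystem Use% /dev/sda1 42%

# Quoted: the embedded newlines survive.
echo "$out"
```

Applied to the original command, that means echo "$(ssh -t user@host df)" — or simply ssh user@host df with no echo at all, since ssh already passes the remote stdout through.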

How can I mount a remote volume with 777 permissions for all users?

Posted: 19 May 2022 05:14 AM PDT

I want users to be able to upload files to a central file server via my PHP script. I mounted the file server's shared volume using this command:

sudo mount -t cifs //192.168.1.8/share local_dir -o username=user,password=pass  

Whilst I could sudo chmod my way to write access, there are hundreds of directories which already exist:

drwxr-xr-x 1 root root    0 2011-03-30 15:59 dir1
drwxr-xr-x 1 root root    0 2011-04-04 16:27 dir2
drwxr-xr-x 1 root root    0 2011-04-04 18:07 dir3
drwxr-xr-x 1 root root    0 2011-04-06 13:41 dir4
drwxr-xr-x 1 root root    0 2011-04-06 13:39 dir5
....etc

I may need to create a new directory or move the uploaded file to an existing directory.

Is there anything I can do to make this share writeable by any user? The only other solution I can think of is to have Apache run as root. I won't be doing that.
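For a CIFS mount specifically, ownership and mode are usually imposed at mount time rather than with chmod, because the client-side modes of a share without UNIX extensions are synthesized from mount options. A sketch of an /etc/fstab entry (the uid/gid of 33 for Debian/Ubuntu's www-data is an assumption — check with "id www-data"; a credentials= file is preferable to an inline password):

```
//192.168.1.8/share  /mnt/share  cifs  username=user,password=pass,uid=33,gid=33,file_mode=0777,dir_mode=0777  0  0
```

With uid/gid pointing at the account Apache runs as, the PHP script can create directories and move uploads without any chmod pass over the existing tree.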

What are the functional differences between .profile .bash_profile and .bashrc

Posted: 19 May 2022 08:14 AM PDT

What are the functional differences between the .profile, .bash_profile and .bashrc files?
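Roughly: login shells read ~/.bash_profile (falling back to ~/.profile if it is absent), while interactive non-login shells read ~/.bashrc — which is why ~/.bash_profile conventionally sources ~/.bashrc. A small demo of that chaining under a throwaway HOME (a sketch; assumes bash is available):

```shell
tmp=$(mktemp -d)

# ~/.bash_profile: read by login shells; conventionally sources ~/.bashrc first.
printf '[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"\nexport FROM_PROFILE=yes\n' > "$tmp/.bash_profile"

# ~/.bashrc: read by interactive non-login shells.
printf 'export FROM_RC=yes\n' > "$tmp/.bashrc"

# A login shell picks up both, thanks to the sourcing line above:
HOME="$tmp" bash --login -c 'echo "profile=$FROM_PROFILE rc=$FROM_RC"'
```

Note that when ~/.bash_profile exists, bash skips ~/.profile entirely, which is the usual source of "my PATH works in one kind of shell but not the other" surprises.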
