Wednesday, November 17, 2021

Recent Questions - Server Fault



USB3 needed for RPi based NAS? [migrated]

Posted: 17 Nov 2021 06:38 AM PST

I want to build an RPi NAS, more or less following these instructions: https://magpi.raspberrypi.org/articles/build-a-raspberry-pi-nas

I have the two USB drives (2 TB each), but I only own an RPi 3, which has no USB 3 ports. However, I would have to attach my home-grown NAS to the WiFi modem in order to make it usable anywhere in my house.

This of course limits the bandwidth with which I'd be able to access the NAS. So the question is: should I invest in an RPi 4 for this project? Wouldn't the extra bandwidth gained from USB 3 be lost on the WiFi link? I would attach the NAS to the router via cable, though.

I know I can make some theoretical calculations based on the WiFi specs, but I am interested in real experience and real expectations on performance.
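For a rough starting point, here is a back-of-envelope comparison (the efficiency factors are assumptions, not measurements): even USB 2.0 comfortably outruns the RPi 3B's 100 Mbit NIC, so on a wired RPi 3 the network link, not the USB bus, is the likely bottleneck, while the RPi 4's true gigabit port changes that balance.

```shell
# MB/s is roughly Mbit/s * efficiency / 8; efficiency factors are rough guesses
echo $(( 480  * 60 / 100 / 8 ))   # USB 2.0 link at ~60% efficiency
echo $(( 100  * 94 / 100 / 8 ))   # 100 Mbit Ethernet (RPi 3B)
echo $(( 1000 * 94 / 100 / 8 ))   # Gigabit Ethernet (RPi 4)
```

Note the RPi 3B+ sits in between: its gigabit port hangs off the USB 2.0 bus and tops out around 300 Mbit/s in practice.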

Thanks

Creating a VPN for communicating two different boards

Posted: 17 Nov 2021 06:28 AM PST

Beforehand, I would like to say that I am not experienced in networking and would like to learn more regarding this.

I have two boards that have to send and receive ethernet packets to each other. Let's call them board 1 and board 2. Board 1 is connected to Ubuntu 1 and Board 2 is connected to Ubuntu 2. Both Ubuntu 1 and 2 are connected to a bigger network. The diagram below shows the topology of the network.

I want board 1 and 2 to be able to receive and send Ethernet packets through Ubuntu 1 and 2, if possible at layer 2 (data link layer), not layer 3 (network layer). I have read about TUN/TAP interfaces, but am still puzzled about how to apply them to my network. I have also read about IP forwarding (routing), but that works at layer 3, and I want these boards to communicate at layer 2.

My final idea was to create a VPN server on Ubuntu 1 or 2 using OpenVPN. This also uses the TUN interface (layer 3), but I am hoping it can be replaced with a TAP interface. So my current question is: how do I switch to a TAP interface in OpenVPN?

If there are easier suggestions than creating a VPN and such, I would love to hear them too. Thank you in advance for your help.

Network Topology
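For the concrete TAP question: switching OpenVPN from routed (tun) to bridged (tap) mode is done in the config files on both ends. A minimal sketch, with placeholder addresses and bridge name (not taken from the question):

```
# server.conf - layer-2 (bridged) mode
dev tap0
server-bridge 10.0.0.1 255.255.255.0 10.0.0.50 10.0.0.100
# tap0 must be enslaved to an existing bridge (e.g. br0) on the host,
# typically via up/down scripts or the distribution's bridge tooling

# client.conf
dev tap
```

With dev tap, OpenVPN carries whole Ethernet frames, so the boards can exchange layer-2 traffic; routed dev tun mode only moves IP packets.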

BL460c Gen9 "disconnected NIC", does not connect until I reset the OA

Posted: 17 Nov 2021 06:26 AM PST

I have 3 C7000 blade enclosures that we used with BL460c Gen8 blades without any problems, but we are replacing them with BL460c Gen9, and some blades don't connect to the network until I reset the OA.

All blades use the 536FLB FlexibleLOM with the latest SPP (May 2021); I've tested with 3 OAs on different versions: 4.97, 4.90 and 4.60.

Also, I used HP 6125G switches and a 1GbE pass-through module for testing.

Moving a blade within the enclosure may or may not reproduce the problem; it's random. Swapping the FLB between blades may or may not reproduce it. Moving a blade between enclosures may or may not reproduce it.

I'm really confused about this problem.

All blades are updated, all have been reset to factory configuration, and the servers come from different distributors, so I don't think that is the problem.

Linux server replication tools

Posted: 17 Nov 2021 07:08 AM PST

We are looking for tools or advice on how to handle server replication/mirroring. We have software deployed on-premises on Linux servers, and the clients would like a replica of the software to make sure the system keeps working and can still be accessed even if one of the machines is down. We have written some scripts to handle these cases on our own, but the approach seems error-prone, and we have run into issues with WebSockets.

Edit: we have an Ubuntu VM running a monitoring solution with the following services:

NGINX: serving a web application
Node.js: backend service (REST APIs, WebSocket)
MySQL/MongoDB: main/secondary database
Python: monitoring tasks

Requirements: we need to replicate all of that for a failover scenario. Once one of the servers is down, the system should remain accessible via the same IP address and resume working on the surviving machine.
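The same-IP failover requirement is commonly handled with a floating virtual IP managed by VRRP. A minimal keepalived sketch; the interface name, priorities and virtual address are placeholders, and this only covers the IP takeover, not data replication:

```
# /etc/keepalived/keepalived.conf on the primary node
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the standby node
    interface eth0
    virtual_router_id 51
    priority 100            # lower value (e.g. 90) on the standby
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24    # the shared IP clients connect to
    }
}
```

MySQL replication and MongoDB replica sets would still have to be configured separately so the standby has current data when the address moves.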

Thanks in advance

How to make Titra docker image answer https?

Posted: 17 Nov 2021 05:44 AM PST

I've got a test installation of Titra on a local system, and I've got it answering http on port 80 with this docker-compose file:

version: "2.0"
services:
  titra:
    image: kromit/titra
    container_name: titra
    depends_on:
      - mongodb
    environment:
      - ROOT_URL=http://timesheet
      - MONGO_URL=mongodb://mongodb/titra
      - PORT=3000
    ports:
      - "80:3000"
    restart: always
  mongodb:
    image: mongo:4.4
    container_name: mongodb
    restart: always
    volumes:
      - /root/titradb:/data/db

That works, but I'd like it to answer https instead. I'm not that familiar with Titra itself, nor with Meteor (the framework it's written in), and my poking around the available documentation hasn't turned up anything about https for self-hosted Titra instances.
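One common pattern for Meteor apps in general (not something I found in the Titra docs) is to terminate TLS in a reverse proxy in front of the container. A sketch adding a Caddy service to the compose file above; the hostname is a placeholder and must resolve publicly for Caddy's automatic certificates to work:

```
  caddy:
    image: caddy:2
    restart: always
    ports:
      - "80:80"     # needed for the ACME HTTP challenge
      - "443:443"
    command: caddy reverse-proxy --from timesheet.example.com --to titra:3000
    depends_on:
      - titra
```

ROOT_URL would then change to https://timesheet.example.com, and the "80:3000" mapping on the titra service can be dropped, since the proxy reaches it over the internal compose network.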

Deny direct IP access to an application deployed in Kubernetes

Posted: 17 Nov 2021 07:05 AM PST

I have a Node.js application with Express.js as the backend framework, deployed to the cloud using Kubernetes. K8s runs on top of an Ubuntu template. The application is deployed with a Service of type NodePort, which means the app uses the external IP address of the K8s nodes; in my case it is currently using the external IP address of one of the master nodes.

I then assigned a DNS hostname to the application using a Cloudflare Tunnel (aka Argo Tunnel). It works perfectly fine, as I can access the application from outside the K8s cluster with the resolved DNS hostname. However, I can also still access the application directly at a.b.c.d:31130. Here is a snippet from the config.yml file used to create the Cloudflare tunnel:

tunnel: ***********8ab68bscjbi9cddhujhdhbh
credentials-file: /home/sebastian/.cloudflared/***********8ab68bscjbi9cddhujhdhbh.json

ingress:
  - hostname: myapp.test.io
    service: http://a.b.c.d:31130
  - service: http_status:404

My concern is: how do I deny or block direct IP access to the application? I do not wish to reveal the IP address and make life hard for myself from a security standpoint.

Whether this has to be configured in Cloudflare or in the K8s cluster is also something I'm unsure about. Any feedback and suggestions would be appreciated.
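One option, sketched from the documented pattern of running cloudflared inside the cluster (the service name and port here are placeholders): change the Service type from NodePort to ClusterIP and deploy cloudflared as a pod, so the tunnel ingress points at the cluster-internal DNS name and nothing listens on the node's external IP at all:

```
ingress:
  - hostname: myapp.test.io
    service: http://my-service.default.svc.cluster.local:3000
  - service: http_status:404
```

If the NodePort must stay, the fallback is to firewall port 31130 on the nodes so that only the host running cloudflared can reach it.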

iptables: modify output flow

Posted: 17 Nov 2021 05:29 AM PST

When I trace some raw output packets from a specific application, I get the following output, where a packet's destination address is magically changed from 10.10.20.20 to 127.1.1.1. Is there any way of bypassing this and getting the raw packet "as is" to the output?

trace id fd9543bc ip raw OUTPUT packet: oif "br0" ip saddr 10.10.10.10 ip daddr 10.10.20.20 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 26448 ip length 60 tcp sport 34188 tcp dport 80 tcp flags == syn tcp window 64240
trace id fd9543bc ip raw OUTPUT rule meta l4proto tcp ip daddr 10.10.20.20 counter packets 52 bytes 4540 meta nftrace set 1 (verdict continue)
trace id fd9543bc ip raw OUTPUT verdict continue
trace id fd9543bc ip raw OUTPUT policy accept
trace id fd9543bc ip filter OUTPUT packet: oif "br0" ip saddr 10.10.10.10 ip daddr 127.1.1.1 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 26448 ip length 60 tcp sport 34188 tcp dport 8080 tcp flags == syn tcp window 64240
trace id fd9543bc ip filter OUTPUT verdict continue
trace id fd9543bc ip filter OUTPUT policy accept
trace id fd9543bc inet filter output packet: oif "br0" ip saddr 10.10.10.10 ip daddr 127.1.1.1 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 26448 ip protocol tcp ip length 60 tcp sport 34188 tcp dport 8080 tcp flags == syn tcp window 64240
trace id fd9543bc inet filter output verdict continue
trace id fd9543bc inet filter output policy accept

Using URN's with special characters in nginx maps

Posted: 17 Nov 2021 05:44 AM PST

When using nginx maps it is possible to rewrite multiple URNs with a map file. What is problematic is when the URN contains special characters. I have been breaking my head trying to get this right, and I hope this question/solution might save others a few gray hairs.

Let's set the scenario.

A Linux server (Debian/Ubuntu) running standard nginx.
DNS pointing to this server that resolves to a server config.
A map with no duplicate entries, containing incoming and outgoing URNs (resolvable).

The map setup would contain the following:

map $host$request_urn $rewrite_urn {
    include /<path to file filename>;
}

The map file itself contains one entry per line, terminated with a semicolon.

example.com/Böhme https://anotherexample.org/SomeWeirdPath/Böhme;  

The server config for this mapping to work:

server {
    listen 443 ssl http2;
    ssl_certificate /<absolute path to crt file>;
    ssl_certificate_key /<absolute path to key file>;
    server_name example.com;
    proxy_set_header X-Forwarded-For $remote_addr;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_dhparam <absolute path to Diffie Hellman key>;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
    server_tokens off;
    if ($rewrite_urn) {
        rewrite ^ $rewrite_urn redirect;
    }
    rewrite ^ <default URL> redirect;
}

I have simplified this server config so we can concentrate on the map settings. The config assumes that the domain uses SSL and that the certificate is valid. The if statement will only execute if $host$request_urn appears in the list with a $rewrite_urn; otherwise the last rewrite is executed.

The Question

How do I transform the $request_urn so that nginx understands it correctly? The map file contains the value in UTF-8, but it seems that nginx wants the $request_urn URL-encoded and in hexadecimal.

$request_urn as in the mapfile

example.com/Böhme

$request_urn URLEncoded as per Browser

example.com/B%C3%B6hme

$request_urn as I think nginx wants it

example.com/B\xC3\xB6hme

I can't seem to find a system package that has this feature, and I think I am starting to reinvent the wheel here.

I would need to:

create a function that URL-encodes the list, as per How to decode URL-encoded string in shell?

function urldecode() { local i="${*//+/ }"; echo -e "${i//%/\\x}"; }  

and then use octal dump (od), as per Convert string to hexadecimal on command line, so that the map bucket is created in memory with the correct values for the if statement test.

It's starting to feel like rocket science, and I can't believe that nobody has solved this problem before; I just can't seem to find a solution.

What are use cases for getting an IPv6 /64 subnet per server?

Posted: 17 Nov 2021 05:17 AM PST

I am somewhat new to the whole networking topic and am trying to understand why certain things are the way they are.

Right now I am struggling to understand why you get a whole /64 IPv6 subnet for each server when renting one. Is it because there are just enough addresses anyway and we might as well assign them? Or are there actual use cases? I find it hard to imagine that a single server could make use of that many addresses. Would it then not be better to allow for more subnets in the first place?

I know that there is an absurd number of IPv6 addresses available, so wasting them is not really a concern. But on the other hand, giving a /64 to each server effectively cuts the address bits in half, which seems strange considering that IPv4 was also once thought to be enough.

Thanks!

limit memory usage for each php-fpm pool

Posted: 17 Nov 2021 05:14 AM PST

A php-fpm config can limit a script to a certain memory usage using the memory_limit key. However, that only limits each individual script execution. What is a solution that can limit memory usage for an entire php-fpm pool?
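php-fpm itself has no per-pool memory cap, so one approach (a sketch; the unit name is a placeholder, and it assumes each pool runs as its own php-fpm instance/service) is to let systemd's cgroup limits bound the whole pool:

```
# /etc/systemd/system/php-fpm-poolA.service.d/override.conf
[Service]
MemoryHigh=448M   # start throttling/reclaiming above this
MemoryMax=512M    # hard cgroup limit for the master and all pool workers
```

Within a single shared php-fpm instance, the practical ceiling per pool remains pm.max_children times memory_limit, which can at least be tuned per pool.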

Moving docker container from linux based server to AWS

Posted: 17 Nov 2021 05:38 AM PST

I am trying to move a container running on a Linux-based server to AWS. I first created a tar file of the container using the following commands:

docker commit -p <container_id> <some_name>
docker save -o tar_file.tar <some_name>

I then moved this tar file to the AWS server and ran the following command:

docker load -i tar_file.tar  

After running this, a Docker image is created. I then ran the image using the command

docker run <image_name>  

But I get the following errors:

The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

standard_init_linux.go:228: exec user process caused: exec format error

I tried running the image with docker run --platform linux/amd64 <image_name>, but even that didn't resolve the issue; I got the same error.
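For what it's worth, the error message itself says what is wrong: the image is linux/amd64 but the host is linux/arm64/v8 (e.g. an AWS Graviton instance; the instance type is my assumption). A quick check on each machine:

```shell
# Prints x86_64 on Intel/AMD hosts and aarch64 on arm64 hosts; if the
# source server and the AWS server disagree, "exec format error" follows.
uname -m
```

docker run --platform linux/amd64 only selects among variants already present in the image; it does not emulate a foreign CPU. The usual fixes are to launch an x86_64 EC2 instance, or rebuild the image for arm64 (e.g. with docker buildx build --platform linux/arm64 from the Dockerfile).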

Downloading a file using Windows CMD line with curl/wget

Posted: 17 Nov 2021 05:12 AM PST

I have a client [a Windows 10 VM] and a server [say, a Linux-based VM].

I have Apache running on the Linux Server.

I have a file on the Linux server that I want to download to my Windows client.

I want to do it in two ways from the Windows CMD: using curl, and using wget.

I tried the following commands in my Windows CMD, but they don't work. Is something wrong with my CLI?

curl http://x.x.x.x/home/abc/ -O test.zip
wget http://x.x.x.x/home/abc/ -O test.zip

Edit: In essence, I want to understand the right CLI syntax for wget/curl to fetch a file from a certain directory on the remote server (/home/abc).
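For reference, both tools expect the full URL of the file, not just its directory; and curl's -O takes no argument (it keeps the remote filename), so in curl ... -O test.zip the test.zip is treated as a second URL rather than an output name. A sketch of the corrected syntax, using a file:// URL so the curl line can run locally; the http URLs assume Apache actually exposes /home/abc:

```shell
# Local demonstration of curl's -o (choose the output filename):
printf 'demo' > /tmp/gr_src.bin
curl -s -o test.zip file:///tmp/gr_src.bin

# Against the real server the commands would be:
#   curl -o test.zip http://x.x.x.x/home/abc/test.zip
#   wget -O test.zip http://x.x.x.x/home/abc/test.zip
```

Note also that the URL path is resolved against Apache's DocumentRoot (or an Alias/UserDir mapping), not the raw filesystem, so /home/abc must be mapped in the Apache config for those URLs to exist.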

Nginx returns 415 when using image_filter with webp

Posted: 17 Nov 2021 06:52 AM PST

I have some jpg/png files that get resized in a location (with the image_filter module), and that is working fine. But I also have a webp version of some images, and I want to serve the webp one if it exists; if not, the original jpg/png image should be served.

I'm using the following configuration:

map $http_accept $webp_suffix {
    default        "";
    "~image/webp"  "webp";
}

location ~ "/@s/(.*)(png|jpe?g)" {
    alias                       $BASE_PATH/$1;
    try_files                   $webp_suffix $2 $uri;

    image_filter                resize 1200 -;
    image_filter_jpeg_quality   80;
    image_filter_buffer         10M;
}

But nginx returns a 415 Unsupported Media Type error when the webp version is found. If the webp file is missing, it serves the jpg/png file without any error. The nginx version is 1.16.1.
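For comparison, the widely used webp-fallback pattern serves a pre-generated file.jpg.webp directly when the client accepts it, so the .webp bytes never pass through image_filter (which answers 415 when handed a format its libgd build cannot decode). A sketch against a plain root, with placeholder paths; adapting it to the alias plus resize location is the remaining work:

```
map $http_accept $webp_suffix {
    default   "";
    "~*webp"  ".webp";
}

location ~* ^/images/.+\.(png|jpe?g)$ {
    root /var/www;
    # serve /var/www/foo.jpg.webp if it exists and is accepted,
    # otherwise the original jpg/png
    try_files $uri$webp_suffix $uri =404;
}
```

The key difference from the config above is that try_files here tests real file paths ($uri with and without the .webp suffix), whereas try_files $webp_suffix $2 $uri tests the literal strings webp and jpg as paths.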

NEMA 5-20 female to NEMA 5-15 male power adapter for UPS: safe?

Posted: 17 Nov 2021 06:35 AM PST

I bought a large UPS for my server and didn't realize it comes with a NEMA 5-20 plug. We're in a residential setting and don't have those outlets. I see NEMA 5-15/20 female to NEMA 5-15 male power adapters, but that seems unsafe to me if the device expects a dedicated 20 amp circuit. This is the UPS: https://www.cdw.com/product/cyberpower-smart-app-online-ups-series-ol2200rtxl2u-ups-1.8-kw-2200-v/3059881?pfm=srh. Is it safe to use an adapter and plug it into a residential circuit?

X-Matching-Connectors exceeded allowed maximum

Posted: 17 Nov 2021 05:15 AM PST

When sending some mails from Postfix to Outlook 365 I receive an error:

Nov  1 08:00:00 mail postfix/smtp[16252]: B7E8079FA8F: to=<somemail.dk>, relay=somemail.mail.protection.outlook.com[104.47.7.138]:25, delay=0.71, delays=0.06/0/0.1/0.55, dsn=5.6.211, status=bounced (host somemail.mail.protection.outlook.com[104.47.7.138] said: 554 5.6.211 Invalid MIME Content: Single text value size (32784) exceeded allowed maximum (32768) for the 'X-Matching-Connectors' header. [FR3P281MB0970.DEUP281.PROD.OUTLOOK.COM] [AM6P192CA0016.EURP192.PROD.OUTLOOK.COM] [BE0DEU01FT017.eop-deu01.prod.protection.outlook.com] (in reply to end of DATA command))  

To avoid this I have tried to strip all X-Matching-Connectors headers from my mails, but this does not solve the problem; as a matter of fact, the outgoing mails do not seem to have this header at all (I used Postfix header_checks to remove another header just to make sure the mechanism works, and I can see that one is removed in the log).

I also cannot find any info on X-Matching-Connectors anywhere. Does anyone know what it is, and maybe where it is added?

How can I solve this problem?

Only found this online: https://answers.microsoft.com/en-us/msoffice/forum/all/getting-ndr-from-some-servers-headers-too-large/a3ace969-9d08-4d07-967a-5f40f9a0bad7

UPDATE == 5-11 ==

I have set up header_checks to log ALL headers in the outgoing mail, and the offending X-Matching-Connectors header is not sent from Postfix to Outlook. Maybe it is a header generated in the Microsoft mail server?

Further info: our Postfix server is also on a Linode server (as M Klein's, below), but running as a standard mail server.

Answer to comments:

Yes, the Postfix mail server has worked for years without this problem, and it can send to Gmail and other servers without issues.

Yes, I can send to the recipient from e.g. Gmail without issues.

No, it does not seem to be all email to Office 365 that has this issue, only some recipients/domains. But it is all mail sent to those domains.

Related info:

https://social.technet.microsoft.com/Forums/office/de-DE/8d08697c-c0fc-449f-88ca-c92c4e75b3d3/fehler-beim-senden-an-office-365-server?forum=office_generalde

https://www.linode.com/community/questions/22063/anybody-having-issues-sending-mail-to-exchange-online-domains-from-european-loca

How to configure DNS for Services and Pods in Kubernetes?

Posted: 17 Nov 2021 05:44 AM PST

I have been going through the K8s documentation on DNS for Services and Pods. The main issue I want to resolve: my K8s deployment uses NodePort as the service type, meaning I use the external IP addresses of the nodes to expose the service to the Internet. When I do this, my IP address is exposed, and I would rather have a hostname [a DNS name]. Going through the documentation linked above, I do not understand much of the concepts, owing to the fact that I'm new to K8s.

I have set up the NGINX Ingress Controller for bare-metal K8s, because my cloud provider does not offer a load balancing service.

So my question is: how do I set up ExternalDNS in my K8s cluster?

For reference purposes, these are my resources inside the K8s cluster.

Namespaces:

NAME              STATUS   AGE
default           Active   3d12h
ingress-nginx     Active   5h53m
kube-node-lease   Active   3d12h
kube-public       Active   3d12h
kube-system       Active   3d12h

Basically, I have all my deployments inside the default namespace.

kubectl get all -n default

NAME                               READY   STATUS    RESTARTS   AGE
pod/hello-docker-cc749b757-qfctr   1/1     Running   0          70m

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/hello-docker   NodePort    10.xxx.xxx.xxx   <none>        3000:30072/TCP   70m
service/kubernetes     ClusterIP   10.xxx.xxx.xxx   <none>        443/TCP          3d12h

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-docker   1/1     1            1           70m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-docker-cc749b757   1         1         1       70m

And this is the manifest file I have for the Service and Deployment of the hello-docker app:

apiVersion: v1
kind: Service
metadata:
  name: hello-docker
  labels:
    app: hello-docker
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 8000
    protocol: TCP
    name: http
  selector:
    app: hello-docker

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-docker
  labels:
    app: hello-docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-docker
  template:
    metadata:
      labels:
        app: hello-docker
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: hello-docker
        image: sebastian/hello-docker:1.1
        imagePullPolicy: Always
        ports:
          - containerPort: 8000
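Since the ingress-nginx controller is already installed, the service can be exposed by hostname with an Ingress resource instead of the NodePort; a minimal sketch (the host is a placeholder, and its DNS record still has to point at the cluster's entry point):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-docker
spec:
  ingressClassName: nginx
  rules:
  - host: hello.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-docker
            port:
              number: 3000
```

ExternalDNS then automates only the remaining step, creating the DNS record for spec.rules.host at a supported DNS provider; the record can also be created by hand.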

Any feedback and suggestions would be highly appreciated.

Cannot build any functions with Cloud Functions

Posted: 17 Nov 2021 05:46 AM PST

Somehow I keep getting build failures in our new cloud environment. I tried the default Hello World cloud functions for both Node.js and Python; both failed with no specific error messages. Below is one of the errors:

ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed: {"error":{"buildpackId":"","buildpackVersion":"","errorType":"OK","canonicalCode":"OK","errorId":"","errorMessage":""},"stats":[{"buildpackId":"google.utils.archive-source","buildpackVersion":"0.0.1","totalDurationMs":45,"userDurationMs":41},{"buildpackId":"google.python.functions-framework","buildpackVersion":"0.9.5","totalDurationMs":78,"userDurationMs":78},{"buildpackId":"google.python.pip","buildpackVersion":"0.9.2","totalDurationMs":5190,"userDurationMs":5186},{"buildpackId":"google.utils.label","buildpackVersion":"0.0.1","totalDurationMs":0,"userDurationMs":0}],"warnings":null}  

Remote work with Windows RDP

Posted: 17 Nov 2021 05:31 AM PST

We have 20 Windows XP PCs in a Windows 2003 domain controller/Active Directory domain. In the same domain we have a "large" physical Windows 2016 server (most of the time not in use). Because we have an application that requires IE6, we are stuck with Windows XP. Due to the situation that has arisen with COVID, users have to work from home, so the most traditional solution is remote access to the desktop. For this reason we used the VPN service of the Windows 2003 domain controller to connect users' home PCs to the corporate domain, and then RDP to the desired PC. Is there a better solution? I have read that Windows 2016 has many incredible features for remote work, but I have been told it cannot be used for this purpose because it is not the domain controller. Is that true?

changing netmask of loopback interface

Posted: 17 Nov 2021 05:16 AM PST

I can change the netmask of the loopback interface (usually the lo interface has 127.0.0.1/8):

pi@raspberrypi:~ $ ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
    inet 127.0.0.1  netmask 255.0.0.0
    inet6 ::1  prefixlen 128  scopeid 0x10<host>
    loop  txqueuelen 1000  (Local Loopback)

pi@raspberrypi:~ $ sudo ifconfig lo 127.0.0.1 netmask 255.255.255.0 up

pi@raspberrypi:~ $ ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
    inet 127.0.0.1  netmask 255.255.255.0
    inet6 ::1  prefixlen 128  scopeid 0x10<host>
    loop  txqueuelen 1000  (Local Loopback)

What negative effects can this have? What pitfalls does it hide?

Why does Samba4 fail with NT_STATUS_INTERNAL_ERROR on Ubuntu 18.04

Posted: 17 Nov 2021 07:02 AM PST

I am having trouble setting up Samba as an AD DC. At present I have one Ubuntu box which I'd like to use to share files with other computers in my home network, and the same machine that serves as the DC would also serve the files.

This is a home setup, meaning that I am using a consumer-grade router.

  • OS: Ubuntu 18.04
  • Samba: Version 4.7.6-Ubuntu

To begin each iteration of my attempts to get it working, I perform the recommended steps to kill any Samba processes etc. and remove the files discussed in "Preparing the Installation" from the setup guide: https://wiki.samba.org/index.php/Setting_up_Samba_as_an_Active_Directory_Domain_Controller

$ ps ax | egrep "samba|smbd|nmbd|winbindd"  

I then kill all processes as described.

I verify that Samba is installed:

$ which samba
/usr/sbin/samba
$ samba --version
Version 4.7.6-Ubuntu

The instructions also read

Verify that the /etc/hosts file on the DC correctly resolves the fully-qualified domain name (FQDN) and short host name to the LAN IP address of the DC. For example:

The exact contents are

127.0.0.1       localhost localhost.localdomain
192.168.1.1     DC1.samdom.example.com DC1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Note that the Ubuntu box's IP on the local network is 192.168.1.20; 192.168.1.1 is the LAN IP found on my router under the LAN tab in the IP Address field. Note that I do not have DDNS turned on on my router.

$ sudo samba-tool domain provision --use-rfc2307 --interactive
Realm: SAMDOM.EXAMPLE.COM
 Domain [SAMDOM]: SAMDOM
 Server Role (dc, member, standalone) [dc]: dc
 DNS backend (SAMBA_INTERNAL, BIND9_FLATFILE, BIND9_DLZ, NONE) [SAMBA_INTERNAL]: SAMBA_INTERNAL
 DNS forwarder IP address (write 'none' to disable forwarding) [192.168.1.1]: 8.8.8.8
Administrator password:
Retype password:
Looking up IPv4 addresses
Looking up IPv6 addresses
No IPv6 address will be assigned
Setting up share.ldb
Setting up secrets.ldb
Setting up the registry
Setting up the privileges database
Setting up idmap db
Setting up SAM db
Setting up sam.ldb partitions and settings
Setting up sam.ldb rootDSE
Pre-loading the Samba 4 and AD schema
Adding DomainDN: DC=samdom,DC=example,DC=com
Adding configuration container
Setting up sam.ldb schema
Setting up sam.ldb configuration data
Setting up display specifiers
Modifying display specifiers
Adding users container
Modifying users container
Adding computers container
Modifying computers container
Setting up sam.ldb data
Setting up well known security principals
Setting up sam.ldb users and groups
Setting up self join
Adding DNS accounts
Creating CN=MicrosoftDNS,CN=System,DC=samdom,DC=example,DC=com
Creating DomainDnsZones and ForestDnsZones partitions
Populating DomainDnsZones and ForestDnsZones partitions
Setting up sam.ldb rootDSE marking as synchronized
Fixing provision GUIDs
A Kerberos configuration suitable for Samba AD has been generated at /var/lib/samba/private/krb5.conf
Setting up fake yp server settings
Once the above files are installed, your Samba AD server will be ready to use
Server Role:           active directory domain controller
Hostname:              zoo-vault
NetBIOS Domain:        SAMDOM
DNS Domain:            samdom.example.com
DOMAIN SID:            …

Great, so far so good. I copy the krb5.conf file to /etc/krb5.conf as suggested.

I skip "Setting up the AD DNS back end", as I am using SAMBA_INTERNAL.

My /etc/resolv.conf looks like

# Generated by NetworkManager
search samdom.example.com
nameserver 192.168.1.1

I skip "Create a reverse zone" and then copy the Kerberos file as suggested.

This is where it goes wrong. I've started Samba with sudo samba, and the processes look like they are running, but none of the following verification commands given in the documentation work.

$ smbclient //localhost/netlogon -UAdministrator -c 'ls'
Enter SAMDOM\Administrator's password:
session setup failed: NT_STATUS_INTERNAL_ERROR
$ host -t SRV _ldap._tcp.samdom.example.com.
Host _ldap._tcp.samdom.example.com. not found: 3(NXDOMAIN)
$ host -t SRV _kerberos._udp.samdom.example.com.
Host _kerberos._udp.samdom.example.com. not found: 3(NXDOMAIN)
$ host -t A dc1.samdom.example.com.
Host dc1.samdom.example.com. not found: 3(NXDOMAIN)

I'm at somewhat of a loss here.

A few things to note: my Ubuntu machine's static IP on the local network is NOT 192.168.1.1 (the IP I used in the config steps above); it is 192.168.1.20. I've tried using that IP as well, to no avail.

I have also tried using none, 192.168.1.1 and 8.8.8.8 as the DNS forwarder IP address during setup to no avail.

I have found some articles online variously offering solutions or further test functions but have yet to find anything that solves my problem.

In the end I would like to set up Samba to function as a

How to run clamd via systemd as a daemon on CentOS 7

Posted: 17 Nov 2021 06:04 AM PST

# rpm -q centos-release
centos-release-7-5.1804.el7.centos.2.x86_64

# rpm -qa clam
clamav-filesystem-0.100.0-2.el7.noarch
clamav-data-0.100.0-2.el7.noarch
clamav-lib-0.100.0-2.el7.x86_64
clamav-update-0.100.0-2.el7.x86_64
clamav-server-systemd-0.100.0-2.el7.x86_64
clamav-devel-0.100.0-2.el7.x86_64
clamav-scanner-systemd-0.100.0-2.el7.x86_64
clamd-0.100.0-2.el7.x86_64
clamav-0.100.0-2.el7.x86_64

Below is clamd@.service as-is:

# cat /usr/lib/systemd/system/clamd\@.service
[Unit]
Description = clamd scanner (%i) daemon
After = syslog.target nss-lookup.target network.target

[Service]
Type = forking
ExecStart = /usr/sbin/clamd -c /etc/clamd.d/%i.conf
Restart = on-failure

I use the standard config with default settings:

# /etc/clamd.d/mail.conf
LogSyslog yes
TCPSocket 3310
TCPAddr 127.0.0.1
User clamscan

start

After starting there are no errors:

# systemctl start clamd@mail

09:02:35 -- clamd[3644]: Limits: Global size limit set to 104857600 bytes.
09:03:39 -- clamd[3664]: Received 0 file descriptor(s) from systemd.
09:03:39 -- clamd[3664]: clamd daemon 0.100.0 (OS: linux-gnu, ARCH: x86_64, CPU: x86_64)
09:03:39 -- clamd[3664]: Running as user clamscan (UID 992, GID 989)
09:03:39 -- clamd[3664]: Log file size limited to 1048576 bytes.
09:03:39 -- clamd[3664]: Reading databases from /var/lib/clamav
09:03:39 -- clamd[3664]: Not loading PUA signatures.
09:03:39 -- clamd[3664]: Bytecode: Security mode set to "TrustSigned".
09:04:01 -- clamd[3664]: Loaded 6575820 signatures.
09:04:08 -- clamd[3664]: TCP: Bound to [127.0.0.1]:3310
09:04:08 -- clamd[3664]: TCP: Setting connection queue length to 200

status

# systemctl status clamd@mail
● clamd@rspamd.service - clamd scanner (rspamd) daemon
   Loaded: loaded (/usr/lib/systemd/system/clamd@.service; static; vendor preset: disabled)
   Active: inactive (dead)

# lsof -i | grep 3310
Empty

It looks like the service is not running as a daemon. I tried editing /usr/lib/systemd/system/clamd@.service, but did not get the expected result (it always restarts):

[Service]
Type = simple
ExecStart = /usr/sbin/clamd -c /etc/clamd.d/%i.conf --foreground=yes
Restart = on-failure
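One likely mismatch (an assumption from the unit shown, not verified on this system) is between Type= and whether clamd actually stays in the foreground. A drop-in override that pairs Type=simple with a foreground clamd, leaving the packaged unit file untouched:

```
# Created via: systemctl edit clamd@mail
# -> /etc/systemd/system/clamd@mail.service.d/override.conf
[Service]
Type = simple
ExecStart =
ExecStart = /usr/sbin/clamd -c /etc/clamd.d/%i.conf
```

with Foreground yes added to /etc/clamd.d/mail.conf so the process does not daemonize under Type=simple; the converse fix is to keep Type=forking and make sure the config does not force foreground mode. The empty ExecStart= line is deliberate: it clears the inherited value before setting the new one.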

getting SSI's `exec` to work with apache

Posted: 17 Nov 2021 06:04 AM PST

So I have an apache-2.4.25 (as packaged in Debian/stretch), and I would like to use SSI's exec method.

<!--#exec cmd="ls" -->  

Unfortunately this gives me an error:

[an error occurred while processing this directive]  

In the logfiles it says

unknown directive "exec" in parsed doc /path/to/some/user/public_html/ssitest/index.shtml  

which I tracked down to having Options +IncludesNOEXEC enabled in my userdir.conf (which disables the exec directive for SSI). So I tried to turn that option off for a single specific vhost by putting the following into the VirtualHost section:

Options -IncludesNOEXEC
Options +Includes

Unfortunately this doesn't help.

So I tried it in the Directory section, but still no luck:

Alias /ssitest/ /path/to/some/user/public_html/ssitest/
Options -IncludesNOEXEC
Options +Includes
<Directory /path/to/some/user/public_html/ssitest/>
  Options -IncludesNOEXEC
  Options +Includes
</Directory>

Whenever I try to access my page, I get the "an error occurred while processing this directive" error.

So is there a way to disable a previously set option?

Is there a way to install an SSL certificate on EC2 instances running IIS using .ebextensions?

Posted: 17 Nov 2021 07:02 AM PST

I'm trying to find a simple way to install an SSL certificate on EC2 instances running IIS without having to RDP into each server. Can it also be configured to add and install the cert when instances spin up during auto scaling? I have been looking around for a while, but could not find a simple way to do this.
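A sketch of the .ebextensions approach, assuming this is an Elastic Beanstalk environment; the bucket URL, certificate password handling, and site name are placeholders, not a tested recipe:

```
# .ebextensions/https.config
files:
  "C:\\certs\\site.pfx":
    source: https://my-bucket.s3.amazonaws.com/certs/site.pfx
container_commands:
  01_import_cert:
    command: powershell.exe -NoProfile -Command "Import-PfxCertificate -FilePath C:\certs\site.pfx -CertStoreLocation Cert:\LocalMachine\My"
  02_bind_https:
    command: powershell.exe -NoProfile -Command "New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443"
```

Because .ebextensions run on every instance launch, this also covers instances created by auto scaling; the certificate would still need to be attached to the binding, and a password-protected PFX would need its password passed securely, e.g. via an environment property.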

Reset subscription or fix web app

Posted: 17 Nov 2021 06:04 AM PST

I'm trying to set up a web app, but I keep on getting errors.

If I try in the portal, I keep seeing that the status is "deleted" and that the deployment failed because Application Insights is not supported in my region.

I do not need application insights.

In Visual Studio I get the following error

---------------------------
Microsoft Visual Studio
---------------------------
Following errors occured during the deployment:

Error during deployment for resource 'AppInsightsComponents MySite' in resource group 'MegaSale': MissingRegistrationForLocation: The subscription is not registered for the resource type 'components' in the location 'Central US'. Please re-register for this provider in order to have access to this location..

Error during deployment for resource 'MySite' in resource group 'MegaSale': NoRegisteredProviderFound: No registered resource provider found for location 'West Europe' and API version '2.0' for type 'servers'. The supported api-versions are '2014-01-01, 2014-04-01, 2014-04-01-preview'. The supported locations are 'centralus, eastus, westus, southcentralus, eastus2, northcentralus, eastasia, southeastasia, japanwest, japaneast, northeurope, westeurope, brazilsouth, australiaeast, australiasoutheast, centralindia, westindia, southindia, canadacentral, canadaeast, westus2, westcentralus, uksouth, ukwest'..

and this occurs no matter which region I choose.

I would like to use Western Europe, but can accept a different region if it would just work.

I don't mind scrapping my whole subscription and starting anew, though I'd rather not if possible.

The resource group I certainly don't mind trashing totally.
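The MissingRegistrationForLocation error usually means the subscription's resource providers need (re-)registering; a hedged sketch with the Azure CLI, where the provider namespaces are assumptions based on the error text:

```shell
# Re-register the providers the deployment complains about
az provider register --namespace Microsoft.Insights
az provider register --namespace Microsoft.Web

# Check that registration has completed before redeploying
az provider show --namespace Microsoft.Insights --query registrationState
```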

Can a database created in MS SQL Server Express be exported to MS SQL standard?

Posted: 17 Nov 2021 05:58 AM PST

As the title states, can a database created in MS SQL Server Express be exported to MS SQL Standard?

For the intent of the question, the version in question is 2008.
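Express and Standard share the same on-disk format, so the usual route is backup/restore (or detach/attach). A minimal T-SQL sketch, with the database name, file paths, and logical file names assumed:

```sql
-- On the Express instance
BACKUP DATABASE MyDb TO DISK = N'C:\Backup\MyDb.bak';

-- On the Standard instance (same or newer version)
RESTORE DATABASE MyDb FROM DISK = N'C:\Backup\MyDb.bak'
WITH MOVE 'MyDb'     TO N'C:\Data\MyDb.mdf',
     MOVE 'MyDb_log' TO N'C:\Data\MyDb_log.ldf';
```

Note the direction matters: restoring to an equal or newer SQL Server version works, but a backup cannot be restored to an older version.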

How to change time source from "Local CMOS Clock" to "DC"

Posted: 17 Nov 2021 06:19 AM PST

In a domain, I want to set DC as time server.

To do that I use this command:

w32tm /config /manualpeerlist:europe.pool.ntp.org /syncfromflags:manual /reliable:yes /update  

and

w32tm /resync /rediscover  

In the client servers I use

net time \\<comp.name.of.ad> /set /y   

but some of the clients still use Local CMOS Clock.

What can I do?

Thanks in advance.

Edit:

I also run

w32tm /resync [/computer:<computer>] [/nowait] [/rediscover]  

on client end but the time server is still Local CMOS Clock for the client.

On the AD, the source is what I set (nist.expertssmi.com).

On the client end, the source is still Local CMOS Clock.
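A hedged sketch of the usual client-side fix, assuming the clients are domain-joined: tell w32time to sync from the domain hierarchy rather than a manual peer, then restart the service so the change takes effect.

```
w32tm /config /syncfromflags:domhier /update
net stop w32time && net start w32time
w32tm /resync /rediscover
w32tm /query /source
```

If /query /source still reports Local CMOS Clock, checking the Windows Time service startup type and the Type value under HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters (it should be NT5DS for domain members) would be the next step.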

Forwarding ports to guests in libvirt / KVM

Posted: 17 Nov 2021 05:36 AM PST

How can I forward ports on a server running libvirt/KVM to specified ports on VMs, when using NAT?

For example, the host has a public IP of 1.2.3.4. I want to forward port 80 to 10.0.0.1 and port 22 to 10.0.0.2.

I assume I need to add iptables rules, but I'm not sure where is appropriate and what exactly should be specified.

Output of iptables -L

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere            udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere            udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             10.0.0.0/24         state RELATED,ESTABLISHED
ACCEPT     all  --  10.0.0.0/24          anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Output of ifconfig

eth0      Link encap:Ethernet  HWaddr 00:1b:fc:46:73:b9
          inet addr:192.168.1.14  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::21b:fcff:fe46:73b9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:201 errors:0 dropped:0 overruns:0 frame:0
          TX packets:85 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:31161 (31.1 KB)  TX bytes:12090 (12.0 KB)
          Interrupt:17

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

virbr1    Link encap:Ethernet  HWaddr ca:70:d1:77:b2:48
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::c870:d1ff:fe77:b248/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:468 (468.0 B)

I'm using Ubuntu 10.04.
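A hedged sketch of the iptables rules, assuming eth0 carries the public address and that the libvirt-generated REJECT rules sit in FORWARD (hence -I to insert ahead of them); net.ipv4.ip_forward must also be enabled:

```shell
# DNAT incoming traffic on the host's public interface to the guests
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j DNAT --to-destination 10.0.0.2:22

# Let the forwarded traffic past libvirt's REJECT rules in FORWARD
iptables -I FORWARD -d 10.0.0.1/32 -p tcp --dport 80 -j ACCEPT
iptables -I FORWARD -d 10.0.0.2/32 -p tcp --dport 22 -j ACCEPT
```

One caveat: libvirt rewrites its chains when the network is restarted, so these rules are typically reapplied from a libvirt network hook script rather than added once by hand.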

Ubuntu Apache: httpd.conf or apache2.conf?

Posted: 17 Nov 2021 07:03 AM PST

Which one of these two files should I use to configure Apache?

The httpd.conf is empty, while apache2.conf is not.

It confuses me!
