Friday, March 4, 2022

Recent Questions - Server Fault



Mixing server models in Hyper-V Failover Cluster

Posted: 04 Mar 2022 06:02 AM PST

Does anyone have experience with a heterogeneous blend of servers in a Hyper-V failover cluster? We have a cluster with blended generations of Proliants (DL360 G9s and DL360 G10s), and I'm considering introducing Dell servers into the mix due largely to availability and pricing. Is this a bad idea, and why?

How do you set a self-destruct or maximum uptime in AWS?

Posted: 04 Mar 2022 05:59 AM PST

Situation

We have a sandbox AWS account for trying things out. It is not for production, purely just for playing around with all the toys that AWS provide. We want to encourage everyone to explore and learn.

We have many AWS accounts in our estate, including but not limited to,

  • sandbox
  • development
  • test
  • production

Financial and environmental responsibility is important to us.

Requirement

Potential solutions

aws-nuke

I have seen aws-nuke. If we ran this at midnight on Wednesdays and Sundays, it would terminate all instances. This sounds like a great solution; however, it is also somewhat dangerous, as it could terminate instances on other accounts by mistake. It also currently works on a block-nuke list rather than an explicit allow-nuke list, which is another potential security issue. I have logged aws-nuke#751 to address this.

Max uptime policy

The other method that I am looking into is to use a policy (IAM?) to set the maximum uptime for everything. I feel like this has less likelihood of leaking into our other accounts and also has the potential to be more nuanced. I'm not sure,

  • how best to implement this
  • whether it needs to be run in a lambda or can just be a policy
  • whether this is actually more secure than running aws-nuke across the estate.
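For the max-uptime approach, one possible shape is a scheduled Lambda that compares each instance's LaunchTime against a limit and stops the overdue ones. A minimal sketch of just the selection logic follows; the limit, the function name, and the instance-dict shape are illustrative assumptions, and the boto3 calls are only indicated in comments:

```python
from datetime import datetime, timedelta, timezone

# Illustrative limit -- not an AWS convention; pick whatever your sandbox policy says.
MAX_UPTIME = timedelta(hours=72)

def instances_to_stop(instances, now, max_uptime=MAX_UPTIME):
    """Return IDs of instances that have been up longer than max_uptime.

    `instances` is a list of dicts shaped like the `Instances` entries that
    EC2 describe_instances returns: {"InstanceId": ..., "LaunchTime": ...}.
    """
    return [
        i["InstanceId"]
        for i in instances
        if now - i["LaunchTime"] > max_uptime
    ]

# In a scheduled Lambda you would feed this from boto3, roughly:
#   ec2 = boto3.client("ec2")
#   reservations = ec2.describe_instances()["Reservations"]
#   ...flatten reservations into instances, then:
#   ec2.stop_instances(InstanceIds=instances_to_stop(instances, datetime.now(timezone.utc)))
```

Keeping the Lambda's IAM role scoped to the sandbox account is what would prevent the leakage into other accounts that aws-nuke risks.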

I would be tremendously grateful for any pointers.

Varnish 4.1 - How to serve cached copy on backend fetch failed instead of 503

Posted: 04 Mar 2022 05:55 AM PST

I have a site served by apache+tomcat and a cache served by Varnish 4.1

When apache is down, varnish always returns a 503 error.
I would like varnish to return the copy of the pages it has in its cache but my attempts with ttl and grace have been unsuccessful.
I think I've read all the documentation on varnish 4.1 that I could find, any help is really appreciated.
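For reference, the Varnish 4 grace-mode pattern from the official documentation combines a long `beresp.grace` with a health check in `vcl_hit`, so stale objects are served broadly only while the backend probe is failing. A sketch (it assumes the backend has a health probe defined and `vcl 4.0;` at the top of the file; the 6h/10s windows are placeholders):

```vcl
import std;

sub vcl_backend_response {
    # Keep objects around well past their TTL so they can be served stale.
    set beresp.grace = 6h;
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        # Object is still fresh.
        return (deliver);
    }
    if (std.healthy(req.backend_hint)) {
        # Backend is healthy: allow only a short stale window.
        if (obj.ttl + 10s > 0s) {
            return (deliver);
        }
    } else {
        # Backend is down: serve stale content within the full grace period.
        if (obj.ttl + obj.grace > 0s) {
            return (deliver);
        }
    }
    return (miss);
}
```

Without a health probe, `std.healthy()` always reports the backend as sick only after a fetch fails, so the probe is what makes the "serve stale instead of 503" behavior kick in promptly.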

Thanks in advance

Is it possible to restrict which object classes a dynamically linked auxiliary class can be added to when extending the AD schema?

Posted: 04 Mar 2022 05:41 AM PST

I've created a custom auxiliary class for the purpose of adding attributes to AD Group objects. I'm dynamically linking the auxiliary class to individual Groups. I can successfully add it to the objectClass of Group objects but I can also add it to other object types. I can't seem to find any clear documentation on how to restrict it to only Group objects. I've tried setting systemPossSuperiors/Possible Superior and it doesn't change the behavior.

It's not entirely clear that there is a way to restrict it, but other built-in classes seem to demonstrate such restrictions.

  1. Is it possible?
  2. If so, how?

Thanks.

Downsize the boot disk in VM Instance

Posted: 04 Mar 2022 05:39 AM PST

We have a VM instance with a 950 GB SSD disk and a MongoDB server running on it. We found that a 60 GB disk would be enough, so I was trying to resize the disk.

I followed a few posts and solutions but eventually ended up unable to SSH into the instance. https://stackoverflow.com/questions/50731578/google-cloud-how-to-reduce-disk-size

Looking for help.

How to set LDAP ACL permissions on one subtree for a group without modifying other objects' permissions?

Posted: 04 Mar 2022 05:27 AM PST

I use commands like this to set permissions for some custom groups/users. Now I want to set the permissions without overwriting them for all the other groups/users. What do I need to add to my ACL entries?

access to dn.subtree="cn=myContainer,dc=mydomain,dc=tld"
    by set="user & [cn=myGroup,cn=groups,dc=mydomain,dc=tld]/uniqueMember*" write
    by set="user & [cn=Domain Users,cn=groups,dc=mydomain,dc=tld]/uniqueMember*" read
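As background: slapd evaluates `access to` directives in order and stops at the first one whose target matches, which is why a new directive can mask existing ACLs for everyone it doesn't name. One way to let later/global ACLs keep applying to all other users is the `break` control at the end of the directive (a sketch reusing the DNs from the question; `break` is one of slapd's documented by-clause controls):

```
access to dn.subtree="cn=myContainer,dc=mydomain,dc=tld"
    by set="user & [cn=myGroup,cn=groups,dc=mydomain,dc=tld]/uniqueMember*" write
    by set="user & [cn=Domain Users,cn=groups,dc=mydomain,dc=tld]/uniqueMember*" read
    by * break
```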

Postfix causes issues when its service is enabled using systemctl and doesn't launch on startup

Posted: 04 Mar 2022 04:41 AM PST

On a Rocky Linux version 8.5 machine (a bug-for-bug compatible Red Hat Enterprise Linux downstream), I have configured a Postfix + Dovecot setup. After troubleshooting all configuration errors, I got to the point where both services would at least launch.

systemctl enable dovecot.service
systemctl enable postfix.service

After restarting the machine, I could see Dovecot launched properly when queried using systemctl status dovecot. Postfix, on the other hand, failed to start, reporting:

[root@mail ~]# systemctl status postfix
● postfix.service - Postfix Mail Transport Agent
   Loaded: loaded (/usr/lib/systemd/system/postfix.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since ...; 12min ago
  Process: 1419 ExecStart=/usr/sbin/postfix start (code=exited, status=1/FAILURE)
  Process: 1396 ExecStartPre=/usr/libexec/postfix/chroot-update (code=exited, status=0/SUCCESS)
  Process: 1364 ExecStartPre=/usr/libexec/postfix/aliasesdb (code=exited, status=0/SUCCESS)

systemd[1]: Starting Postfix Mail Transport Agent...
postfix/postfix-script[1506]: fatal: the Postfix mail system is already running
systemd[1]: postfix.service: Control process exited, code=exited status=1
systemd[1]: postfix.service: Failed with result 'exit-code'.
systemd[1]: Failed to start Postfix Mail Transport Agent.

A quick check using postfix status showed indeed it is not running. Surprisingly though, postfix start then started the service without any issues. Querying postfix status then reported Postfix is happily running with a new PID. Querying systemctl status postfix one more time after that showed the unchanged error report from before.

The error reported makes no sense, however. I can systemctl disable postfix, restart the machine, check that Postfix is truly not running using both systemctl status postfix and postfix status, try to start it with systemctl start postfix, and get the same error.

Furthermore, if I leave Postfix service disabled in systemd, reboot the machine and only start it with postfix start, the service kicks in, but systemctl status postfix reports it as loaded, inactive...

[root@mail ~]# postfix start
postfix/postfix-script: starting the Postfix mail system
[root@mail ~]# postfix status
postfix/postfix-script: the Postfix mail system is running: PID: 2169
[root@mail ~]# systemctl status postfix
● postfix.service - Postfix Mail Transport Agent
   Loaded: loaded (/usr/lib/systemd/system/postfix.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@mail ~]#

Why does Postfix on RHEL even come registered as a service when it categorically refuses to work as such? And what is then the proper way to ensure Postfix starts at boot?

Note: I tried chkconfig postfix on as I found it suggested by people online. That merely forwards the request to systemctl enable postfix.service which leads me back to the start.

... do I really have to hack it in using rc.local, when the contents of the file itself say it's there only for compatibility purposes, shouldn't be used anymore and I should consider working with systemd services?

High-availability strategy [closed]

Posted: 04 Mar 2022 04:30 AM PST

I have 4 servers, all Windows Server 2019. I need to set them up as 2 servers and 2 storage nodes and run a file server. The end goal is to connect them in a way that everything keeps working even if one server and one storage node go down. How can I accomplish this in a Windows environment?

Routing WIFI and LAN on Mac OS

Posted: 04 Mar 2022 04:25 AM PST

An old post couldn't help me. Things are like this:

I have one network (managed by a router with access to the internet). All Macs are on that network via Ethernet. On this network there is also a NAS for backups and some files.

Additionally I have Starlink Wi-Fi (so not a network, only access to fast Wi-Fi; that is how the Starlink router works).

Now that I'm on the Starlink Wi-Fi, I can't connect to the NAS over Ethernet. So, more or less, I only want the Ethernet connection for NAS traffic and Wi-Fi for default browsing etc. I tried a lot of things but couldn't manage it. Does someone have an idea?

Thanks in advance! Albert

Terraform Libvirt - How to use local qcow2 file

Posted: 04 Mar 2022 03:31 AM PST

I'm trying to provision some nodes for a Kubernetes cluster based on KVM and Debian. I want to use the Debian 11 Genericcloud image and cloud-init to initialize it. So I put the Debian base image in /var/lib/libvirt/images/templates on the remote machine where KVM runs. I worked through some tutorials and Server Fault posts, and they say I should handle it like this in my code:

resource "libvirt_volume" "diskimages" {
  count  = var.instance_count
  name   = "${var.instance_name}-${count.index}.qcow2"
  pool   = libvirt_pool.diskimage_pool.name
  source = var.baseimage
  format = "qcow2"
}

where baseimage = "/var/lib/libvirt/images/templates/debian-11-genericcloud-amd64.qcow2". But when I execute this I get the following error:

Error: error while determining image type for /var/lib/libvirt/images/templates/debian-11-genericcloud-amd64.qcow2: error while opening /var/lib/libvirt/images/templates/debian-11-genericcloud-amd64.qcow2: open /var/lib/libvirt/images/templates/debian-11-genericcloud-amd64.qcow2: no such file or directory
│   with libvirt_volume.diskimages[4],
│   on libvirt.tf line 25, in resource "libvirt_volume" "diskimages":
│   25: resource "libvirt_volume" "diskimages" {

The same happens when I try the solution from this Server Fault post. Then my code looks like this:

# create .qcow2 image for vm
resource "libvirt_volume" "diskimages" {
  count  = var.instance_count
  name   = "${var.instance_name}-${count.index}.qcow2"
  pool   = libvirt_pool.diskimage_pool.name
  source = "file///var/lib/libvirt/images/templates/debian-11-genericcloud-amd64.qcow2"
  format = "qcow2"
}

and I get the same error.

Does anyone have a clue what's going wrong here? Thanks in advance.

How to install 32-bit libGL.so.1 on 64-bit Ubuntu 21.10

Posted: 04 Mar 2022 03:04 AM PST

I'd like to run 32-bit software on my 64-bit Ubuntu 21.10, but I got an error:

error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory  

It's because the library is 64-bit. So I tried to install the 32-bit version of it, but it doesn't work.

First, I added the i386 architecture to my Ubuntu:

sudo dpkg --add-architecture i386  

and then I tried to install library:

sudo apt-get -y install libgl1:i386
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 gdm3 : Depends: gnome-session but it is not going to be installed or
                 x-session-manager or
                 x-window-manager or
                 x-terminal-emulator
        Recommends: gnome-session but it is not going to be installed or
                    x-session-manager
        Recommends: xserver-xephyr but it is not going to be installed
        Recommends: xserver-xorg but it is not going to be installed
        Recommends: zenity but it is not going to be installed
 gnome-session-bin : Depends: libegl1 but it is not going to be installed
                     Depends: libgl1 but it is not going to be installed
 gnome-shell : Depends: evolution-data-server (>= 3.33.1) but it is not going to be installed
               Depends: gir1.2-mutter-8 (>= 40.0) but it is not going to be installed
               Depends: gir1.2-webkit2-4.0 (>= 2.16.0) but it is not going to be installed
               Depends: libmutter-8-0 (>= 40.0) but it is not going to be installed
               Recommends: gnome-control-center (>= 1:3.25.2) but it is not going to be installed
               Recommends: gnome-user-docs but it is not going to be installed
               Recommends: ubuntu-session but it is not going to be installed or
                           gnome-session but it is not going to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

How to solve it?

Thanks in advance for advice.

How to register a variable in one role and use it in another one in Ansible

Posted: 04 Mar 2022 05:55 AM PST

I am trying to register a variable in a role and then use it in another one.

Here are the different files I'm using:

playbook.yml

---
- hosts: hostsgroup1
  [...]
  roles:
    - role1

- hosts: 127.0.0.1
  connection: local
  roles:
    - role2

role1/tasks/main.yml

- name: Example 1
  [...]

- name: Example 2
  shell:
    qm agent {{ VM_id }} network-get-interfaces |grep ip-address |grep '172.20' |grep -oE '((1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.){3}(1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])'
  register: var_role1

role2/tasks/main.yml

- name: Adding server to bastion
  ansible.builtin.debug:
    msg: Test {{ var_role1.stdout }}

For information, the qm agent command gives me an IP address, and I want to use it in the second role. But obviously, for now it displays an error when I execute the playbook:

fatal: [127.0.0.1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['proxmoxhosts']\" is undefined\n\nThe error appears to be in '/root/ansible/roles/bastion_add/tasks/main.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# tasks file for bastion_add\n- name: Adding server to bastion\n  ^ here\n"}  

To summarize, I want to use var_role1, registered in role1, in role2.
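One common pattern for this (a sketch; the group name matches the playbook above, and `vm_ip` is an illustrative fact name) is to promote the registered result to a fact in role1 and read it through hostvars in the second play, since registered variables are scoped to the host they ran on:

```yaml
# role1/tasks/main.yml -- right after the task that registers var_role1
- name: Keep the IP as a fact so other plays can read it
  ansible.builtin.set_fact:
    vm_ip: "{{ var_role1.stdout }}"

# role2/tasks/main.yml -- running on 127.0.0.1
- name: Adding server to bastion
  ansible.builtin.debug:
    msg: "Test {{ hostvars[groups['hostsgroup1'][0]]['vm_ip'] }}"
```

`groups['hostsgroup1'][0]` picks the first host of the first play; with several hosts you would loop over the group instead.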

Thanks !

Can't configure SMTP encryption - postfix

Posted: 04 Mar 2022 05:38 AM PST

I have docker-mailserver and Roundcube in containers; alongside them there is a MySQL database for mail data and user passwords. Dovecot inside is configured to verify logging-in users' passwords against the database. Yesterday I configured IMAP and it is working properly. Roundcube is also working with no problem. Now I am facing a problem configuring a secure connection for SMTP. Even with the setting "require" and similar to "always use STARTTLS", I don't get the possibility to send emails over a secure connection. Plain (insecure) connections work OK.

My postfix-main.cf file:

smtpd_use_tls = yes
smtpd_tls_cert_file = /etc/dovecot/fullchain.pem
smtpd_tls_key_file = /etc/dovecot/privkey.pem

smtpd_tls_eecdh_grade = strong
smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_ciphers = high
smtpd_tls_security_level = may
smtpd_tls_ciphers = high
tls_preempt_cipherlist = yes
smtpd_tls_mandatory_exclude_ciphers = aNULL, MD5, DES, ADH, RC4, PSD, SRP, 3DES, eNULL
smtpd_tls_exclude_ciphers = aNULL, MD5, DES, ADH, RC4, PSD, SRP, 3DES, eNULL
smtp_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1

# smtpd_tls_auth_only = yes
# smtp_use_tls = yes
# smtp_enforce_tls = yes
# smtpd_enforce_tls = yes

If I uncomment the last 4 lines, I get problems sending emails via Roundcube (the SMTP server expects a secure connection, but it is not configured on the Roundcube side). And there is also no possibility of securing SMTP communication from my home Thunderbird. TB with an insecure connection works OK.

I've seen the documentation here:

http://www.postfix.org/SASL_README.html

but it does not help much.

What is the proper configuration needed to make postfix/dovecot work with STARTTLS?

EDIT:

Configuration: https://pastie.io/hxcfkw.ini

What I am getting at connection is:

# telnet localhost 587
Trying 127.0.0.1...
Connected to localhost.localdomain.
Escape character is '^]'.
220 mail.correct_domain.com ESMTP
EHLO test.com
250-mail.correct_domain.com
250-PIPELINING
250-SIZE 10240000
250-ETRN
250-AUTH PLAIN LOGIN
250-AUTH=PLAIN LOGIN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-DSN
250 CHUNKING
^]
telnet> quit
Connection closed.

So it seems like the server is not offering any security.

Can I set multiple TXT records in my DNS? (in order to prove ownership of a domain to 2 mailbox systems)

Posted: 04 Mar 2022 04:50 AM PST

I am not a system/network engineer (I am a software developer), so this is not my cup of tea.

I have the following problem: I am configuring some Office 365 mailboxes for a client (that at the moment is using another old mail service).

Following this official documentation: https://docs.microsoft.com/en-us/microsoft-365/admin/get-help-with-domains/create-dns-records-at-any-dns-hosting-provider?view=o365-worldwide

I have to add the TXT record (provided in the Office 365 control panel) in order to prove that I have ownership of the specified domain.

So, this is not my territory and I have the following doubt: at the moment they are using another e-mail system (which will be replaced, but must still work for some days; I can't stop it now).

Does setting this TXT record mean adding a new TXT record (in order to prove that I own this domain)? In that case I will have 2 TXT records (one for Office 365 and one for the old mail system, and both should work fine). Or does it mean that I have to replace the old TXT record? (In that case I suppose the old e-mail system will not recognize my domain anymore, and that could be a problem.)

So my doubt is: can a domain have more than a single TXT record? And can this have any impact on the old e-mail system that must still work for some days?
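As background: DNS allows multiple TXT records to coexist at the same name, so a verification record can be added alongside whatever the old mail system uses. In zone-file terms the coexistence would look like this (both values are placeholders, not real records):

```
mydomain.tld.  3600  IN  TXT  "MS=msXXXXXXXX"
mydomain.tld.  3600  IN  TXT  "token-used-by-the-old-mail-system"
```

The old system keeps matching its own record and ignores the new one, since each consumer looks only for the string it expects.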

Thank you

RewriteRule won't match if trailing slash present, throws nginx 404

Posted: 04 Mar 2022 04:31 AM PST

With this .htaccess...

RewriteEngine on
RewriteRule ^foo$ foo.php
RewriteRule ^foo/$ foo.php

...I get the following behavior, when I request

  • /foo — works, I get foo.php
  • /foo/ — fails unexpectedly with a bare 404 from nginx
  • /foobar — fails, as expected, but with a pretty 404 ErrorDoc from nginx

Note: there is no folder foo present.

Am I missing something, or is this the hosting provider's fault, like an nginx proxy misconfiguration? On a different Apache setup, this works as expected.
(I'm trying to drill down into why WordPress's (default-ish) .htaccess doesn't work.)

cmd pipe -> System cannot find specified path

Posted: 04 Mar 2022 05:50 AM PST

Using cmd on Windows 10 Pro 21H2, when I try

echo Hello | find "Bye"  

I get The system cannot find the specified path. Same thing if trying

echo Hello | C:\Windows\System32\find.exe "Bye"  

So PATH does not appear to be the problem.

I need this working because of how Visual Studio Code connects to ssh servers:

type "C:\Users\thomedes\AppData\Local\Temp\vscode-linux-multi-line-command-vpc-13769646.sh" | ssh -T -D 64480 server bash  

which gives exactly the same issue.

EDIT:

Just tried on an old machine with Windows XP. Works flawlessly.

EDIT:

System info. It's in Spanish, but should be easy to understand. It's a normal Windows 10 installation. No magic tricks.

C:\Users\thomedes>dir echo*
El volumen de la unidad C es Windows
El número de serie del volumen es: XXXX-XXXX

Directorio de C:\Users\thomedes

No se encuentra el archivo

C:\Users\thomedes>dir find*
El volumen de la unidad C es Windows
El número de serie del volumen es: XXXX-XXXX

Directorio de C:\Users\thomedes

No se encuentra el archivo

C:\Users\thomedes>where find
C:\Windows\System32\find.exe

C:\Users\thomedes>where echo
INFORMACIÓN: no se pudo encontrar ningún archivo para los patrones dados.

Some more info: it runs fine when done like this:

C:\Users\thomedes>echo Hello > foo
C:\Users\thomedes>find "Bye" < foo
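One avenue worth checking (an assumption, not a confirmed diagnosis): cmd runs each side of a pipe in a fresh %ComSpec% process, so a broken ComSpec value or a Command Processor AutoRun registry entry can break pipes while plain redirection keeps working. These commands show both:

```
echo %ComSpec%
reg query "HKCU\Software\Microsoft\Command Processor" /v AutoRun
reg query "HKLM\Software\Microsoft\Command Processor" /v AutoRun
```

%ComSpec% should print C:\Windows\system32\cmd.exe, and ideally neither hive has an AutoRun value set.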

TLS version switching to 1.0 when DNS changes in python3.6.8 application on rhel 7

Posted: 04 Mar 2022 04:36 AM PST

I have an application written in Python 3.6.8, and one weird issue that our network team reported to us is that whenever the IP in the DNS server changes, our application sends TLS 1.0 requests instead of the TLS 1.2 it usually sends. This change happens only for the first request; succeeding requests are TLS 1.2. We have tried restricting the TLS version to 1.2 in the requests library, but the issue persists. Any idea why the TLS version changes for that first request only?

OpenSSL version: 1.0.2k

Python version: 3.6.8

Also, is this issue related to this older OpenSSL version or Python version?

Resize server succeed but not applied

Posted: 04 Mar 2022 02:59 AM PST

I would like to resize the server via this API. It returns 202, but the flavor is not applied. Its status is not changed to VERIFY_RESIZE; the status stays ACTIVE. I also tried it with SHUTOFF status. The same thing happens via the CLI and Horizon.

When I tried to confirm the resize via this, it does not work as the status is not VERIFY_RESIZE.

Target pool with multiple load balancers?

Posted: 04 Mar 2022 04:36 AM PST

As it's not possible to use one target group with multiple ELBs in AWS at the time this question is asked, is it possible to assign the same target pool to multiple cloud load balancers in Google Cloud Platform?

EDIT

My app is a multi-tenant app that should serve thousands of domains. I came up with this solution using AWS ECS, considering the limitations of:

  • Certificates number per load balancer
  • Target groups per ECS service.

(architecture diagram)

So, I am thinking of serving the domains not through one cluster only, but through multiple clusters sharing the load. As in the above diagram, traffic is spread 50%/50% across two clusters, in a method called active-active, as I read on the Shopify blog.

systemd[1]: Failed to start Advanced key-value store

Posted: 04 Mar 2022 05:05 AM PST

A little rodeo with redis. The error issued by redis is pretty unspecific ...

When doing journalctl -xe I got:

-- The process' exit code is 'exited' and its exit status is 1.
Nov 05 20:53:34 servername systemd[1]: redis-server.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit redis-server.service has entered the 'failed' state with result 'exit-code'.
Nov 05 20:53:34 servername systemd[1]: Failed to start Advanced key-value store.
-- Subject: A start job for unit redis-server.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit redis-server.service has finished with a failure.
--
-- The job identifier is 184424 and the job result is failed.

When doing sudo service redis status:

sudo service redis status
● redis-server.service - Advanced key-value store
     Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Thu 2020-11-05 20:53:35 UTC; 11min ago
       Docs: http://redis.io/documentation,
             man:redis-server(1)
    Process: 1468552 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=1/FAILURE)

Nov 05 20:53:35 servername systemd[1]: redis-server.service: Scheduled restart job, restart counter is at 5.
Nov 05 20:53:35 servername systemd[1]: Stopped Advanced key-value store.
Nov 05 20:53:35 servername systemd[1]: redis-server.service: Start request repeated too quickly.
Nov 05 20:53:35 servername systemd[1]: redis-server.service: Failed with result 'exit-code'.
Nov 05 20:53:35 servername systemd[1]: Failed to start Advanced key-value store.

So, how do I resolve this?

How to avoid automatic patching for SQL Server 2016 via Windows automatic update service?

Posted: 04 Mar 2022 03:03 AM PST

While updating OS patches, we see that SQL Server is also receiving hotfix patches; we don't want to install SQL Server patches and we don't want to stop OS patches from installing.

Microsoft says "By default, Windows Update client is configured to provide updates only for Windows. If you enable the Give me updates for other Microsoft products when I update Windows setting, you also receive updates for other products, including security patches for Microsoft SQL Server and other Microsoft software."

I did check this setting on the server and it was off and grayed out.

Hence, I believe when SQL Server was installed, the below option was checked and that is causing it to receive updates:

(screenshot of the SQL Server setup option)

So how can we disable it through some policy or registry key?

Am able to ping my domain, but not subdomain (digitalocean)

Posted: 04 Mar 2022 04:04 AM PST

My main domain (say: example.com) on DigitalOcean is working OK. I have only one droplet there. Then I created an 'A' record under my main domain with another subdomain name (1.example.com).

Then, I created another subdomain (2.example.com), in the same way we create a new domain in DO, and made it refer to the same droplet's IP address as my main domain's. I hope I've made myself clear.

And the problem is that I'm able to ping example.com, but not able to reach 1.example.com or 2.example.com (both created in slightly different ways in DO). It's been more than an hour since then. I've tried reducing the TTL from 3600 to 60 or 600. Ping says "no address associated with hostname". My actual subdomain names are 1.bobu.xyz and 2.bobu.xyz.

If I dig these subdomains in Windows Bash, they show the 'A' records pointing to DO's name servers. But no other record is there. How can I reach/ping them? What am I missing?

iptables rules for NAT with FTP

Posted: 04 Mar 2022 03:03 AM PST

I'm trying to create a NAT setup in order to achieve 2 tasks at once.

  1. Users from public network are able to access the FTP server
  2. Users in the LAN are able to use same WAN address 203.X.X.X to access the FTP server
network topology:

  [wireless router]  WAN: 203.x.x.x / LAN gateway: 10.0.0.138
    |-- win10 PC  (10.0.0.4)
    |-- laptop: Linux FTP server, iptables NAT running here
        wlan0: 10.0.0.113, ports 20,21, passive: 6000:7000

Now the FTP server is only accessible through the LAN at ftp://10.0.0.113. I want to forward a port to the local FTP server so that any user can use the WAN address 203.x.x.x to log in to the FTP server. I use Windows 10, which is in the same LAN, to do the test.

*nat
:PREROUTING ACCEPT [280:86644]
:INPUT ACCEPT [79:4030]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -j LOG
-A PREROUTING -d 203.213.238.12/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 10.0.0.113:21
-A PREROUTING -d 203.213.238.12/32 -p tcp -m tcp --dport 20 -j DNAT --to-destination 10.0.0.113
-A PREROUTING -d 203.213.238.12/32 -p tcp -m tcp --dport 6000:7000 -j DNAT --to-destination 10.0.0.113
-A OUTPUT -j LOG
-A OUTPUT -d 203.213.238.12/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 10.0.0.113:21
-A OUTPUT -d 203.213.238.12/32 -p tcp -m tcp --dport 20 -j DNAT --to-destination 10.0.0.113
-A OUTPUT -d 203.213.238.12/32 -p tcp -m tcp --dport 6000:7000 -j DNAT --to-destination 10.0.0.113
-A POSTROUTING -j LOG
-A POSTROUTING -d 10.0.0.113/32 -o wlan0 -p tcp -m tcp --dport 21 -j SNAT --to-source 10.0.0.138:21
-A POSTROUTING -d 10.0.0.113/32 -o wlan0 -p tcp -m tcp --dport 20 -j SNAT --to-source 10.0.0.138
-A POSTROUTING -d 10.0.0.113/32 -o wlan0 -p tcp -m tcp --dport 6000:7000 -j SNAT --to-source 10.0.0.138
COMMIT
# Completed on Thu Mar  2 19:40:51 2017
# Generated by iptables-save v1.4.21 on Thu Mar  2 19:40:51 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [412:52590]
-A INPUT -i wlan0 -j ACCEPT
-A FORWARD -o wlan0 -j ACCEPT
-A FORWARD -i wlan0 -j ACCEPT
COMMIT

I'm not sure what I missed, or whether there are some logical mistakes in the configuration. Any help would be appreciated.
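One thing the ruleset doesn't cover: FTP carries endpoint addresses inside the control channel, so NATing passive/active transfers generally relies on the kernel FTP helper modules being loaded, and IP forwarding must be on for the DNAT'd packets to be routed at all. A sketch (module names as in mainline Linux; recent kernels may additionally require assigning the helper explicitly with a CT rule):

```
modprobe nf_conntrack_ftp
modprobe nf_nat_ftp
sysctl -w net.ipv4.ip_forward=1
```

Without the helpers, only the port-21 control connection gets translated and data transfers stall.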

Copy VM Snapshot to a new VM Environment

Posted: 04 Mar 2022 04:43 AM PST

We have 2 separate VMWare environments, one is the main environment which has hundreds of virtual machines across lots of sites. The other is a much smaller one installed on one server, just for archiving old systems.

What I would like to do is take a snapshot of the current state of one of our live VMs, and use that to copy across to the other VMWare environment and create a new machine there, using that as the archive of that system.

Is this going to be possible/easy?

Convert virtual disk image to physical disk?

Posted: 04 Mar 2022 06:06 AM PST

I'm trying to use QEmu for Windows to convert a virtual disk image to a physical SSD. But I'm not sure about the syntax for the output_filename parameter. Here's what I tried:

qemu-img convert -p "D:\Virtual Machines\LinuxMint\LinuxMint-System.vdi" -O raw \\.\PHYSICALDRIVE5  

But I get this error:

qemu-img: Could not open '-O': Could not open '-O': Invalid argument  

Note that I do not have any partitions on the output drive - it's a bare drive.

Also, I only want to do this if I'll see a decent performance gain - so if it doesn't make a difference, then I won't bother.
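For what it's worth, qemu-img expects options before the positional source and target filenames (`qemu-img convert [options] -O fmt source target`), so the `-O raw` placed after the source in the command above is parsed as the output filename, which matches the error text. The reordered command would look like this (an untested sketch; the paths are the ones from the question):

```
qemu-img convert -p -O raw "D:\Virtual Machines\LinuxMint\LinuxMint-System.vdi" \\.\PHYSICALDRIVE5
```

Writing to \\.\PHYSICALDRIVE5 will likely also require an elevated (administrator) prompt.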

How to access Hadoop remotely?

Posted: 04 Mar 2022 04:42 AM PST

I have installed Hadoop on an OpenStack CentOS guest VM. I'm able to open the site:

(From 192.168.0.10, VM-1)
http://localhost:50070
http://192.168.0.10:50070

But I am not able to access it from a remote machine (my computer):

http://210.84.35.1:50070  

Here is my network diagram:

                                          Open-Stack
  [My-Computer] ---(Internet)--- [Remote Network 210.84.35.0/24] --- [Open-Stack-VM Network]
                                                                      192.168.0.10 [CentOS VM-1]
                                                                      192.168.0.11 [CentOS VM-2]

Asterisk Penalty for dynamic Agents

Posted: 04 Mar 2022 04:04 AM PST

Is there a way to use penalty with dynamic agents to order agents' call distribution for calls received in a queue?

We are using the linear ring strategy, and this only orders calls to dynamic agents by their login order.

Running multiple instances of a Batch file in windows simultaneously?

Posted: 04 Mar 2022 06:06 AM PST

I have a Windows batch file that is invoked by Windows Scheduler. When multiple scheduler tasks try to run the batch file simultaneously, the file is locked by the first process and all the other instances fail.

Is there a way in Windows to run multiple instances of a batch file simultaneously?

My script is a simple one; all it does is:

set java_classpath
java javaClass

Automatically binding applications to a network interface based on the account they are started with

Posted: 04 Mar 2022 05:05 AM PST

I'm looking for a way to have applications started from different accounts automatically bind to a specific network interface. For example: applications started from accountA bind to eth0 and applications started from accountB bind to eth1. Is there any way I can accomplish this? I hope this is easy to understand. I would like to do this because I'm looking to share a dedicated server with someone. It would be beneficial if we could have account-specific IPs so we could both run services requiring the same port without the hassle of trying to bind every application.

Tell Rsync to Skip Current file During Transfer?

Posted: 04 Mar 2022 02:48 AM PST

Is there a way to tell rsync to skip its current file while a sync is in progress, maybe by sending it a particular signal?

I already know about ignoring based on patterns, but this would be handy to me sometimes.
