Thursday, May 19, 2022

Recent Questions - Server Fault


How to trigger a condition alert like an if else statement in solarwinds

Posted: 19 May 2022 11:22 AM PDT

I'm a little confused about how to handle alerts using SolarWinds' trigger conditions.

I want alerting to be well optimized. What I want to happen is: 1. if the router is DOWN, alert only on the router, not on both the AP and the SWITCH; 2. if either the AP or the SWITCH is down, alert only on that device.

For statement 1 I have configured the condition: it alerts on the router only and not on both the AP and SWITCH, and that is fine. But with the condition I made, no alert is thrown when the AP or the SWITCH is down.

Please see the picture to visualize how it should work.

  1. (IF THE ROUTER IS DOWN alert only the router not the SWITCH & AP)
  2. (ELSE IF EITHER THE SWITCH OR AP IS DOWN ALERT THEM INDIVIDUALLY)


Thank you.

How to check in what request-response mode my HAProxy is operating in?

Posted: 19 May 2022 11:16 AM PDT

I have read that

Load balancers/reverse proxies usually have 2 operation modes.

In the first one, the requests from the clients are forwarded to one of the backends as if they came directly from the source. In this case the LB only redirects the request, and the backend answers the client directly.

In the second mode, the LB answers the request and then creates a new one to the backend with the content of the initial request. It then receives the answer and forwards it to the client.

How can I check which mode my HAProxy is operating in, and how can I switch from one mode to the other?
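A hedged note, for what it's worth: as far as I understand HAProxy, it always operates as a full proxy (the second mode described above) and does not support the first, direct-server-return style. What is configurable per frontend/backend is the layer it proxies at, via the `mode` directive, which you can inspect in the configuration. An illustrative fragment (names and addresses are assumptions, not the asker's config):

```
# /etc/haproxy/haproxy.cfg -- illustrative fragment
frontend web_in
    bind *:80
    mode http                  # full-proxy HTTP mode: HAProxy terminates the
                               # client connection and opens its own to a backend
    default_backend web_servers

backend web_servers
    mode http                  # "mode tcp" would proxy at layer 4 instead
    server app1 192.0.2.10:8080 check
```

A quick `grep -n 'mode' /etc/haproxy/haproxy.cfg` shows which mode each proxy section uses; if none is set, the default is `tcp`.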

VM Logical Volume shown on Host

Posted: 19 May 2022 09:55 AM PDT

Some context: I've built an OpenNebula cloud (https://opennebula.io/). I've been using it for more than two years now without any issue on that side. Recently I noticed a strange behavior.

On my hypervisors, which are the servers running libvirt and KVM, I am able to see the logical volumes of a guest. Moreover, as I am using LVM datastores to operate OpenNebula, I see all the logical volumes of all the recent guests. (This does not apply to the old VMs.)

Here is an example.

root@xxx:/home/xxe# lvs
  LV           VG            Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert

  # THIS IS NOT EXPECTED:
  image        on-disk       -wi-------  140,00g
  lv-mysql     on-disk       -wi-------  <25,00g
  data         opt           -wi-----p-   29,99g
  data         opt           -wi-------  <50,00g
  data         sql           -wi-------  <50,00g

  # THIS IS EXPECTED:
  image        vg-one-101    -wi-------  190,00g
  lv-one-164-0 vg-one-104    -wi-a-----   24,00g
  lv-one-167-0 vg-one-104    -wi-a-----  <24,59g
  lv-one-167-2 vg-one-104    -wi-a-----   50,00g
  lv-one-167-3 vg-one-104    -wi-a-----   50,00g
  lv-one-168-0 vg-one-104    -wi-ao----  <24,59g
  lv-one-168-2 vg-one-104    -wi-ao----  100,00g
  lv-one-173-0 vg-one-104    -wi-ao----  <24,59g
  lv-one-174-0 vg-one-104    -wi-ao----  <24,59g
  lv-one-174-2 vg-one-104    -wi-ao----  120,00g

I'm not really asking about how OpenNebula works, but more about KVM/LVM: why are the logical volumes of some, and only some, guests displayed on the host?
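A hedged guess at the mechanism: when guest disks are LVM logical volumes, any LVM metadata a guest writes inside its virtual disk is visible to the host's device scan, so guests that were themselves installed with LVM (apparently the recent ones) show up in `lvs` on the hypervisor. A common mitigation, sketched here with an assumed host PV path, is to restrict scanning in /etc/lvm/lvm.conf:

```
# /etc/lvm/lvm.conf -- illustrative fragment; the accepted device path
# is an assumption and must match the host's real physical volumes
devices {
    # Scan only the host's own PVs; reject everything else so LVM
    # signatures written inside guest disks are not picked up on the host.
    global_filter = [ "a|^/dev/sda2$|", "r|.*|" ]
}
```

After editing, `vgscan` (or a reboot) re-reads devices under the new filter; `lvs` should then list only host-side volumes.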

Getting SERVFAIL / NOTAUTH on Zone Transfer - ISC BIND 9

Posted: 19 May 2022 09:18 AM PDT

I have two BIND servers running BIND 9:

BIND 9.11.36-RedHat-9.11.36-3.el8 (Extended Support Version) <id:68dbd5b>
running on Linux x86_64 4.18.0-372.9.1.el8.x86_64 #1 SMP Tue May 10 08:57:35 EDT 2022
built by make with '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu'
  '--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr' '--exec-prefix=/usr'
  '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share'
  '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
  '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info'
  '--with-python=/usr/libexec/platform-python' '--with-libtool' '--localstatedir=/var'
  '--enable-threads' '--enable-ipv6' '--enable-filter-aaaa' '--with-pic' '--disable-static'
  '--includedir=/usr/include/bind9' '--with-tuning=large' '--with-libidn2'
  '--enable-openssl-hash' '--with-geoip2' '--enable-native-pkcs11'
  '--with-pkcs11=/usr/lib64/pkcs11/libsofthsm2.so' '--with-dlopen=yes' '--with-dlz-ldap=yes'
  '--with-dlz-postgres=yes' '--with-dlz-mysql=yes' '--with-dlz-filesystem=yes'
  '--with-dlz-bdb=yes' '--with-gssapi=yes' '--disable-isc-spnego' '--with-lmdb=no'
  '--with-libjson' '--enable-dnstap' '--with-cmocka' '--enable-fixed-rrset'
  '--with-docbook-xsl=/usr/share/sgml/docbook/xsl-stylesheets' '--enable-full-report'
  'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu'
  'CFLAGS= -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2
  -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches
  -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1
  -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
  'LDFLAGS=-Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld'
  'CPPFLAGS= -DDIG_SIGCHASE' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
compiled by GCC 8.5.0 20210514 (Red Hat 8.5.0-10)
compiled with OpenSSL version: OpenSSL 1.1.1k  FIPS 25 Mar 2021
linked to OpenSSL version: OpenSSL 1.1.1k  FIPS 25 Mar 2021
compiled with libxml2 version: 2.9.7
linked to libxml2 version: 20907
compiled with libjson-c version: 0.13.1
linked to libjson-c version: 0.13.1
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
linked to maxminddb version: 1.2.0
compiled with protobuf-c version: 1.3.0
linked to protobuf-c version: 1.3.0
threads support is enabled

default paths:
  named configuration:  /etc/named.conf
  rndc configuration:   /etc/rndc.conf
  DNSSEC root key:      /etc/bind.keys
  nsupdate session key: /var/run/named/session.key
  named PID file:       /var/run/named/named.pid
  named lock file:      /var/run/named/named.lock
  geoip-directory:      /usr/share/GeoIP

The master server is at 172.16.19.243 and the secondary at 172.16.19.251. They can ping each other and port 53 is open on both. Both used to work, but some new code was pushed in our automation and both lost network access for around two hours. It is possible the configuration was changed.

The secondary shows no zone files in /etc/named/. Zone transfers fail:

DNS-Secondary named[546308]: general: info: zone 19.16.172.in-addr.arpa/IN: refresh: unexpected rcode (SERVFAIL) from master 172.16.19.251#53 (source 0.0.0.0#0)  

/var/log/named/zone_transfers on the primary shows:

xfer-out: info: client @0x7f48600ebf90 69.61.12.108#47302 (ns4.mydomain.net): bad zone transfer request: 'ns4.mydomain.net/IN': non-authoritative zone (NOTAUTH)
... 3 days later the outage occurs, but no logs appear ...
... a few hours after the outage, and repeating to the present day ...
notify: info: zone mydomain.net/IN: sending notifies (serial 2022051909)
notify: info: zone 19.16.172.in-addr.arpa/IN: sending notifies (serial 2022051909)
notify: info: zone 16.16.172.in-addr.arpa/IN: sending notifies (serial 2022051909)
notify: info: zone 17.16.172.in-addr.arpa/IN: sending notifies (serial 2022051909)
notify: info: zone 18.16.172.in-addr.arpa/IN: sending notifies (serial 2022051909)

The problem is not resolved by running rndc retransfer mydomain.net. Requesting AXFR with dig also fails:

dig -t axfr mydomain.net 172.16.19.243

; <<>> DiG 9.11.36-RedHat-9.11.36-3.el8 <<>> -t axfr mydomain.net 172.16.19.243
;; global options: +cmd
; Transfer failed.
; Transfer failed.

Querying A and PTR records from the internet against the master works. Doing the same against the secondary now fails:

dig @172.16.19.251 191.19.16.172.in-addr.arpa ptr

; <<>> DiG 9.18.2 <<>> @172.16.19.251 191.19.16.172.in-addr.arpa ptr
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 57626
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 204e72e23787aef415f9ec7562866219e93a158c23f1f323 (good)
;; QUESTION SECTION:
;191.19.16.172.in-addr.arpa.    IN      PTR

;; Query time: 48 msec
;; SERVER: 172.16.19.251#53(172.16.19.251) (UDP)
;; WHEN: Thu May 19 10:28:39 CDT 2022
;; MSG SIZE  rcvd: 83

The /etc/named.conf of the master is shown below:

options {
        allow-query {
          none;
        };
        allow-transfer {
          none;
        };
        recursion no;

        auth-nxdomain no;    # conform to RFC1035
        minimal-responses yes;
        minimal-any yes;
        dnssec-enable yes;
        dnssec-validation yes;
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

//System Zones
zone "mydomain.net" IN {
  type master;
  file "/etc/named/mydomain.net.db";
  allow-query { any; };
  allow-transfer {
    localhost;
    172.16.19.243;
  };
  notify yes;
};

zone "16.16.172.in-addr.arpa" IN {
  type master;
  file "/etc/named/16.16.172.in-addr.arpa.rev";
  allow-query { any; };
  allow-transfer {
    localhost;
    172.16.19.243;
  };
  notify yes;
};

// Zones for 17 - 19 are included in the config with the *exact* same format.
// Programmatically generated - if there's a typo here, then there is in all.
// No zone transfers work.

/etc/named/16.16.172.in-addr.arpa on the master is as follows:

$TTL 86400

@ IN SOA ns3.mydomain.net. admin.mydomain.net. (
                2022051909 ;Serial
                3600       ;Refresh
                1800       ;Retry
                604800     ;Expire
                86400      ;Minimum TTL
)

;; This Name Server, and needed A record
@ IN NS ns3.mydomain.net.
ns3 IN A 172.16.19.243

;; All Zone NS Records
@ IN NS ns3.mydomain.net.
@ IN NS ns4.mydomain.net.

;; All Zone PTR Records

* IN PTR HDN-UIDO

Again, no DNS lookups for any record work on the secondary, but all work on the master. No zones transfer from the master to the secondary. All zones and configurations are generated programmatically, so if there is an error in one zone, it will be present in all. No other errors of note have been found in the logs. There are no SELinux denials on either server. Permissions of /etc/named/ are 0770 root:named system_u:object_r:named_conf_t:s0 on both servers. Removing all .jnl files did not help (there was only one, on the master, and not in /etc/named).

What could be the cause? Thank you.
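One thing worth checking, offered as a hedged observation from the config shown rather than a confirmed diagnosis: the allow-transfer ACLs on the master list 172.16.19.243, which is the master's own address, not the secondary at 172.16.19.251. If the automation regenerated the config with the wrong peer IP, every AXFR from the secondary would be refused. A corrected zone fragment might look like:

```
zone "mydomain.net" IN {
  type master;
  file "/etc/named/mydomain.net.db";
  allow-query { any; };
  allow-transfer {
    localhost;
    172.16.19.251;   // the secondary's address, not the master's own
  };
  notify yes;
};
```

After a `rndc reconfig` on the master, `dig @172.16.19.243 mydomain.net axfr` run from the secondary should show whether transfers are now permitted. The NOTAUTH from ns4 and the SERVFAILs on the secondary would then just be consequences of the secondary having no zone data to serve.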

redundant load balancer for Tomcat

Posted: 19 May 2022 10:00 AM PDT

I have three Tomcat webservers in a VMWare cluster.

At first we thought of using Apache as a load balancer on a physical server, but this would be a SPOF.

I have searched around and found this discussion, but I would need some more info. Does it make sense to run the two (or more) HAProxy servers as virtual machines rather than on physical servers? Can this active-passive configuration be built using Apache? I have found many active-passive configurations for Apache, but as a web server, not as a load balancer.
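One common pattern, sketched here as a hint rather than a definitive answer: run two HAProxy VMs as an active-passive pair with keepalived managing a floating virtual IP via VRRP. All IPs and the interface name below are assumptions:

```
# /etc/keepalived/keepalived.conf on the active LB (illustrative)
vrrp_instance VI_1 {
    state MASTER            # the passive node uses: state BACKUP
    interface eth0
    virtual_router_id 51
    priority 150            # passive node uses a lower value, e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.100/24      # floating VIP that clients connect to
    }
}
```

If the active VM dies, the backup claims the VIP within a second or two, which is much faster than waiting for VMware HA to restart the VM; the two mechanisms can also be combined.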

error 17054 severity 16 state 1 sql server 2014 enterprise edition

Posted: 19 May 2022 09:12 AM PDT

Our new database is simply inaccessible. I looked in the error log and found this error during that time. When we went to Configuration Manager, we saw that the Browser service, SQL Agent, and SQL Server services were stopped; when we tried to restart them, it just hung, and because of that I had to reboot. It is a production server and this is a recurring issue: it happened a month ago, it happened last week, and it happened again today. My DBA has repaired the SQL instance to make the server operational. I am not seeing any issue with database integrity, but I could not find the resolution yet. This is a SharePoint database. Have you had this kind of situation? How did you solve it?

SQL Server log during the time of SQL server inaccessible

How to copy a file to aws ec2 instance and use it in the user-data?

Posted: 19 May 2022 11:11 AM PDT

I have an RPM file which I want to install via the user-data of an EC2 instance created with Terraform. My search turned up the file provisioner, but I found that it runs after user-data.

Any suggestions on how to do this?
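One hedged approach (resource names per recent AWS provider versions; older providers use aws_s3_bucket_object, and the bucket name, key, and IAM role here are assumptions): stage the RPM in S3 and have user-data download and install it at first boot, so nothing needs to be copied onto the instance beforehand:

```
# Illustrative Terraform sketch, not a drop-in config
resource "aws_s3_object" "pkg" {
  bucket = "my-artifact-bucket"
  key    = "packages/myapp.rpm"
  source = "files/myapp.rpm"
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  # The instance profile must grant s3:GetObject on the bucket
  iam_instance_profile = aws_iam_instance_profile.app.name

  user_data = <<-EOF
    #!/bin/bash
    aws s3 cp s3://my-artifact-bucket/packages/myapp.rpm /tmp/myapp.rpm
    yum install -y /tmp/myapp.rpm
  EOF
}
```

The same idea works with any reachable artifact store (an internal HTTP server, CodeArtifact, etc.); the key point is that user-data pulls the file rather than Terraform pushing it.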

Kubernetes nginx ingress: How to redirect foo.example.org to example.org?

Posted: 19 May 2022 09:35 AM PDT

My ingress currently looks like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - example.org
        - app.example.org
      secretName: prod-tls
  rules:
    - host: example.org
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service
              servicePort: 80
    - host: app.example.org
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service
              servicePort: 80

But now I want to redirect app.example.org to example.org instead. How can I do this?

I found this example using the ingress.kubernetes.io/configuration-snippet annotation, but I don't know which domains it would apply to.

I'm using Helm nginx-ingress-1.37.0; app ver 0.32.0.
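A hedged sketch (the annotation name is as documented for the NGINX ingress controller; verify it against your controller version, 0.32.0): annotations apply to the whole Ingress resource, so splitting app.example.org into its own Ingress is what scopes the redirect to that host only:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-redirect
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # 301-redirect everything on this Ingress's hosts to the bare domain
    nginx.ingress.kubernetes.io/permanent-redirect: https://example.org
spec:
  tls:
    - hosts:
        - app.example.org        # keep TLS so the redirect itself is served over HTTPS
      secretName: prod-tls
  rules:
    - host: app.example.org
      http:
        paths:
          - path: /
            backend:              # still required syntactically, never reached
              serviceName: app-service
              servicePort: 80
```

The original Ingress would then keep only the example.org rule. The configuration-snippet annotation from the example you found is similarly per-Ingress, so it would have affected both hosts as long as they shared one resource.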

DNS was changed but I was still seeing old site

Posted: 19 May 2022 09:03 AM PDT

I am facing a weird problem. I have a domain pointed at the server where my site is hosted. The DNS for the domain was mistakenly changed and started pointing somewhere else, but I was still able to see, log in to, and edit the website. However, when someone tried to access it from another country, they got an error.

What could be the possible reason that I was still able to see all this even though the DNS was changed?

How to disable IGMP in Raspbian

Posted: 19 May 2022 10:02 AM PDT

I am writing testcases for the IGMP and MLD implementation of a network switch. Those testcases run on Raspbian. However, Raspbian seems to regularly send IGMP reports/queries of its own, which interfere with my testcases. How can I disable those packets, either globally or for a given interface?

I have seen this answer, but had no luck in figuring out which process generates the IGMP traffic. I do not have any applications installed that require multicast groups, to the best of my knowledge.

Because I need to send and receive IGMP with Scapy for the testcases, just blocking IGMP in the firewall is not an option.

Here is the traffic in question:

pi@raspberrypi204:~ $ sudo tshark -i any -Y igmp
tshark: Lua: Error during loading:
[string "/usr/share/wireshark/init.lua"]:46: dofile has been disabled due to running Wireshark as superuser. See http://wiki.wireshark.org/CaptureSetup/CapturePrivileges for help in running Wireshark as an unprivileged user.
Running as user "root" and group "root". This could be dangerous.
Capturing on 'any'
[...]
196  62.189350 192.168.178.202 -> 224.0.0.22   IGMPv3 64 Membership Report / Join group 224.0.0.252 for any sources / Join group 224.0.1.12 for any sources
197  62.344484 192.168.178.201 -> 224.0.0.22   IGMPv3 62 Membership Report / Join group 224.0.0.252 for any sources
198  62.356118 192.168.178.201 -> 224.0.0.22   IGMPv3 62 Membership Report / Join group 224.0.0.252 for any sources
199  62.357405 192.168.2.206   -> 224.0.0.22   IGMPv3 62 Membership Report / Join group 224.0.0.251 for any sources
201  62.361857 192.168.178.201 -> 224.0.0.22   IGMPv3 62 Membership Report / Join group 224.0.0.252 for any sources
206  62.384387 192.168.178.201 -> 224.0.0.22   IGMPv3 62 Membership Report / Join group 224.0.0.252 for any sources
[...]

I am using:

pi@raspberrypi204:~ $ lsb_release -a
No LSB modules are available.
Distributor ID: Raspbian
Description:    Raspbian GNU/Linux 8.0 (jessie)
Release:        8.0
Codename:       jessie
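A hedged sketch of things to try (the interface name eth0 is an assumption, and whether these fully silence the reports depends on what holds the group memberships):

```
# Show which multicast groups the kernel has joined on the interface;
# each joined group is what triggers IGMP membership reports
ip maddr show dev eth0
cat /proc/net/igmp

# Drop the multicast capability on the interface entirely; the kernel
# then stops sending IGMP membership reports out of it
sudo ip link set dev eth0 multicast off

# 224.0.0.251 (mDNS) and 224.0.0.252 (LLMNR) in your capture suggest a
# discovery daemon is joining groups; stopping avahi (if installed)
# removes those memberships at the source
sudo systemctl stop avahi-daemon.socket avahi-daemon.service
```

Since your Scapy test cases craft their own packets, they should keep working with interface-level multicast disabled, but it is worth verifying that Scapy's raw sockets on that interface behave as expected afterwards.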

sshd is already running though keeps trying to start

Posted: 19 May 2022 08:59 AM PDT

I have a Centos 7 server and sshd is running and accepting connections just fine.

The problem is that the messages log keeps reporting failed sshd startup attempts, and the secure log keeps reporting that sshd can't start because port 22 is in use.

messages:

Mar 15 12:03:01 ded2100 systemd[1]: Starting Session 10614 of user root.
Mar 15 12:03:05 ded2100 systemd[1]: sshd.service start operation timed out. Terminating.
Mar 15 12:03:05 ded2100 systemd[1]: Failed to start OpenSSH server daemon.
Mar 15 12:03:05 ded2100 systemd[1]: Unit sshd.service entered failed state.
Mar 15 12:03:05 ded2100 systemd[1]: sshd.service failed.

secure:

Mar 15 12:01:34 ded2100 sshd[14947]: error: Bind to port 22 on 0.0.0.0 failed: Address already in use.
Mar 15 12:01:34 ded2100 sshd[14947]: error: Bind to port 22 on :: failed: Address already in use.

Today my server failed; SSH went down as well, requiring a hard reboot. I want to make sure sshd is as solid as it can be, so I can rest assured that if it can be up, it will be up.

Thanks.

EDIT
My sshd_config is here - https://gist.github.com/cbiggins/3cb4fcc1af25da63e89b1fab2eb7d57c

EDIT #2

[root@ded2100 log]# ss -p -o state listening '( sport = :ssh )'
Netid  Recv-Q Send-Q   Local Address:Port   Peer Address:Port
tcp    0      128      *:ssh                *:*        users:(("sshd",pid=1956,fd=3))
tcp    0      128      :::ssh               :::*       users:(("sshd",pid=1956,fd=4))
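A hedged diagnostic sketch, based on the symptom that port 22 is already bound while systemd thinks sshd never started: check whether the listening sshd (PID 1956 in the ss output) is actually the instance systemd is tracking, or one started outside systemd (by hand, by a leftover init script, or by a control panel):

```
# How was the process that owns port 22 started, and who is its parent?
ps -o pid,ppid,lstart,cmd -p 1956

# What systemd believes: main PID, state, and recent start attempts
systemctl status sshd

# Any overrides or duplicate unit definitions that could confuse startup?
systemctl cat sshd
ls /etc/systemd/system/sshd.service* /usr/lib/systemd/system/sshd.service 2>/dev/null
```

If the PIDs disagree, one hedged remedy is to kill the stray instance and let systemd own the service again (`kill 1956 && systemctl start sshd`), ideally from a console session rather than over SSH.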

How to create subdomains using nginx and proxy_pass for each

Posted: 19 May 2022 10:02 AM PDT

I currently have nginx setup for my server at my.server.com. Using the current configuration I access different applications using http://my.server.com/app1 or http://my.server.com/app2.

I have an apps.conf placed in /etc/nginx/sites-enabled/; this is what it looks like:

upstream app1_servers {
    server 172.12.11.10:8080;
}

upstream app2_servers {
    server 172.12.11.10:9090;
}

server {
    listen 80;
    server_name my.server.com;
    return 301 https://my.server.com/$request_uri; #force https
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/my.server.com/cert.pem;
    ssl_certificate_key /etc/ssl/my.server.com/priv.pem;
    server_name my.server.com;

    location /app1 {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app1_servers/app1;
        proxy_redirect http://$host https://$host;
        proxy_set_header Host $host;
    }

    location /app2 {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app2_servers/app2;
        proxy_redirect http://$host https://$host;
        proxy_set_header Host $host;
    }
}

Question

The above works fine. However, I would now like to change how I access app1 and app2: I would like to access them at http://app1.my.server.com and http://app2.my.server.com, while still forcing SSL and using proxy_pass.

What configuration changes do I need to make for this to take effect? Additionally, I would like to keep configuration settings for each application in its separate file.
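A hedged sketch of one such per-application file, e.g. /etc/nginx/sites-enabled/app1.conf (this assumes DNS records for the subdomains and a certificate valid for app1.my.server.com, such as a wildcard for *.my.server.com; your existing cert may only cover my.server.com):

```nginx
upstream app1_servers {
    server 172.12.11.10:8080;
}

server {
    listen 80;
    server_name app1.my.server.com;
    return 301 https://app1.my.server.com$request_uri;  # force https
}

server {
    listen 443 ssl;
    server_name app1.my.server.com;
    ssl_certificate     /etc/ssl/my.server.com/cert.pem;
    ssl_certificate_key /etc/ssl/my.server.com/priv.pem;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_pass http://app1_servers/;   # note: no /app1 path prefix anymore
    }
}
```

A matching app2.conf differs only in the upstream port and server_name. nginx picks the server block by the Host header, so the two files coexist without touching each other; check with `nginx -t` and reload.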

Linux diskless boot - NFS share not mounting during ramdisk boot

Posted: 19 May 2022 09:03 AM PDT

(This is my first post so hopefully I'm formatting it correctly). I've added in as much information as possible without being TL:DR.

My basic issue is that I hit walls when trying to do a PXE diskless boot to an NFS server (CentOS 6.7 or CentOS 7). I have tried various things and I can't seem to replicate the success that I initially had with a CentOS7 server and client. Every time I follow my notes now I'm getting nowhere.

The most common error I am getting (depending on which initrd.img or initramfs*.img file I use) is

a ticker of *** showing a text-based progress bar, and the message:

A start job is running for dev-nfs.device (xx s / 1min 30s)

Then it times out and says

Timed out waiting for device dev-nfs.device
Dependency failed for File System Check on /dev/nfs
Dependency failed for /sysroot
Dependency failed for Initrd Root File System
Dependency failed for Reload Configuration from the Real Root

The above error occurs when I copy (any of) the initramfs-3.10.*.img from /boot/ to the PXE image location.

If I try to generate a new initramfs image file from dracut, it also throws the above error.

dracut initramfsnew.img  

It could be that I don't know how to generate a proper initramfs, or that I'm really not understanding the initrd.img and initramfs functions. I believe the timeout happens because the NFS drivers are not yet loaded at that stage of the boot process, so the client cannot mount the NFS share. The reason I think this is that I've booted the exact same PXE client into its local OS and manually mounted the NFS share, and it works 100%, so the NFS share is active and working. I believe I have the wrong understanding of how initrd.img and initramfs*.img work.

If I download initrd.img from a CentOS mirror site, I get 90% of the way there and then the error changes to

No /sbin/init trying fallback  

I am then dropped into (for want of a better term) a half-loaded shell that gives me basic navigation of the NFS share. I can go to /home/disklessuser/ and even write to the NFS share or read new files from it (tested with simple 'touch' commands on both server and client). What seems to be missing, primarily, is the login step, as well as proper directory boundaries (i.e. I seem to be logged in as root at this point in the boot-up).

The basic configuration is pretty standard AFAIK:

/var/lib/tftpboot/pxelinux.cfg/default contains (I've left out the bits that I know work - the PXE works and points to the right image etc):

menu label ^1) CentOS 7
  kernel /images/centos7/vmlinuz
  append root=/dev/nfs initrd=/images/centos7/initrd.img nfsroot=10.10.10.10:/srv/nfs/diskless/images/centos7/root rw selinux=0

I've tried variants of the above, like replacing the initrd.img with initramfs3.10*.img (various versions located in the server's /boot/) and have tried adding in parameters like

ip=dhcp  

because dracut documentation suggests this will tell it to get the nfsroot path from DHCP instead of the PXE menu.

I've currently got my DHCP configured as so:

subnet 10.10.10.0 netmask 255.255.255.0 {
   option broadcast-address 10.10.10.255;
   option routers 10.10.10.1;
   range 10.10.10.100 10.10.10.150;

   next-server 10.10.10.10;
   option root-path "10.10.10.10:/var/lib/tftpboot";
   filename "pxelinux.0";
}

Possibly this is conflicting with the NFS share prescribed in the PXE menu?

Anyway, I would appreciate any guidance. Most pertinent for me is what to do about the initrd or initramfs: I presume there's not much difference between the two, but how would one generate a new one that includes basic network drivers to allow an NFS mount?

Secondly, why is /sbin/init missing when I'm near as heck at the solution when I use the initrd.img stored in the CentOS mirror directory under /os/x86_64/isolinux ?
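On the initramfs question, a hedged sketch (dracut module names as I understand them; check `dracut --list-modules` on your system): by default dracut builds a host-only image tailored to the machine it runs on, which can omit the network and NFS-root pieces a diskless client needs. Building a generic image with those modules added explicitly looks roughly like:

```
# Build a non-host-only initramfs that includes network + NFS root support
dracut --no-hostonly --add "network nfs" /boot/initramfs-nfs.img $(uname -r)

# Verify the nfs bits actually made it into the image
lsinitrd /boot/initramfs-nfs.img | grep -i nfs
```

Copy the resulting image to the TFTP path your pxelinux `initrd=` entry points at. This would also explain the difference you saw: the initrd.img shipped in the CentOS mirror's isolinux directory is a generic installer image with network/NFS support built in, while the initramfs files in /boot/ are host-only.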

Deleted printers keeps coming back - and multiply

Posted: 19 May 2022 09:08 AM PDT

My users are on 2012 R2 RDS Session Host servers.

I've used "Deploy Printers" (from Print Management) to deploy 4 printers. Over the last week I've had a lot of problems where users can't print. If I deleted a printer and added it again, they could print just fine.

Now I've removed all printer deployment from the GPO, and I have no printers in any login scripts. I did a gpupdate /force, but all 4 printers are now listed 3 times...


If I delete the printers and log off and back on, all the printers are popping up again. Sigh! This is driving me nuts.

This script doesn't show any of the "SVFREJA" printers...

Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")
Set colPrinters = objWMIService.ExecQuery("Select * From Win32_Printer")

If colPrinters.Count <> 0 Then ' If there are some network printers
    Dim s
    s = ""
    For Each objPrinterInstalled In colPrinters ' For each network printer
        s = s + objPrinterInstalled.Name + chr(13)
    Next
    msgbox s
End If

It gives me this result...


My problem is not with the "redirected" printers; my problem is that I have several printers with the same name (on SVFREJA) and I can't get rid of them.

Any idea why I can't get rid of the "orphaned" printers?

How to start a new instance of QEMU based on the same image and snapshot?

Posted: 19 May 2022 11:00 AM PDT

I have a QEMU image (qcow2) with a snapshot stored in it. Right now I'm using libvirt to start it.

However, I want to be able to run more than one instance of the same image snapshot.

I guess I could do that by cloning the virtual HD, installing/creating a new domain (virsh), and then running a revert from the snapshot. But I want to be able to do that pretty much on the fly, with as little latency as possible between the time I decide I need another instance of image X and the time that instance is running from the stored snapshot. (I want to avoid writing to the hard drive as much as possible.)

Anyone did anything like that? I started thinking maybe libvirt is not low-level enough for this?
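One hedged approach at the image level (paths are illustrative, and the -F backing-format flag assumes a reasonably recent qemu-img): give each instance a thin qcow2 overlay backed by the shared base image, so starting a new instance writes almost nothing to disk:

```
# Per-instance copy-on-write overlay; creation is near-instant because
# only metadata is written, and the base image stays read-only
qemu-img create -f qcow2 -b /var/lib/images/base.qcow2 -F qcow2 \
    /var/lib/images/instance1.qcow2

# Boot the overlay directly; all writes land in instance1.qcow2
qemu-system-x86_64 -m 2048 \
    -drive file=/var/lib/images/instance1.qcow2,format=qcow2
```

With libvirt on top, each overlay can be wrapped in its own (possibly transient) domain XML, which keeps the per-instance startup cost at one small file plus a domain definition. Note that internal snapshots stored in the base image are not directly restorable from an overlay, so for many instances it may be simpler to keep the base image in the exact state you want instances to start from.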

smb share takes forever to connect to from Mac OS X 10.7-8

Posted: 19 May 2022 11:00 AM PDT

I've got a dozen users, and half of them take forever to connect to the SMB share served by a Windows Server 2008 R2 Standard server. Some users connect instantly with no issue.

These Mac OS X workstations have been clean-formatted to see if it was an OS issue, but some still take forever to connect.

I am wondering if there is something on the server side that can assist.
