SQL Server Kerberos auth with Azure AD joined device Posted: 27 Dec 2021 12:24 AM PST We are using Win10 devices that are not domain-joined; they are Azure AD joined. We are using an app that relies on SQL Windows auth and works when the device is domain-joined. Our Windows 10 devices have Always On VPN set up and are configured to obtain a Kerberos ticket to authenticate to, e.g., an on-prem fileshare and other on-premises resources. Is it possible to configure SQL Server to accept Kerberos, and to authenticate from Windows 10 devices that are not domain-joined but can obtain a Kerberos ticket using Always On VPN? |
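For reference, Kerberos authentication to SQL Server hinges on a Service Principal Name registered for the database engine's service account; a minimal sketch, assuming a default instance on port 1433 and placeholder account/host names:

    :: register the SPN for the SQL Server service account (names are placeholders)
    setspn -S MSSQLSvc/sqlhost.corp.example.com:1433 CORP\svc-sqlserver

    :: list what is currently registered for that account
    setspn -L CORP\svc-sqlserver

If the client can already obtain tickets for the file share over Always On VPN, the same machinery should apply to an MSSQLSvc SPN, provided the client connects using the FQDN named in the SPN.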
Unable to create pipeline even with write permissions on GitHub; how to fix that? Posted: 26 Dec 2021 11:43 PM PST I have write permissions on the repository. I can create branches, create pull requests, and even merge them. But when creating a pipeline on Azure, I get the error shown in this screenshot: https://i.stack.imgur.com/gcC4w.png What more do I need to add in the GitHub or Azure settings to fix this? |
How to check how much egress will cost to an address? Posted: 26 Dec 2021 11:12 PM PST I'm planning to use Grafana Cloud for logging, and there will be NAT and egress costs for shipping the logs. The log endpoint is logs-prod-004.grafana.net, and it appears to be a GCP HTTP load balancer. My infra is in asia-southeast2 and the endpoint should be in australia-southeast1/2. My question is: how much will the egress cost per GB to that endpoint? I couldn't find an egress-to-load-balancer entry on the networking pricing page. |
Exporting the DHCP and DNS configuration of a Windows Server 2022 machine Posted: 26 Dec 2021 11:09 PM PST Is there a way of exporting the complete DHCP and DNS configuration of a Windows Server 2022 machine, such that it can be migrated painlessly and installed 1:1 onto another server? I mean not only zones, A records, etc., but also forwarders, root hints, the whole big enchilada. |
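For context, the built-in tooling covers most of this; a minimal sketch, assuming the roles are installed locally and that the file paths are placeholders:

    # DHCP: export the full server configuration, including leases
    Export-DhcpServer -File C:\migration\dhcp.xml -Leases

    # DNS: capture server-wide settings (forwarders, root hints, etc.) for review
    Get-DnsServer | Export-Clixml C:\migration\dns-server.xml

    # export a primary zone to a zone file (written under %systemroot%\system32\dns)
    Export-DnsServerZone -Name corp.example.com -FileName corp.example.com.dns

On the target server, Import-DhcpServer restores the DHCP side; the DNS side has no equivalent one-shot import, so AD-integrated zones are usually easier to migrate via replication.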
Tips for managing a mix of trusted and untrusted PCs Posted: 26 Dec 2021 11:07 PM PST There are two buildings connected by an optical fiber, each with 3-10 PCs, in a mix of PCs used in a business context and some in homes. Unfortunately, when the network was designed years ago, no one took steps to keep the two environments separate.

Building A: There are servers, printers, work PCs, guest PCs, and personal smartphones. This building also has the only Internet connection, used by both buildings. This was solved with a firewall (pfSense) separating the office LAN from a network segment with a different access point for smartphones and guest PCs. There are two DHCP servers to keep the two networks separate. I would like guests to be able to use the work printers on the LAN, and I don't know if a rule in the firewall will suffice.

Building B: Here, too, there are printers, work PCs, smartphones, and guest PCs. There are also homes with personal PCs, Xboxes, smart TVs, etc. Only the DHCP server of the LAN in Building A is used, so there are more security problems, and I need to separate the three environments (work, guests, and homes) without modifying the network cabling. I don't know if it is enough to create VLANs; in any case, to reach the Internet or the servers in Building A, the traffic must always pass through the single fiber. It must also be taken into account that we have no management possibilities in the houses. There is a network point, but we don't know how it will be used; therefore the control must take place upstream of the network point. Here, however, I could mount a second pfSense firewall. Thanks in advance for any advice. |
Postfix transport map with failover entries? Posted: 26 Dec 2021 11:57 PM PST Postfix relays a few domains to a fixed IP address, or its failover address: 1.2.3.4 and 11.12.13.14 (failover). These two IP addresses are where the relay should occur. Currently, the transport map is:

    exemple1.com relay:[1.2.3.4]:587
    exemple2.com relay:[1.2.3.4]:587

Is there a way, in case [1.2.3.4] times out, to use the failover automatically, like:

    exemple1.com relay:[1.2.3.4]:587
    exemple2.com relay:[1.2.3.4]:587
    exemple1.com relay:[11.12.13.14]:587
    exemple2.com relay:[11.12.13.14]:587

Would that work? (There is this question, which is a tad different. Adding multiple IPs to /etc/hosts, or DNS if that were possible, would use either IP. In my case the failover should only be used if the main one times out.) |
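For context, duplicate keys in a Postfix lookup table do not act as a fallback chain (only the first match for a key is used), but there is a dedicated parameter for exactly this case; a minimal sketch in main.cf, assuming the fallback should apply to all mail these transport entries relay:

    # main.cf: tried when the transport's primary nexthop is unreachable
    smtp_fallback_relay = [11.12.13.14]:587

Note that stock master.cf files often disable this for the relay service with -o smtp_fallback_relay=, so that override may need adjusting as well.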
Private file storage area settings in Drupal 9.3 ($settings['file_private_path']) Posted: 26 Dec 2021 10:36 PM PST I use Drupal 9.3 and I have created a private folder outside the root directory. I followed the documentation [link] and added the following line to my settings.php file in Drupal:

    $settings['file_private_path'] = '../private';

The private folder exists outside of my web root directory as follows: /var/www/example.com/private, with the web root in /var/www/example.com/html. I am using nginx and I want to make sure that I have properly secured the private folder. To do so, I added this block:

    location ^~ {
        internal;
        alias /var/www/example.com/private;
    }

Is this correct, and have I properly secured the private file/folder? The documentation mentions this: "Note that non-Apache web servers may need additional configuration to secure private file directories." My complete nginx virtual host (i.e., configuration file) is below:

    server {
        root /var/www/example.com/html;
        index index.html index.htm index.nginx-debian.html index.php;
        server_name example.com www.example.com;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location = /favicon.ico { log_not_found off; access_log off; }
        location = /robots.txt { log_not_found off; access_log off; allow all; }

        location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
            expires max;
            log_not_found off;
        }

        # pass PHP scripts to FastCGI server
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        location ~ /\.ht {
            deny all;
        }

        location ^~ {
            internal;
            alias /var/www/example.com/private;
        }

        listen [::]:443 ssl ipv6only=on; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

    server {
        if ($host = example.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80;
        listen [::]:80;
        server_name example.com www.example.com;
        return 404; # managed by Certbot
    }
 |
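Two hedged observations on that block, as a sketch rather than a verdict: nginx's location normally takes a URI after the ^~ modifier (e.g. location ^~ /private/), so as written the block is unlikely to ever match a request; and since /var/www/example.com/private sits outside this server block's root, nginx cannot serve it anyway unless some location explicitly maps to it. The internal/alias pattern would look like this if you did want nginx to deliver those files via X-Accel-Redirect:

    # reachable only via X-Accel-Redirect from the application,
    # never by a client typing the URL directly
    location ^~ /private/ {
        internal;
        alias /var/www/example.com/private/;
    }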
HAProxy APIs - Reload Config Posted: 27 Dec 2021 12:22 AM PST I notice there are two APIs for HAProxy: Data Plane and Runtime. I don't see anything obvious that will just tell the system to reload and parse the .cfg file. Seems like a no-brainer thing to want to do. I have a Let's Encrypt container running in Docker, and I want to, via HTTPS or some other mechanism, tell HAProxy to reload the SSL certs or, in general, load up a changed .cfg file. I'm aware of the individual SSL reload commands, which I will implement if there are no other good solutions. Also, of course, I could extend the HAProxy container to add some kind of custom API to kick it. P.S. I don't have enough traffic to warrant the zero-packet-loss Yelp solution ;-) |
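For what it's worth, the official HAProxy Docker image runs HAProxy in master-worker mode, where a signal to the master triggers a graceful reload; a minimal sketch, assuming the container is named haproxy and that haproxy.cfg plus the renewed certs are bind-mounted in:

    # ask the master process to re-parse the config and roll new workers
    docker kill -s HUP haproxy

A Let's Encrypt renewal hook can run that one-liner after each successful renewal, which avoids adding a custom API to the container.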
Allowing a guest VM to connect to the Internet through the host on ESXi with only one adapter Posted: 26 Dec 2021 09:50 PM PST I am new to virtualized environments. I have a bare-metal server provided by a hosting company which has ESXi 6.5 running on it. It has only one physical adapter, one Management Network, and one VM Network, which has the virtual machine. I want to allow the VM to access the Internet through the host connection. Will that be possible? I am planning to create a private LAN and make the current VM a domain controller. Thanks. |
Why does dig not show the authority section, and how do I make it show the authoritative name servers that hold the DNS query's answer? Posted: 26 Dec 2021 07:45 PM PST I recently started to learn about DNS, and I got stuck using the dig command on Linux. More exactly, I'd like to see the authoritative name servers (their names or IP addresses) that hold the answers to my DNS queries, and I don't know how. As you probably already know, dig's output has five sections: HEADER, QUESTION, ANSWER, AUTHORITY, and ADDITIONAL. The last three contain resource records found in the reply to the DNS request sent by dig. The one that interests me is the AUTHORITY section, which normally should show resource records of type NS (name server) identifying the authoritative name servers from which the answer to the initial query is retrieved. The authoritative servers are of course different from the caching servers that improve efficiency. Now, my problem is that every time I call dig, the answer doesn't contain any AUTHORITY records. It's possible I don't know the proper options, or some other issue I'm not aware of may be occurring. What could be the reasons for not getting any authority records, and what should be done in order to get them? I would include an image of the terminal, but I don't yet have 10 reputation; the question stands regardless. |
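As a point of reference, a recursive resolver answering from cache will often omit the AUTHORITY section, whereas following the delegation or asking for NS records shows the authoritative servers explicitly; two standard invocations:

    # follow the delegation chain from the root, printing each referral's NS records
    dig +trace example.com A

    # ask directly for the zone's authoritative name servers
    dig example.com NS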
Is it possible to configure HTTPS for a computer that obtains a public network IP through a reverse proxy? Posted: 26 Dec 2021 06:05 PM PST Structure diagram: I want to run Nextcloud on my own computer (A), and then use Joplin and other open-source software on my other devices (C) to access (A) over WebDAV, thereby building my own private cloud service. But in the area where I live, most devices do not have a public IP (they are behind NAT) and cannot communicate directly with each other, so I use a VPS (B) located on the Internet running the reverse proxy program frp so that (A) can communicate with (C). However, the protocol used in this process is WebDAV, which is not encrypted by itself, and I want it to be safer. So how can the computer behind the proxy server be set up for HTTPS? ((B) does not serve HTTPS itself.) Can I forward the port required for HTTPS from (B) to (A) so that (A) can terminate HTTPS? The operating system on (A) and (B) is Ubuntu. |
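In principle, yes: frp can forward raw TCP, so the TLS session can terminate on (A) while (B) only relays encrypted bytes. A minimal sketch of the frpc.ini on (A), with placeholder names, assuming (A) already runs an HTTPS server on its local port 443 and frps on (B) listens on port 7000:

    [common]
    server_addr = vps.example.com
    server_port = 7000

    # forward B's public port 443 straight to A's local HTTPS listener
    [nextcloud-https]
    type = tcp
    local_ip = 127.0.0.1
    local_port = 443
    remote_port = 443

With this shape, the certificate for your domain lives on (A), and WebDAV runs inside the TLS tunnel end to end.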
How to find the process which pings 8.8.8.8 on a Synology? [closed] Posted: 26 Dec 2021 05:47 PM PST I just installed a new firewall in my (home) network and noticed that my Synology DSM pings 8.8.8.8 every second. So there seems to be a ping running somewhere, and I can't recall starting one anywhere. Synology has a limited shell; some handy software (e.g. lsof) does not exist, and additional software cannot be easily installed. tcpdump does not show the ICMP packets, and netstat does not help either. I must assume that the machine pinging might be a Docker container. I cannot attach to some of the containers, and the ones I could attach to do not show any sign of pinging (although I do not know whether docker attach would actually show this at all). How would I find out which process/VM is responsible for those pings? I am interested in the strategy for finding such issues. Thanks, Dan |
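One strategy worth sketching for the container theory: rather than docker attach, list each container's processes from the host; this assumes ps exists inside the container images:

    # grep every running container's process list for a ping
    for c in $(docker ps -q); do
        echo "== $c =="
        docker exec "$c" ps aux 2>/dev/null | grep '[p]ing'
    done

If tcpdump shows nothing, it may simply be capturing the wrong interface; capturing on each interface in turn (including the bridge the containers use, e.g. docker0) while filtering on icmp is a useful cross-check.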
POSTFIX / greylist not working Posted: 26 Dec 2021 10:34 PM PST I had a spam attack on my website; someone is using my email address to send spam to everyone, so now I plan to find a way to stop it. I see people rejecting mail with regexes, but I have tons and tons of different email addresses (50,000 users). I want to check whether an email address exists in my database: if it does, I allow the mail through; otherwise, quarantine it. I do not want messages stuck in the mail queue. I have this greylist.pl setup:

    # main.cf
    smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated,
        reject_unauth_destination, check_policy_service unix:/private/greylist

    # master.cf
    greylist unix - n n - - spawn
        user=nobody argv=/usr/bin/perl /tmp/mailrejct.

But when I use it, I get these errors in the maillog:

    Dec 25 09:24:58 intelligent-mahavira postfix/spawn[107258]: warning: command /usr/bin/perl exit status 2
    Dec 25 09:24:58 intelligent-mahavira postfix/smtpd[107253]: warning: premature end-of-input on /private/greylist while reading input attribute name
    Dec 25 09:24:59 intelligent-mahavira postfix/spawn[107258]: warning: command /usr/bin/perl exit status 2
    Dec 25 09:24:59 intelligent-mahavira postfix/smtpd[107253]: warning: premature end-of-input on /private/greylist while reading input attribute name
    Dec 25 09:24:59 intelligent-mahavira postfix/smtpd[107253]: warning: problem talking to server /private/greylist: Connection reset by peer

I replaced smtpd_access_policy with mine; that is one difference. Is anyone expert in this? Who is sending the attribute values, Postfix? How do they get passed? NOT SOLVED. This is only for receiving email, i.e. incoming spam; for outgoing email there are only pattern checks.

    #!/bin/sh

    # Simple shell-based filter. It is meant to be invoked as follows:
    # /path/to/script -f sender recipients...

    # Localize these. The -G option does nothing before Postfix 2.3.
    INSPECT_DIR=/var/spool/filter
    SENDMAIL="/usr/sbin/sendmail -G -i" # NEVER NEVER NEVER use "-t" here.

    # Exit codes from <sysexits.h>
    EX_TEMPFAIL=75
    EX_UNAVAILABLE=69

    # Clean up when done or when aborting.
    trap "rm -f in.$$" 0 1 2 3 15

    # Start processing.
    cd $INSPECT_DIR || {
        echo $INSPECT_DIR does not exist; exit $EX_TEMPFAIL; }

    cat >in.$$ || {
        echo Cannot save mail to file; exit $EX_TEMPFAIL; }

    # Specify your content filter here.
    # filter <in.$$ || {
    #     echo Message content rejected; exit $EX_UNAVAILABLE; }

    $SENDMAIL "$@" <in.$$

    exit $?

Can the above be converted to PHP code? I am wondering, if I send mail from PHP, is it going to come back to the same place?

    exec("/usr/sbin/sendmail $email < /etc/postfix/myfilter/email.txt");

I need to put the email content into email.txt. When I put the whole text in, it messes up the emails; it shows all the content and headers, which I need to clean up:

    From alex@test.com Sun Dec 26 12:31:47 2021
    Received: from webmail.test.com (localhost.localdomain [IPv6:::1])
        by intelligent-mahavira.51-163-215-224.plesk.page (Postfix) with ESMTPSA id B9CFD82DA1
        for <alex3@test.com>; Sun, 26 Dec 2021 12:31:47 +0000 (UTC)
    Authentication-Results: intelligent-mahavira.51-163-215-224.plesk.page;
        spf=pass (sender IP is ::1) smtp.mailfrom=alex@test.com smtp.helo=webmail.test.com
    Received-SPF: pass (intelligent-mahavira.51-163-215-224.plesk.page: connection is authenticated)
    MIME-Version: 1.0
    Date: Sun, 26 Dec 2021 04:31:47 -0800
    From: alex@test.com
    To: alex3 <alex3@test.com>
    Subject: testing filter2
    User-Agent: Roundcube Webmail/1.4.11
    Message-ID: <9f7fe8bf95404cf1ba2deb1a63a2e1ae@test.com>
    X-Sender: alex@test.com
    Content-Type: text/plain; charset=US-ASCII; format=flowed
    Content-Transfer-Encoding: 7bit
    X-PPP-Message-ID: <164052190789.24073.12166249882816501264@intelligent-mahavira.51-163-215-224.plesk.page>
    X-PPP-Vhost: test.com

    sss

I saw this somewhere on Google and will try tomorrow, using a regex to separate the Subject and Content-Type headers from the body:

    To: customer@gmail.com
    Subject: This is an HTML message
    From: developer@yourcompany.com
    Content-Type: text/html; charset="utf8"

    <html>
    <body>
    <div style="background-color: #abcdef; width: 300px; height: 300px;">
    </div>
    You can add any valid email HTML here.
    </body>
    </html>

What I've gathered about the options so far:

- Greylisting: we can give access feedback in the form of "dunno" or "reject optional text".
- Simple filter: no feedback. The filter has to handle sending the email itself, and it's not clear how that is done. I basically capture all of STDIN and send it as an email, but it has all the headers. I don't know how attachments are handled; I need to see what it shows when I send an attachment.
- Advanced filtering looks confusing. I see Plesk email security implements it; when I added it, it added this to main.cf and master.cf:

    smtp inet n - n - - smtpd
        -o content_filter=smtp-amavis:[127.0.0.1]:10024
    localhost:10025 inet n - n - - smtpd
        -o content_filter=
        -o local_recipient_maps=
        -o relay_recipient_maps=
        -o smtpd_delay_reject=no
        -o smtpd_authorized_xforward_hosts=127.0.0.0/8,[::1]/128
        -o smtpd_authorized_xclient_hosts=127.0.0.0/8,[::1]/128
        -o smtpd_client_restrictions=permit_mynetworks,reject
        -o smtpd_helo_restrictions=
        -o smtpd_sender_restrictions=
        -o smtpd_recipient_restrictions=permit_mynetworks,reject
        -o smtpd_data_restrictions=reject_unauth_pipelining
        -o smtpd_end_of_data_restrictions=
        -o smtpd_restriction_classes=
        -o mynetworks=127.0.0.0/8,[::1]/128
        -o smtpd_error_sleep_time=0
        -o smtpd_soft_error_limit=1001
        -o smtpd_hard_error_limit=1000
        -o smtpd_client_connection_count_limit=0
        -o smtpd_client_connection_rate_limit=0
        -o receive_override_options=no_unknown_recipient_checks,no_header_body_checks
        -o local_header_rewrite_clients=
    submission inet n - n - - smtpd
        -o syslog_name=postfix/submission
        -o smtpd_tls_security_level=encrypt
        -o smtpd_sasl_auth_enable=yes
        -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
        -o content_filter=smtp-amavis:[127.0.0.1]:10026
    pickup unix n - n 60 1 pickup
        -o content_filter=smtp-amavis:[127.0.0.1]:10026
 |
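On the question above of who sends the attribute values: with check_policy_service, Postfix's smtpd writes one request per message as name=value lines (request, sender, recipient, client_address, and so on) to the policy service's socket, ends the request with an empty line, and expects a single action= line back. A minimal sketch in Perl, with the database lookup left as a placeholder:

    #!/usr/bin/perl
    # Minimal Postfix policy service for check_policy_service via spawn(8).
    # Reads name=value attributes until a blank line, then replies with an action.
    use strict;
    use warnings;
    $| = 1;    # unbuffered stdout is required when run under spawn(8)

    # Placeholder: replace with a real database lookup of the sender
    sub allowed { my ($sender) = @_; return 1; }

    my %attr;
    while (defined(my $line = <STDIN>)) {
        chomp $line;
        if ($line =~ /^(\w+)=(.*)$/) { $attr{$1} = $2; next; }
        # blank line marks the end of one request: answer and reset
        if (allowed($attr{sender} // '')) {
            print "action=dunno\n\n";
        } else {
            print "action=reject sender not in database\n\n";
        }
        %attr = ();
    }

The "premature end-of-input" warnings in the maillog are consistent with the Perl script dying before writing a reply (exit status 2), so running it by hand and feeding it a sample request is a reasonable first debugging step.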
Completely disable InnoDB in MariaDB 10.6 Posted: 26 Dec 2021 11:28 PM PST Not really sure whether to ask this here or at dba.stackexchange.com, but I figured here might be more appropriate since it's a server config issue. I have a brand-new MariaDB 10.6.5 install on Ubuntu 20.04. I've used MariaDB's own repositories, found at https://mariadb.org/download/?t=repo-config&d=20.04+%22focal%22&v=10.6&r_m=one Following the instructions from that link, everything installs fine, but things start to get messy after that. I've got another machine with Ubuntu 20.04 and MariaDB 10.3.32 where things are working fine, thanks to running these queries after install:

    ALTER TABLE mysql.innodb_index_stats CHANGE table_name table_name VARCHAR(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL;
    SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=MyISAM;') FROM information_schema.tables WHERE table_schema='mysql' AND engine='InnoDB';

and then adding the following lines to the [mysqld] section of /etc/mysql/my.cnf:

    skip-innodb
    default-storage-engine=MyISAM

I've tried doing the same for the 10.6 install (by adding the two lines inside the [mysqld] section in /etc/mysql/mariadb.conf.d/50-server.cnf), but for some reason InnoDB is still active, and still the default after I issue SHOW ENGINES; in the console. Another thing I've noticed is the new sys database, which also uses InnoDB, and I can't find info anywhere on how to get rid of it; after deleting the database I had some problems and had to reinstall everything. The reason I need this is mostly that I use cheap machines with 1 GB of RAM or less, where MyISAM is still king.

EDIT: Here's the output of my_print_defaults --mysqld:

    --defaults-file=/etc/mysql/mariadb.cnf
    --socket=/run/mysqld/mysqld.sock
    --skip-innodb
    --default-storage-engine=MyISAM
    --user=mysql
    --pid-file=/run/mysqld/mysqld.pid
    --basedir=/usr
    --datadir=/var/lib/mysql
    --tmpdir=/tmp
    --lc-messages-dir=/usr/share/mysql
    --lc-messages=en_US
    --skip-external-locking
    --bind-address=127.0.0.1
    --expire_logs_days=10
    --character-set-server=utf8mb4
    --collation-server=utf8mb4_general_ci

Here's systemctl status mariadb.service:

    mariadb.service - MariaDB 10.6.5 database server
         Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/mariadb.service.d
                 └─migrated-from-my.cnf-settings.conf
         Active: active (running) since Mon 2021-12-27 08:13:02; 3min 57s ago
           Docs: man:mariadbd(8)
                 https://mariadb.com/kb/en/library/systemd/
        Process: 2884 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
        Process: 2889 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
        Process: 2899 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (c>
        Process: 2924 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
        Process: 2926 ExecStartPost=/etc/mysql/debian-start (code=exited, status=0/SUCCESS)
       Main PID: 2907 (mariadbd)
         Status: "Taking your SQL requests now..."
          Tasks: 10 (limit: 9451)
         Memory: 63.1M
         CGroup: /system.slice/mariadb.service
                 └─2907 /usr/sbin/mariadbd

    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 3 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 4 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: Looking for 'mysql' as: /usr/bin/mysql
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: Version check failed. Got the following error when calling the 'mysql' command line client
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: FATAL ERROR: Upgrade failed
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2941]: Checking for insecure root accounts.
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 5 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
    Dec 27 08:13:02 testhost debian-start[2944]: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

And here's journalctl -u mariadb.service -n 30:

    Dec 27 08:13:01 testhost mariadbd[2907]: 2021-12-27  8:13:01 0 [Note] /usr/sbin/mariadbd (server 10.6.5-MariaDB-1:10.6.5+maria~focal) starting as process 2907 ...
    Dec 27 08:13:01 testhost mariadbd[2907]: 2021-12-27  8:13:01 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
    Dec 27 08:13:01 testhost mariadbd[2907]: 2021-12-27  8:13:01 0 [Note] InnoDB: Number of pools: 1
    Dec 27 08:13:01 testhost mariadbd[2907]: 2021-12-27  8:13:01 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: Using Linux native AIO
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: Completed initialization of buffer pool
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: 128 rollback segments are active.
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: Creating shared tablespace for temporary tables
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: 10.6.5 started; log sequence number 33062; transaction id 4
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] Plugin 'FEEDBACK' is disabled.
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] InnoDB: Buffer pool(s) load completed at 211227  8:13:02
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] Server socket created on IP: '0.0.0.0'.
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] Server socket created on IP: '::'.
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 0 [Note] /usr/sbin/mariadbd: ready for connections.
    Dec 27 08:13:02 testhost mariadbd[2907]: Version: '10.6.5-MariaDB-1:10.6.5+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
    Dec 27 08:13:02 testhost systemd[1]: Started MariaDB 10.6.5 database server.
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 3 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 4 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: Looking for 'mysql' as: /usr/bin/mysql
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: Version check failed. Got the following error when calling the 'mysql' command line client
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2931]: FATAL ERROR: Upgrade failed
    Dec 27 08:13:02 testhost /etc/mysql/debian-start[2941]: Checking for insecure root accounts.
    Dec 27 08:13:02 testhost mariadbd[2907]: 2021-12-27  8:13:02 5 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
    Dec 27 08:13:02 testhost debian-start[2944]: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
 |
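As a quick sanity check, it's worth confirming from the server itself which settings actually took effect, since an option file that isn't read, or one overridden by a later file, is a common culprit; two standard queries:

    -- which engine new tables get by default
    SELECT @@default_storage_engine;

    -- whether the InnoDB plugin is loaded at all
    SELECT ENGINE, SUPPORT FROM information_schema.ENGINES WHERE ENGINE = 'InnoDB';

If those still report InnoDB after a restart, mariadbd is reading its options from a different file or section than the one edited (the my_print_defaults output above suggests the lines are at least present in the option files).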
Postfix after-queue content filter: full example Posted: 26 Dec 2021 08:00 PM PST I'm trying the simple content filter example. I followed the steps mentioned here: http://www.postfix.org/FILTER_README.html#simple_filter. But at line 24 of the content filter, "that can be a simple shell script like this", you need to specify your own content filter. My question is: is there any full example with a content filter (line 24) that I can work with?

     1 #!/bin/sh
     2
     3 # Simple shell-based filter. It is meant to be invoked as follows:
     4 # /path/to/script -f sender recipients...
     5
     6 # Localize these. The -G option does nothing before Postfix 2.3.
     7 INSPECT_DIR=/var/spool/filter
     8 SENDMAIL="/usr/sbin/sendmail -G -i" # NEVER NEVER NEVER use "-t" here.
     9
    10 # Exit codes from <sysexits.h>
    11 EX_TEMPFAIL=75
    12 EX_UNAVAILABLE=69
    13
    14 # Clean up when done or when aborting.
    15 trap "rm -f in.$$" 0 1 2 3 15
    16
    17 # Start processing.
    18 cd $INSPECT_DIR || {
    19     echo $INSPECT_DIR does not exist; exit $EX_TEMPFAIL; }
    20
    21 cat >in.$$ || {
    22     echo Cannot save mail to file; exit $EX_TEMPFAIL; }
    23
    24 # Specify your content filter here.
    25 # filter <in.$$ || {
    26 #     echo Message content rejected; exit $EX_UNAVAILABLE; }
    27
    28 $SENDMAIL "$@" <in.$$
    29
    30 exit $?
 |
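As one concrete possibility for line 24, offered as a sketch rather than the canonical filter: any command that reads the message on stdin, writes it (possibly modified) to stdout, and exits non-zero to reject will do. For instance, assuming SpamAssassin's spamc client is installed and spamd is running:

    # replaces lines 24-26: spamc tags the message and, with -E,
    # exits non-zero when it classifies the message as spam
    spamc -E <in.$$ >out.$$ || {
        echo Message content rejected; exit $EX_UNAVAILABLE; }
    mv out.$$ in.$$

Note the trap at line 15 only removes in.$$, so you would extend it to clean up out.$$ as well.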
GET variables become inaccessible after .htaccess rewrite Posted: 26 Dec 2021 07:08 PM PST I'm trying to work with SEO-friendly URLs and as a result am using .htaccess to redirect to a more friendly destination. However, when I do these redirects, I am no longer able to access the GET variables from the new URL, even though I can get them when I go to the old one. I'm specifically looking at the "writer" line, although I've included the rest for context. Any idea how I can fix this issue?

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    RewriteCond %{REQUEST_FILENAME}.php -f
    RewriteRule !.*\.php$ %{REQUEST_FILENAME}.php [QSA,L]

    RewriteRule ^writer/([0-9a-zA-z_-]+) writer?user=$1 [QSA,L]
    RewriteRule ^article/([0-9a-zA-z_-]+)/([0-9a-zA-z_-]+) article?id=$1&title=$2 [QSA,L]
 |
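One detail worth flagging, as an observation rather than a confirmed diagnosis of the missing GET variables: the character class [0-9a-zA-z_-] almost certainly means [0-9a-zA-Z_-]. In ASCII, the A-z range also matches the characters between Z and a (brackets, backslash, caret, backtick, underscore). A corrected sketch of the writer rule:

    # A-Z rather than A-z; QSA appends the original query string
    RewriteRule ^writer/([0-9a-zA-Z_-]+) writer?user=$1 [QSA,L]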
Nginx proxy to AWS ELB not passing HTTPS protocol to backend instances Posted: 26 Dec 2021 11:01 PM PST This is my first ever question, so please go easy on me! I'm trying to set up an nginx proxy server to auto-generate SSL certificates using OpenResty/Lua and Let's Encrypt, within a multi-tenant SaaS platform. The proxy server is running and certificates are being issued fine. The nginx config (via OpenResty) is passing off requests to my AWS Elastic (Classic) Load Balancer. The problem is that the instances behind my ELB do not seem to be receiving the HTTPS protocol, so the links in my websites' navigation, etc. are all HTTP and not HTTPS. For example, loading https://www.domain.com works, but clicking a link in the navigation shows http://www.domain.com/page.html. Here is my OpenResty/nginx config on the proxy:

    http {
        lua_shared_dict auto_ssl 1m;
        lua_shared_dict auto_ssl_settings 64k;
        resolver 8.8.8.8 ipv6=off;

        init_by_lua_block {
            auto_ssl = (require "resty.auto-ssl").new()
            auto_ssl:set("allow_domain", function(domain)
                return true
            end)
            auto_ssl:init()
        }

        init_worker_by_lua_block {
            auto_ssl:init_worker()
        }

        server {
            listen 443 ssl;

            location / {
                proxy_pass http://AWS-ELB-URL-HERE;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
            }

            ssl_certificate_by_lua_block {
                auto_ssl:ssl_certificate()
            }

            ssl_certificate /etc/ssl/resty-auto-ssl-fallback.crt;
            ssl_certificate_key /etc/ssl/resty-auto-ssl-fallback.key;
        }

        server {
            listen 80;

            location /.well-known/acme-challenge/ {
                content_by_lua_block {
                    auto_ssl:challenge_server()
                }
            }
        }

        server {
            listen 127.0.0.1:8999;
            client_body_buffer_size 128k;
            client_max_body_size 128k;

            location / {
                content_by_lua_block {
                    auto_ssl:hook_server()
                }
            }
        }
    }

In an attempt to determine whether the issue is with my Rails app, I changed the nginx config to point directly to the instance IP address instead of the ELB. In doing so, all links are HTTPS(!), which is what I want! So at this point, I believe the problem is either a) my nginx config isn't passing the protocol properly, or b) my ELB is not passing the protocol to the backend instance. I'm inclined to think that the ELB is the culprit, since everything works as expected when pointing the proxy to the instance IP directly. So I have started looking at the ELB configuration and listeners, but have not yet found a configuration that works. Here's what I have now; I have also tried changing it to: Load Balancer Protocol: HTTPS (Secure HTTP), Load Balancer Port: 443, Instance Protocol: HTTP, Instance Port: 80. But that didn't work either, and the links are still HTTP. I am now just guessing at what to do with regard to the listeners and ports, trying each configuration to see if it works or not. So far nothing. Does anyone have any insight into what the issue could be and how to fix it? TIA! |
Gluster and "failed to fetch volume file" Posted: 26 Dec 2021 11:01 PM PST I recently upgraded one of my Gluster clients to a Debian Stretch based system and am no longer able to mount any Gluster volumes from it. My Gluster server runs 3.4.2 on Ubuntu 14.04; the Stretch system is running some flavor of 3.8.x. The error I get is:

    0-mgmt: failed to fetch volume file (key:/sata_temp)

Is this due to version incompatibility? After reinstalling, the client is still unable to mount volume ssd_temp. This looks like a blocked port perhaps, as mentioned by @Spooler:

(on client)

    # mount -t glusterfs 172.22.24.5:/ssd_temp ssd_temp/
    Mount failed. Please check the log file for more details.

(on server)

    # gluster volume status ssd_temp
    Status of volume: ssd_temp
    Gluster process                         Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick 172.22.24.5:/mnt/ssd_temp/brick   49163   Y       2936
    NFS Server on localhost                 2049    Y       2949

    There are no active volume tasks

    # tail /var/log/glusterfs/bricks/mnt-ssd_temp-brick.log
    [2018-06-14 18:22:29.691196] E [rpcsvc.c:195:rpcsvc_program_actor] 0-rpc-service: RPC Program procedure not available for procedure 45 in GlusterFS 3.3
    [2018-06-14 18:22:29.691236] E [rpcsvc.c:450:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully

    # tail /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    [2018-06-14 18:32:12.197131] E [rpcsvc.c:521:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
 |
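Given that last glusterd log line, one avenue worth sketching, with the caveat that it assumes the 3.4-to-3.8 mismatch is making the newer client connect from an unprivileged source port: allow insecure ports on the server side, using the option names from the Gluster 3.x documentation:

    # per-volume, on the server
    gluster volume set ssd_temp server.allow-insecure on

    # and in /etc/glusterfs/glusterd.vol on the server, then restart glusterd:
    option rpc-auth-allow-insecure on

That said, a four-minor-version gap between client and server is likely to keep producing RPC mismatches like the "procedure not available" error above, so matching versions is the more reliable fix.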
Encrypting existing blobs in a classic Azure Storage account Posted: 26 Dec 2021 11:34 PM PST We have a few classic storage accounts in Azure with a number of files in blob storage that are not encrypted, as they were uploaded before encryption was on by default for all uploads. Per https://docs.microsoft.com/en-us/azure/storage/common/storage-service-encryption: "Storage Service Encryption is enabled by default for all storage accounts--classic and Resource Manager; any existing files in the storage account created before encryption was enabled will retroactively get encrypted by a background encryption process." However, we are not seeing any of the old blobs be retroactively encrypted. Does the retroactive encryption of existing blobs only take place on ARM storage accounts? If so, would migrating the storage accounts as described at https://docs.microsoft.com/en-us/azure/virtual-machines/windows/migration-classic-resource-manager-ps cause the old blobs to start encrypting in the background? |
MySQL cannot start: "Could not find a valid tablespace file for" Posted: 26 Dec 2021 06:04 PM PST My MySQL service is currently unable to start, with the error:

    InnoDB: Could not find a valid tablespace file for 'wowcher/temp_import'.

I think I know what the issue is. I have a PHP script which imports CSV files into a temporary table before splitting the information out to the relevant tables. It would then wipe the temporary table at the end of the script; however, for some reason it has been erroring out before completing and wiping the table, so I'm pretty sure the temp table is huge right now. How can I wipe it when I cannot start the MySQL service? Or is it something else that is causing the problem? |
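A standard escape hatch for this class of startup failure, hedged because it depends on how damaged the tablespace is: start InnoDB in forced-recovery mode just long enough to remove the broken table, then take the setting back out:

    # /etc/mysql/my.cnf, [mysqld] section; start at 1 and raise only if needed
    innodb_force_recovery = 1

Once the server starts, run DROP TABLE wowcher.temp_import; then remove the innodb_force_recovery line and restart MySQL normally.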
How to configure an alias in nginx Posted: 26 Dec 2021 10:05 PM PST I'm switching from Apache to nginx and I'm not sure how I would do the following in nginx:

    <VirtualHost *:80>
        ServerName example.com
        ProxyRequests On
        Alias /faq /var/www/http
        <Directory /var/www/http/>
            Options Indexes FollowSymLinks
            AllowOverride ALL
            Require all granted
        </Directory>
        ProxyPassMatch ^/faq !
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
        ErrorLog ${APACHE_LOG_DIR}/http.log
        CustomLog ${APACHE_LOG_DIR}/http.log combined

I currently have this, but I keep getting a 404 error:

    server {
        listen 80;
        server_name example.com;
        client_max_body_size 30M;

        location / {
            proxy_pass http://localhost:8080/;
            include /etc/nginx/proxy_params;
        }

        location /faq/ {
            proxy_redirect off;
            alias /var/www/http;
            index index.php;
            if (!-e $request_filename) {
                rewrite ^/(.*)$ /index.php?q=$1 last;
            }
        }

        location ~ /faq\.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            # With php5-cgi alone:
            # fastcgi_pass 127.0.0.1:9000;
            # With php5-fpm:
            fastcgi_pass unix:/var/run/php7.0-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    }
 |
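One well-known nginx gotcha that may explain the 404, offered as a sketch rather than a confirmed fix: with a location ending in a slash, alias should end in a slash too, because alias replaces the matched location prefix literally:

    # without the trailing slash, /faq/index.php maps to /var/www/httpindex.php
    location /faq/ {
        alias /var/www/http/;
        index index.php;
    }

Separately, the regex location ~ /faq\.php$ only matches URLs ending in faq.php itself, so PHP files under /faq/ may never reach the FastCGI block; that is worth checking as well.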
SCCM - Find when an application deployed to a specific computer Posted: 26 Dec 2021 08:04 PM PST Is it possible to find when SCCM ran a deployment against a specific client PC, from the server? (CAS, primary, or even secondary will do, so long as I can script through to the data starting with only the PC name.) This is an SCCM 2012 R2 environment. I have found the WMI class SMS_AppDeploymentAssetDetails; however, the time returned is the time the deployment was associated with the collection. For example, on a machine task-sequenced only a few days ago:

    AppName        : A Name
    CollectionID   : 101010
    CollectionName : OS-All Windows 7 Workstations
    DTCI           : Numbers (deployment type ID - maybe there is a GUID for a specific deployment instance somewhere?)
    DTModelID      : Numbers
    DTName         : A softwares - Windows Installer (*.msi file)
    MachineName    : CoolComp
    StartTime      : 20140609040200.000000+***
    StatusType     : 1

Note the 2014 start time. The same issue exists on the backend database table vAppDeploymentAssetData. This does at least seem to indicate whether deployment on that specific machine was successful (there is an entry in this table for every single deployment event, per workstation). So, is anyone aware of a way to find this time (so it would look something like this):

    DTCI        : Numbers
    InstallTime : 20160708040200.000000+***
    MachineName : CoolComp

with WQL or SQL (or some other method, maybe SCCM PowerShell cmdlets?) |
PHP5 unmet dependencies when upgrading Posted: 27 Dec 2021 12:05 AM PST Trying to upgrade PHP from 5.3 to 5.6 using the following:

    sudo add-apt-repository ppa:ondrej/php5-5.6
    sudo apt-get update
    sudo apt-get install php5-common

When I try this, I run into this error and am not sure how to resolve it:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     libapache2-mod-php5 : Depends: apache2-api-20120211
                           Depends: apache2 (>= 2.4)
                           Depends: php5-json
     php5-cli : Depends: php5-common (= 5.3.10-1ubuntu3.21) but 5.6.18+dfsg-1+deb.sury.org~precise+1 is to be installed
     php5-common : Breaks: php5-xdebug (< 2.2.2) but 2.1.0-1 is to be installed
     php5-curl : Depends: php5-common (= 5.3.10-1ubuntu3.21) but 5.6.18+dfsg-1+deb.sury.org~precise+1 is to be installed
     php5-gd : Depends: php5-common (= 5.3.10-1ubuntu3.21) but 5.6.18+dfsg-1+deb.sury.org~precise+1 is to be installed
     php5-intl : Depends: php5-common (= 5.3.10-1ubuntu3.21) but 5.6.18+dfsg-1+deb.sury.org~precise+1 is to be installed
     php5-mysqlnd : Depends: php5-common (= 5.3.10-1ubuntu3.21) but 5.6.18+dfsg-1+deb.sury.org~precise+1 is to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

If I precede the upgrade with sudo apt-get install python-software-properties, then I get this error:

    The following packages have unmet dependencies:
     libapache2-mod-php5 : Depends: apache2-api-20120211
                           Depends: apache2 (>= 2.4)
                           Depends: php5-common (= 5.6.18+dfsg-1+deb.sury.org~precise+1) but 5.3.10-1ubuntu3.21 is to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

If I try sudo apt-get install apache2, I get this error:

    The following packages have unmet dependencies:
     apache2 : Depends: apache2-bin (= 2.4.16-4+deb.sury.org~precise+4) but it is not going to be installed
               Depends: apache2-utils (>= 2.4)
               Depends: apache2-data (= 2.4.16-4+deb.sury.org~precise+4) but it is not going to be installed
               Conflicts: apache2.2-bin but 2.2.22-1ubuntu1.10 is to be installed
               Conflicts: apache2.2-common but 2.2.22-1ubuntu1.10 is to be installed
     libapache2-mod-php5 : Depends: apache2-api-20120211
                           Depends: php5-common (= 5.6.18+dfsg-1+deb.sury.org~precise+1) but 5.3.10-1ubuntu3.21 is to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

I tried doing sudo apt-get -f install as suggested, but it does not resolve the problem. Instead, I get this:

    The following extra packages will be installed:
      apache2 apache2-bin apache2-data apache2-utils dh-php5 libedit2 libgd3 libmemcached11 libvpx1 php5-cli php5-common php5-curl php5-dev php5-gd php5-intl php5-json php5-mcrypt php5-memcached php5-mysqlnd php5-xdebug pkg-php-tools
    Suggested packages:
      www-browser apache2-doc apache2-suexec-pristine apache2-suexec-custom libgd-tools php5-user-cache dh-make
    Recommended packages:
      php5-readline
    The following packages will be REMOVED:
      apache2-mpm-prefork apache2.2-bin apache2.2-common
    The following NEW packages will be installed:
      apache2-bin apache2-data dh-php5 libgd3 libmemcached11 libvpx1 php5-json pkg-php-tools
    The following packages will be upgraded:
      apache2 apache2-utils libedit2 php5-cli php5-common php5-curl php5-dev php5-gd php5-intl php5-mcrypt php5-memcached php5-mysqlnd php5-xdebug
    13 upgraded, 8 newly installed, 3 to remove and 10 not upgraded.
    8 not fully installed or removed.
    Need to get 0 B/7,060 kB of archives.
    After this operation, 3,685 kB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    (Reading database ... 149017 files and directories currently installed.)
    Unpacking apache2-bin (from .../apache2-bin_2.4.16-4+deb.sury.org~precise+4_amd64.deb) ...
    dpkg: error processing /var/cache/apt/archives/apache2-bin_2.4.16-4+deb.sury.org~precise+4_amd64.deb (--unpack):
     trying to overwrite '/usr/share/man/man8/apache2.8.gz', which is also in package apache2.2-common 2.2.22-1ubuntu1.10
    dpkg-deb (subprocess): subprocess data was killed by signal (Broken pipe)
    dpkg-deb: error: subprocess <decompress> returned error exit status 2
    Processing triggers for man-db ...
    Errors were encountered while processing:
     /var/cache/apt/archives/apache2-bin_2.4.16-4+deb.sury.org~precise+4_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

I also tried just sudo apt-get install php5, but get essentially the same error. What is the correct way to resolve these dependency issues? |
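A hedged sketch of one way out, keyed off the dpkg "trying to overwrite" error above: the old Apache 2.2 packages are blocking the unpack of apache2-bin 2.4. Removing them first and then letting apt finish often unsticks this; back up /etc/apache2 first, since the 2.2 packages are being discarded:

    sudo apt-get remove apache2.2-common apache2.2-bin apache2-mpm-prefork
    sudo apt-get -f install
    sudo apt-get install php5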
Cisco SF500-24P image upgrade to 14088 fails Posted: 26 Dec 2021 09:02 PM PST I am trying to upgrade the system to 1.4.0.88 and it fails with the error:

    Status: Copy failed
    Error Message: Copy: SW code file is over sized

I am using HTTP for the upgrade. Any suggestions? |
Identifying mailboxes with corruption in Exchange 2013 Posted: 26 Dec 2021 10:05 PM PST We're preparing to rid ourselves of on-premises Exchange and are migrating from an outsourced hosting provider to Office 365. We're running a fully patched version of Exchange 2013, atop Server 2012, also fully patched. During the migration of test mailboxes we found that many mailboxes across multiple databases are corrupted. More information about the cause can be found HERE. Essentially: the SAN which stores our Exchange VM is oversubscribed and routinely has I/O waits in excess of 5 seconds, and sustained read speeds rarely pass 500 KBps. The slow speeds alone would waste significant time during the migration, but when mailboxes with corruption are encountered, migrating 1 GB of data goes from a 2-3 hour affair to a 10-20 hour one. Each of the mailboxes that have issues (that I've found so far) gives messages similar to the below when checked with Get-MailboxStatistics:

    WARNING: The object <GUID> has been corrupted, and it's in an inconsistent state. The following validation errors happened:
    WARNING: Cannot extract the property value of 'DeletedItemCount'. Source: PropTag(DeletedMsgCount), PropType(Int), RawValue(-2), RawValueType(System.Int32). Target: Type(System.Nullable`1[System.UInt32]), IsMultiValued(False). Error Details: <n/a>

Running New-MailboxRepairRequest against all databases identified some corruption and repaired it, but not all of it. I can't seem to find a way to get Get-MailboxStatistics to log the fact that there is something broken in each of these mailboxes, though I'm sure there is one. Moving mailboxes from one database to another seems to fix the problem. We've got ~50 DBs, and about 50 users per DB, so going through this manually is out. What I want to do is, via PowerShell (excuse the lazy pseudocode, please):

    foreach ($mailbox in $database) {
        if (get-mailboxstatistics -eq $corrupted) {
            move $mailbox to $otherdb
            wait
            move $mailbox back to $database
        }
    }

However, I can't figure out how to catch the "Warning: this is broken" text from Get-MailboxStatistics, and the resultant object doesn't have anything in it that shows it's broken. Do I just need to catch the warning and assume everything that complains about inconsistencies can be fixed this way, then go back and check the list of mailboxes that actually have an issue after they've been moved out and back, to see if things are still broken? Is there a better way to do what I need to do? Replacing the SAN is out of the realm of possibilities, as is fixing any other underlying causes. |
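On the specific sub-problem of catching the warning text, PowerShell's common parameters can capture the warning stream without screen-scraping; a sketch, assuming the corruption message is emitted as a normal PowerShell warning:

    # Collect warnings into $w instead of the console; a non-empty $w
    # marks the mailbox as suspect for the move-out-and-back pass.
    $suspect = foreach ($mbx in Get-Mailbox -Database $db) {
        $w = $null
        $null = Get-MailboxStatistics -Identity $mbx -WarningVariable w -WarningAction SilentlyContinue
        if ($w) { $mbx }
    }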
Unable to access 3ware 3DM2 web interface: failed to start listening socket Posted: 26 Dec 2021 08:04 PM PST I am unable to get to the 3ware 3DM2 web interface. When I attempt to execute 3dm2 from the command line, I get the following error: "(0x0C:0x0005): Failed to start listening socket". This was working years ago, but now that I'm trying to access it for my new RAID install, I am unable to access the web interface. The service appears to be running, and I am not using any iptables firewall.

    # netstat -tupl
    Proto Recv-Q Send-Q Local Address  Foreign Address  State   PID/Program
    tcp   0      0      *:888          *:*              LISTEN  5191/3dm2

    # nmap -sT 127.0.0.1
    PORT    STATE SERVICE
    888/tcp open  accessbuilder

I am on 32-bit Debian 6.0.9 (Squeeze), with a 9650SE-4LPML and a 7506-4LP. The 3ware-3dm2- package was installed using aptitude and is at version 9.3.0.4-1duo1. Any help is appreciated; hopefully it is something simple. |
Nagios configuration error Posted: 26 Dec 2021 09:02 PM PST Running:

    /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

does not give any error. The output is:

    Things look okay - No serious problems were detected during the pre-flight check

However, when I try to make some configuration changes through the XI interface, the configuration verification fails:

    Configuration submitted for processing...
    Waiting for configuration verification.......
    Configuration verification failed.

Nagios is still able to monitor my services and hosts, but I can't make any changes to the configuration using the XI interface. I took a look at the configuration snapshots and saw that there were 40-plus errors. I am very puzzled as to why they don't show when I run the sanity check (plus it is able to monitor those hosts that produced the errors). Another thing to note: I am able to restart Nagios. Doesn't this prove that there isn't any error, and thus that it could restart normally? |
Allowing domain users to run WinRM commands Posted: 26 Dec 2021 07:08 PM PST Currently I have AD/Kerberos configured on one EC2 instance (Windows 2008 R2) and have created a couple of users, each of whom has administrator privileges. When I log in as the non-domain Administrator, I can successfully execute WinRM commands. But when I log in as a domain user (who has administrator privileges), I cannot run the WinRM commands:

    C:\Users\domain-username>winrm get winrm/config/service/auth
    WSManFault
        Message = Access is denied.

    Error number:  -2147024891 0x80070005
    Access is denied.

I checked the Group Policy editor for WinRM and did not find anything relevant. I am not sure what I am missing. |
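Two standard things to check, sketched here because neither can be confirmed from the output alone: make sure the command prompt is elevated (UAC gives even administrators a filtered token in a non-elevated shell, which produces exactly this "Access is denied"), and review who is actually granted access to the WinRM service:

    :: from an elevated prompt: opens the permissions dialog for WinRM's root SDDL
    winrm configSDDL default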
How to specify a path in Apache Tomcat Catalina for a folder on a different server? Posted: 27 Dec 2021 12:05 AM PST In Apache Tomcat, under C:\Program Files\Apache Software Foundation\Tomcat 6.0\conf\Catalina\localhost\ we can map folders by specifying them in an XML file like THE_FOLDER.xml, and in that XML we can change the path of THE_FOLDER:

    <Context path="/MF_PHRASES" reloadable="true" docBase="C:\Projects\Customers\test\phrases" workDir="C:\Projects\Customers\test\phrases" />

But it doesn't work for a folder located on another server:

    <Context path="/MF_PHRASES" reloadable="true" docBase="\\192.168.0.100\c$\Projects\Customers\test\phrases" workDir="\\192.168.0.100\c$\Projects\Customers\test\phrases" />

I tried two methods. 1) Mapping through a network drive, which got this error:

    SEVERE: Error starting static Resources
    java.lang.IllegalArgumentException: Document base Z:\Projects\Customers\test\phrases does not exist or is not a readable directory
        at org.apache.naming.resources.FileDirContext.setDocBase(FileDirContext.java:142)
        at org.apache.catalina.core.StandardContext.resourcesStart(StandardContext.java:4319)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4488)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:799)

2) file://192.168.0.100/c$/Projects/Customers/test/phrases, which got this error:

    SEVERE: Error deploying configuration descriptor MF_PHRASES_NEW.xml
    java.io.IOException: The filename, directory name, or volume label syntax is incorrect
        at java.io.WinNTFileSystem.canonicalize0(Native Method)
        at java.io.Win32FileSystem.canonicalize(Unknown Source)
        at java.io.File.getCanonicalPath(Unknown Source)
        at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:658)
        at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:601)

Is there any way to make it work? |
MDT 2010 LiteTouch.vbs fails to launch Posted: 26 Dec 2021 06:04 PM PST I have the custom image captured. I import the image and files, then prepare customsettings.ini and boot.ini to minimize the questions the deployment team will need to answer. Everything works like a charm on virtual machines, but when I map to the scripts folder on the deployment share and double-click litetouch.vbs, it creates the C:\MININT folder, subfolders, and a couple of log files, and then nothing. Here's what the log files look like:

    <![LOG[Property LogPath is now = C:\MININT\SMSOSD\OSDLOGS]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
    <![LOG[Property CleanStart is now = ]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
    <![LOG[Microsoft Deployment Toolkit version: 5.1.1642.01]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
    <![LOG[Property Debug is now = FALSE]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
    <![LOG[GetAllFixedDrives(False)]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">

Has anyone encountered this before, or does anyone know what might be happening (or not happening) and can point me in the right direction? I've only found a couple of other references to this anywhere, and they had no solution or cause listed either. I'm stumped. |