Friday, May 6, 2022

Recent Questions - Server Fault



Postfix LDAP aliases - user unknown

Posted: 06 May 2022 12:07 AM PDT

I have server POSTFIX + DOVECOT with LDAP (ActiveDirectory) authorization.

/etc/postfix/ldap_virtual_mailbox_maps.cf

query_filter = (&(objectClass=person)(mail=%s))
result_filter = %s
result_attribute = mail

/etc/postfix/ldap_virtual_alias_maps.cf

query_filter = (&(objectClass=person)(othermailbox=%s))
result_attribute = othermailbox

/etc/dovecot/dovecot-ldap.conf.ext

pass_filter = (&(objectCategory=Person)(sAMAccountName=%n))
user_filter = (&(objectCategory=Person)(sAMAccountName=%n))

The email specified in the attribute "mail" works (users can send and receive messages). I specify an alias in the attribute "othermailbox" (for example - s15@domain.com).

    # postmap -q s15@domain.com ldap:/etc/postfix/ldap_virtual_alias_maps.cf
    s15@domain.com

When I send an email to this address (s15@domain.com), I get "Undelivered Mail Returned to Sender":

The mail system s15@domain.com: user unknown

What am I doing wrong? I will be grateful for comments.
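One detail worth double-checking (an assumption about the intent, not a confirmed fix): with `result_attribute = othermailbox`, the alias lookup returns the alias itself (s15@domain.com maps to s15@domain.com, as the postmap output shows), so Postfix never learns which real mailbox the alias belongs to. Returning the `mail` attribute instead would map the alias to a deliverable address:

```
# /etc/postfix/ldap_virtual_alias_maps.cf (sketch)
query_filter = (&(objectClass=person)(othermailbox=%s))
# return the primary address, not the alias itself
result_attribute = mail
```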

Nginx + Apache Max web apps possible

Posted: 05 May 2022 11:54 PM PDT

I am going to provide hosting to a lot of companies, but these sites will be short lived, mostly used for temporary site creation purposes.

So I will be hosting almost 200 websites on a single 2 vCPU + 4 GB RAM type of server.

The stack is nginx + apache with PHP + MySQL based web apps.

My question is: how many websites can such a setup handle, theoretically, and does that depend on the size of the server? Let's say I increase the size of the server (CPU & RAM): can I host more websites?
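There is no single theoretical number: the practical ceiling is set by RAM for PHP-FPM workers and MySQL, and by CPU under load, so a bigger server does allow more sites. A back-of-envelope sketch (every figure below is an assumption, not a measurement):

```python
# Back-of-envelope capacity estimate; every figure here is an assumption.
ram_mb = 4096
os_and_mysql_mb = 1536      # reserved for OS, MySQL, nginx and apache overhead
php_worker_mb = 48          # assumed resident size of one PHP-FPM worker

workers = (ram_mb - os_and_mysql_mb) // php_worker_mb
print(f"~{workers} concurrent PHP workers fit in RAM")

# Mostly-idle sites can share a small worker pool.
concurrent_fraction = 0.05  # assume ~5% of sites are active at any moment
sites = int(workers / concurrent_fraction)
print(f"~{sites} mostly-idle sites could share that pool")
```

The point of the arithmetic: the limit scales with RAM and with how idle the sites are, not with a fixed per-server site count.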

Getting could not connect to server: Connection refused Is the server running on host "localhost" and accepting TCP/IP connections on port 5432

Posted: 06 May 2022 12:24 AM PDT

I have Postgres 13 and pgAdmin installed on my Windows machine. It worked for about six months, until I had to restart my machine because of an update. After restarting the computer, I try to connect to Postgres via pgAdmin as always and I get this:

could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432?
could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?

Some answers suggest editing the postgresql.conf file, which (a) I cannot find in my Postgres installation, and (b) I think is not the correct solution for me, because it worked without any problem until I restarted my machine.

Why is that? It feels like Postgres is not starting, although this is a wild guess... I went to Control Panel->Administrative Tools->Services and found out Postgres' service name, which is postgresql-x64-13 - PostgreSQL Server 13. When I try

runas /user:Administrator cmd  

and then

net start postgresql-x64-13 - PostgreSQL Server 13  

I get

System Error 5. Access Denied.

This is driving me nuts...
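For what it's worth, two things that often matter here (assumptions, not verified on this machine): `net start` expects the short service name, quoted if it contains spaces, and "System error 5" frequently just means the prompt is not elevated (`runas /user:Administrator` does not grant an elevated token under UAC; a cmd window opened via "Run as administrator" does). Something like:

```
net start "postgresql-x64-13"
```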

Midnight Commander: hide dir changes from internal history

Posted: 05 May 2022 11:40 PM PDT

I have a small gripe with mc lately; I don't remember seeing this before and it is quite annoying. I usually open mc and then press Ctrl+O to go to the shell, do some stuff, press Ctrl+O to go back, change to another dir, and then when I go back to the shell and press the Up arrow to get to a previous command, I have to get past the dir-change commands that mc has run in the background.

So the question is: can I hide these somehow from mc internal shell history?

Cheers
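One possible workaround, assuming mc's subshell is bash and that the offending entries are plain `cd` commands (both assumptions): tell bash to keep them out of history via `HISTIGNORE` in `~/.bashrc`:

```shell
# Keep "cd ..." commands out of bash history
# (sketch; assumes a bash subshell and that mc issues plain cd commands).
export HISTIGNORE="cd *"
echo "$HISTIGNORE"
```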

Can I add "firewall rules" to an AWS VPN connection?

Posted: 05 May 2022 10:53 PM PDT

I need to connect a couple of customers to an AWS VPC via VPN. Requirements:

  • no customer may send data to (or, ideally, even "see") another customer
  • they should only be able to "see" exactly one internal host, preferably only a certain port range.

My question is: is this possible with an AWS VPN gateway & VPN connection? And if yes, how?

I have read a ton of material and googled quite a lot, and I did not find any way to assign security groups (or anything alike) to an AWS VPN connection. In my book that means any site-to-site connection allows all traffic, which is the opposite of what I need.

can anybody help me here?

thanks in advance for any information! :)

┌───────────────────┬─────────────────┐                    ┌──────────┐
│subnet 1           │         subnet 2│                    │          │
│ ┌──────────┐      │                 │   ┌────────────────┤customer 1│
│ │          │      │must be possible │   │                │          │
│ │server 1  │◄─────┼────┐            │   ▼                └──────────┘
│ │          │      │    │   ip: ┌────┴─────┐ip:                ▲
│ └──────────┘      │    │   int1│    .     │public             │
│                   │    ├───────┤vpn gw    │                   │ must also
│ ┌──────────┐      │    │       │    .     │                 XXX not be
│ │          │      │    │       └────┬─────┘                   │ possible
│ │server 2  │◄─XXX─┤XXX─┘            │   ▲                     │
│ │          │      │must not be      │   │                ┌────┴─────┐
│ └──────────┘      │possible         │   │                │          │
│                   │                 │   └────────────────┤customer 2│
│                   │                 │                    │          │
└───────────────────┴─────────────────┘                    └──────────┘
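One angle worth exploring (a sketch, not a confirmed AWS recipe): security groups cannot be attached to the VPN connection itself, but they can be attached to the instances (server 1 / server 2), keyed on each customer's on-premises source CIDR. With made-up group ID, CIDR and port range:

```
# All IDs, CIDRs and ports below are hypothetical placeholders.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8000-8100 \
    --cidr 203.0.113.0/24
```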

Why can I push an mp4 into an RTMP stream but not an RTSP stream?

Posted: 05 May 2022 10:47 PM PDT

I built a streaming server with nginx, and I can push the mp4 into an RTMP stream with:

ffmpeg -re -i /mnt/hls/m7.mp4  -vcodec libx264 -vprofile baseline -g 30 -acodec aac -strict -2 -f flv rtmp://127.0.0.1/live  

Now I want to push it as RTSP:

ffmpeg -re  -i /mnt/hls/m7.mp4  -f rtsp -rtsp_transport tcp rtsp://127.0.0.1/live  

Error info encountered:

[tcp @ 0x55c7d6a157c0] Connection to tcp://127.0.0.1:554?timeout=0 failed: Connection refused
Could not write header for output file #0 (incorrect codec parameters ?): Connection refused
Error initializing output stream 0:0 --
[aac @ 0x55c7d65b6500] Qavg: nan
[aac @ 0x55c7d65b6500] 1 frames left in the queue on closing
Conversion failed!
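A plausible reading of the error (an assumption from the log, not a verified diagnosis): with `-f rtsp`, ffmpeg acts as an RTSP *client*, so something must already be listening on port 554, and nginx-rtmp speaks only RTMP. With a separate RTSP server running (the server choice and the port 8554 are assumptions), the push would look like:

```
# Assumes an RTSP server is already listening on 127.0.0.1:8554.
ffmpeg -re -i /mnt/hls/m7.mp4 -c:v libx264 -c:a aac \
       -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:8554/live
```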

Bash scripts in Samba domain during login

Posted: 05 May 2022 10:08 PM PDT

I have Samba as a PDC with Linux client machines. I created a logon script and modified the Samba config, but when logging on nothing happens. Please help me; on Google there are only examples of logon scripts with .cmd and .bat. My smb.conf modifications: [general] logon script = script.sh, and in /va/samba I created script.sh.

[netlogon]
path = /var/calculate/server-data/samba/netlogon
browseable = no
read only = yes
root preexec = /usr/lib/calculate/calculate-server/bin/execserv -s --makedir %U

Is this enough?
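For comparison, a minimal Samba logon-script layout (a generic sketch; the path is an assumption). Note that Samba reads `logon script` from the `[global]` section, and the script itself must live inside the netlogon share's path:

```
[global]
    logon script = script.sh

[netlogon]
    path = /var/lib/samba/netlogon
    read only = yes
```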

Why might an end-user IP address be different when accessing different but co-hosted websites?

Posted: 05 May 2022 11:46 PM PDT

I am trying to understand the following observation.

We have two domain names, domain1.example and domain2.example. At a DNS level, there's an A record to an anycast address. Both domains resolve to the same address.

When the same user makes an HTTPS Web request to domain1.example and domain2.example, the user's IP address (per access log) is not consistent across the two domains but is consistent for each domain. In most cases, other users have identical IP addresses in both logs.

From a pure networking point of view, the packets should be routed using the same entry in the routing table since they are going to the same IP address. It seems something higher-level in the OSI stack is domain-aware and able to alter the pathway.

What might be interfering here?

Chromium based browsers do not remember cookies on a domain joined server

Posted: 06 May 2022 12:40 AM PDT

We have a problem where Chromium-based browsers do not remember cookies on a domain-joined server. Can anyone point us in the right direction?

What did we do: we freshly installed the server (2019 or 2022) from the VLSC ISO, created a local user and immediately logged in as that user, set up the proxy in the browser, and checked whether it remembers cookies when we browse to a site. Cookies are stored and we do not get an annoying popup for cookie preferences when we visit a site (this is the wanted behavior).

In AD we create a new Test OU with no policies and inheritance disabled, and put the computer name in the Test OU. We installed the RDS role and made the server a member of AD, placed in the Test OU. We check the server with "gpresult /h /test.html" and RSOP.msc for any loaded policies, and there are none. We log in to the server with a domain account (regular or DA does not matter for the end result) and browse to the same site we used as the local test user. We get a popup to set our cookie preferences. When we close the browser and go back to the site, we get the prompt again. <-- unwanted behavior

When we check if the cookies file is changed this is not the case. It is 20KB and does not change. If we remove the file and start the browser the file is recreated and has the same size but it is never updated after.

Cookie location: C:\Users\<username>\AppData\Local\Microsoft\Edge\User Data\Default\Network

We have tested and can replicate this behavior in Chromium based browsers: Chrome Enterprise, Edge

Firefox is ok.

Can't install squid on debian: Job for squid.service failed because a timeout was exceeded

Posted: 05 May 2022 10:08 PM PDT

I'm trying to install squid but apt can't finish installation, and returns these errors:

After this operation, 7,263 kB of additional disk space will be used.
Selecting previously unselected package squid.
(Reading database ... 156110 files and directories currently installed.)
Preparing to unpack .../squid_3.5.23-5+deb9u1_amd64.deb ...
Unpacking squid (3.5.23-5+deb9u1) ...
Setting up squid (3.5.23-5+deb9u1) ...
Setcap worked! /usr/lib/squid/pinger is not suid!
Job for squid.service failed because a timeout was exceeded.
See "systemctl status squid.service" and "journalctl -xe" for details.
invoke-rc.d: initscript squid, action "restart" failed.
● squid.service - LSB: Squid HTTP Proxy version 3.x
   Loaded: loaded (/etc/init.d/squid; generated; vendor preset: enabled)
   Active: failed (Result: timeout) since Wed 2020-04-22 11:54:36 CDT; 7ms ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1888 ExecStart=/etc/init.d/squid start (code=exited, status=0/SUCCESS)
    Tasks: 24 (limit: 4915)
   CGroup: /system.slice/squid.service
           ├─  338 /usr/sbin/squid -YC -f /etc/squid/squid.conf
           ├─  340 (squid-1) -YC -f /etc/squid/squid.conf
           ├─  341 (logfile-daemon) /var/log/squid/access.log
           ├─  342 (pinger)
           ├─  950 /usr/sbin/squid -YC -f /etc/squid/squid.conf
           ├─  952 (squid-1) -YC -f /etc/squid/squid.conf
           ├─  953 (logfile-daemon) /var/log/squid/access.log
           ├─  954 (pinger)
           ├─ 1926 /usr/sbin/squid -YC -f /etc/squid/squid.conf
           ├─ 1928 (squid-1) -YC -f /etc/squid/squid.conf
           ├─ 1929 (logfile-daemon) /var/log/squid/access.log
           ├─ 1930 (pinger)
           ├─31261 /usr/sbin/squid -YC -f /etc/squid/squid.conf
           ├─31263 (squid-1) -YC -f /etc/squid/squid.conf
           ├─31264 (logfile-daemon) /var/log/squid/access.log
           ├─31265 (pinger)
           ├─31597 /usr/sbin/squid -YC -f /etc/squid/squid.conf
           ├─31599 (squid-1) -YC -f /etc/squid/squid.conf
           ├─31600 (logfile-daemon) /var/log/squid/access.log
           ├─31601 (pinger)
           ├─31949 /usr/sbin/squid -YC -f /etc/squid/squid.conf
           ├─31951 (squid-1) -YC -f /etc/squid/squid.conf
           ├─31952 (logfile-daemon) /var/log/squid/access.log
           └─31953 (pinger)

Apr 22 11:49:36 backgroundserver systemd[1]: Starting LSB: Squid HTTP Proxy version 3.x...
Apr 22 11:49:36 backgroundserver squid[1926]: Squid Parent: will start 1 kids
Apr 22 11:49:36 backgroundserver squid[1888]: Starting Squid HTTP Proxy: squid.
Apr 22 11:49:36 backgroundserver systemd[1]: squid.service: PID file /var/run/squid.pid not readable (yet?) after start: No such file or directory
Apr 22 11:49:36 backgroundserver squid[1926]: Squid Parent: (squid-1) process 1928 started
Apr 22 11:54:36 backgroundserver systemd[1]: squid.service: Start operation timed out. Terminating.
Apr 22 11:54:36 backgroundserver systemd[1]: Failed to start LSB: Squid HTTP Proxy version 3.x.
Apr 22 11:54:36 backgroundserver systemd[1]: squid.service: Unit entered failed state.
Apr 22 11:54:36 backgroundserver systemd[1]: squid.service: Failed with result 'timeout'.
dpkg: error processing package squid (--configure):
 subprocess installed post-installation script returned error exit status 1
Processing triggers for systemd (232-25+deb9u12) ...
Processing triggers for man-db (2.7.6.1-2) ...
Errors were encountered while processing:
 squid
E: Sub-process /usr/bin/dpkg returned an error code (1)

I can't figure out what to do.
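The status output shows several generations of squid processes (PIDs 338, 950, 1926, 31261, ...) alive at once, so a plausible guess (not a confirmed diagnosis) is that stale instances are holding the port and the new one never writes its PID file in time. A sketch of cleaning up and retrying:

```
# Sketch: stop everything squid-related, then let dpkg finish configuring.
systemctl stop squid        # may time out if the unit is already wedged
pkill -f '/usr/sbin/squid'  # kill leftover instances from earlier attempts
dpkg --configure -a         # re-run the failed post-install step
```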

cannot read clients from nas table in freeradius only from clients.conf

Posted: 06 May 2022 01:05 AM PDT

I have installed freeradius on Centos.

The MySQL database is populated with some data for testing, and the freeradiusd.conf and sql.conf are configured.

The RADIUS server is able to connect with the MySQL database, and I can authenticate users from it. I also have a remote RADIUS client configured that is working with my captive portal and RADIUS server, however, it only works when I have the client's IP address configured in /etc/raddb/clients.conf. It does not work using the MySQL 'nas' table.

In other words, freeradius does not seem to be querying my nas table from the MySQL database.

In my /etc/raddb/mods-enabled/sql file I have following:

# Table to keep radius client info
nas_table = "nas"

# Set to 'yes' to read radius clients from the database ('nas' table)
read_clients = yes

In my nas table I have following:

id  nasname      shortname  type   ports  secret      server   community  description
1   xx.xx.xx.xx  NULL       other  NULL   testing123  default  NULL       RADIUS Client

... where xx.xx.xx.xx is the correct IP address of my RADIUS client.

When I try to log in via the captive portal, with freeradius running in debug mode, I get the following:

Wed Aug  8 06:39:11 2018 : Info: Ready to process requests
Wed Aug  8 06:39:19 2018 : Error: Ignoring request to auth address * port 1812 bound to server default from unknown client xx.xx.xx.xx port 55546 proto udp
Wed Aug  8 06:39:19 2018 : Info: Ready to process requests
Wed Aug  8 06:39:21 2018 : Error: Ignoring request to auth address * port 1812 bound to server default from unknown client xx.xx.xx.xx port 55546 proto udp
Wed Aug  8 06:39:21 2018 : Info: Ready to process requests
Wed Aug  8 06:39:24 2018 : Error: Ignoring request to auth address * port 1812 bound to server default from unknown client xx.xx.xx.xx port 55546 proto udp
Wed Aug  8 06:39:24 2018 : Info: Ready to process requests
Wed Aug  8 06:39:28 2018 : Error: Ignoring request to auth address * port 1812 bound to server default from unknown client xx.xx.xx.xx port 55546 proto udp

I noticed in the debug output that the data in the nas table is being loaded:

Wed Aug  8 09:07:58 2018 : Debug: rlm_sql (sql): Reserved connection (0)
Wed Aug  8 09:07:58 2018 : Debug: rlm_sql (sql): Executing select query: SELECT id, nasname, shortname, type, secret, server FROM nas
Wed Aug  8 09:07:58 2018 : Debug: rlm_sql (sql): Adding client xx.xx.xx.xx (xx.xx.xx.xx) to default clients list
Wed Aug  8 09:07:58 2018 : Debug: Adding client xx.xx.xx.xx/32 (xx.xx.xx.xx) to prefix tree 32
Wed Aug  8 09:07:58 2018 : Debug: rlm_sql (xx.xx.xx.xx): Client "xx.xx.xx.xx" (sql) added
Wed Aug  8 09:07:58 2018 : Debug: rlm_sql (sql): Released connection (0)
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "pap" from file /etc/raddb/mods-enabled/pap
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "reject" from file /etc/raddb/mods-enabled/always
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "fail" from file /etc/raddb/mods-enabled/always
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "ok" from file /etc/raddb/mods-enabled/always
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "handled" from file /etc/raddb/mods-enabled/always
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "invalid" from file /etc/raddb/mods-enabled/always
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "userlock" from file /etc/raddb/mods-enabled/always
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "notfound" from file /etc/raddb/mods-enabled/always
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "noop" from file /etc/raddb/mods-enabled/always
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "updated" from file /etc/raddb/mods-enabled/always
Wed Aug  8 09:07:58 2018 : Debug:   # Instantiating module "monthlycounter" from file /etc/raddb/mods-enabled/sqlcounter

Any help would be greatly appreciated! PS: I tried changing the shortname in the nas table to be the same as the IP, but it still didn't work.
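One way to narrow this down (a generic sketch; the user, password and shared secret below are placeholders) is to replay an authentication against the server with radtest while freeradius runs in debug mode, and compare that with a request arriving from the real client address:

```
# Placeholders: replace user, password and secret with real values.
radtest testuser testpassword 127.0.0.1 0 testing123
```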

Using FirstLogonCommands in an Unattend.xml file

Posted: 06 May 2022 12:02 AM PDT

I apologize ahead of time for what is probably a stupid question, but I'm having a hard time figuring this out from the Microsoft Documentation (https://docs.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-shell-setup-firstlogoncommands):

If I populate my Unattend.xml file with the 'FirstLogonCommands' setting at the oobeSystem pass, will the commands run once for the first user that logs into the machine, or will the command run once for each user that logs into the machine?
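For reference, the setting being discussed has this shape in the oobeSystem pass (a schematic fragment; the command line and description are placeholders, not taken from the question):

```
<FirstLogonCommands>
    <SynchronousCommand wcm:action="add">
        <Order>1</Order>
        <CommandLine>cmd /c echo example</CommandLine>
        <Description>Placeholder first-logon command</Description>
    </SynchronousCommand>
</FirstLogonCommands>
```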

Zevenet Load Balancer - SSL Certificate

Posted: 05 May 2022 11:46 PM PDT

I am really new to this so please be nice :)

I am wondering if anyone has any experience with Zevenet Load Balancers.

I have set up the community version (V4). I have 2 web servers with replicated content, and a virtual IP set up in the system which points to the 2 IP addresses of the web servers. The load balancer works correctly for HTTP and HTTPS traffic, but shows a certificate error when reaching the servers via HTTPS.

I want to combat this by adding a certificate to the load balancer. To do this I have followed these steps:

https://www.zevenet.com/knowledge-base/howtos/manage-certificates-with-zen-load-balancer/ (ignoring the bit about purchasing a cert from SofIntel as we use JISC for our Certs)

Basically I created a certificate in the load balancer, generated the CSR, purchased a certificate from JISC by uploading the CSR generated from the load balancer.

I then downloaded the ZIP file from JISC which contains the crt for the domain, as well as the root certificates required also in the ZIP.

I tried uploading the ZIP to the load balancer and it pops up an error showing that the certificate needs to be created in a PEM format.

I then found this here:

https://www.zevenet.com/knowledge-base/howtos/create-certificates-pem-format/

I am not really sure what this is asking me to do... does this mean the original CSR that I generated is irrelevant now? The instructions at the above link say the PEM file needs to contain the following:

-----BEGIN RSA PRIVATE KEY-----
Private Key (without passphrase)
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
Certificate (CN=www.mydomain.com)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Intermediate (Intermediate CA, if exists)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Root (ROOT CA, who signs the Certificate)
-----END CERTIFICATE-----

Essentially I already have the domain certificate, the intermediate and the root, all from JISC. But is there any way I can go about getting the private key from the load balancer, so that I can just create the PEM file manually?

Sorry if this seems like a really stupid question; I'm pretty new to cert stuff and not sure why it won't just let me upload the ZIP file.
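Assuming the private key can be exported from the load balancer (or was kept wherever the CSR was generated), building the PEM is just concatenation in the order shown above. A self-contained sketch, with placeholder files standing in for the real key and certificates:

```shell
# Placeholder files stand in for the real key/cert material.
printf '%s\n' '-----BEGIN RSA PRIVATE KEY-----' 'key' '-----END RSA PRIVATE KEY-----' > key.pem
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'domain' '-----END CERTIFICATE-----' > domain.crt
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'intermediate' '-----END CERTIFICATE-----' > intermediate.crt
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'root' '-----END CERTIFICATE-----' > root.crt

# Order matters: key first, then leaf, intermediate, root.
cat key.pem domain.crt intermediate.crt root.crt > combined.pem
head -n 1 combined.pem   # -> -----BEGIN RSA PRIVATE KEY-----
```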

Kafka compatible with Zookeeper 3.5 feature 'Rebalancing Client Connections'

Posted: 05 May 2022 10:08 PM PDT

In this document https://zookeeper.apache.org/doc/trunk/zookeeperReconfig.html dynamic configuration functionality is described for Zookeeper 3.5.

There are 2 important points in this document:

  1. When changing the dynamic config of one Zookeeper instance, all Zookeeper instances in the ensemble automatically get their configs updated.
  2. Clients of the Zookeeper ensemble can rebalance their connections when the dynamic config gets updated, provided they subscribe to /zookeeper/config in Zookeeper, or alternatively call getConfig, and update their own list of Zookeeper servers by calling updateServerList

This all seems really promising, because at the moment (Kafka 2.12 and Zookeeper 3.4.9), both Zookeeper and Kafka configurations are static, and when a Zookeeper node needs to get replaced, config changes need to be made on each Zookeeper instance in the ensemble and on each Kafka broker, and all participants need to be restarted to reload configs.

My question is: provided that you go with Zookeeper 3.5 and its new dynamic reconfiguration, is there a Kafka version compatible with this, one which will update its own zookeeper.connect configuration when the Zookeeper ensemble gets reconfigured?

NTP offset warning when no NTP server is defined and NTP not running?

Posted: 05 May 2022 11:03 PM PDT

I just want to preface by saying that I am still learning linux and don't have too much experience with it.

My job requires me to monitor an alert system for our clients hosts that are running our product.

I just received an alert regarding NTP that confused me a bit so I was hoping someone here can help me clarify it.

The alert was for the NTP offset of a particular host. That's fine, just go and resync to the NTP server. However, turns out that there is no NTP server defined in the config file and is not even running when I go to stop it:

"ntpd: unrecognised service"  

However, this check has been running for a while and only alerted today.

So my question is, if there is no NTP server defined for that host and NTP is not even running, what triggered the alert? I mean, if the alert is only supposed to go off when the offset is large, how can there be an offset if there is no server defined to be compared to?

The alert even specifies an exact time in seconds, and it's updating. So it's comparing itself to something, right?

I've tried to look online for an answer but nothing is clicking with me. Any help would be great.

Deploying an SSIS Package - TNS:could not resolve the connect identifier specified

Posted: 05 May 2022 11:03 PM PDT

I have an SSIS package which has 4 connections - a WebService, 2 SQL Server connections (across 2 domains), and 1 connection to an Oracle DB.

When the package is run from Visual Studio on a laptop, it runs OK. When I deploy it to the server, I get 'TNS:could not resolve the connect identifier specified'. Results of the tnsping runs:

64 Bit Version of TNSPing

C:\oracle\product\10.2.0\client_2\BIN>tnsping myservice.name

TNS Ping Utility for 64-bit Windows: Version 10.2.0.4.0 - Production on 05-SEP-2013 11:06:10

Copyright (c) 1997, 2007, Oracle.  All rights reserved.

Used parameter files:
c:\oracle\product\10.2.0\client_2\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ww.x.y.zzz)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = myservice.name)))
OK (0 msec)

32 Bit Version of TNSPing

C:\oracle\product\10.2.0\client_1\BIN>tnsping myservice.name

TNS Ping Utility for 32-bit Windows: Version 10.2.0.4.0 - Production on 05-SEP-2013 11:06:20

Copyright (c) 1997, 2007, Oracle.  All rights reserved.

Used parameter files:
c:\oracle\product\10.2.0\client_1\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ww.x.y.zzz)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = myservice.name)))
OK (0 msec)

Now a similar experiment using DTSWizard.exe.

Using Oracle Provider for OLE DB. Get same results with Microsoft OLE DB Provider for Oracle.

Please see this image - I've not enough rep to post the image here..!

(I've also tried using C:\PROGRAM~2\Microsoft SQL Server\110\DTS\Binn\DTSWizard.exe - no joy.)

The laptop is 64-bit, and has Oracle 11.2.0 installed.
The server is 64-bit, and has Oracle 10.2.0 installed.

My understanding is that SQL Server Management Studio is 32-bit only - could this be where the problem lies?

Could anybody suggest where I could go from here? I've tried various connectors, none of which seem to make a blind bit of difference. The only other option I can think of is taking the drivers from the server, putting them on the Laptop, re-configuring the SSIS package to work with those, and then deploy it again -- but, I'm hoping to avoid that, if there's an easier way?

`mysql_upgrade` is failing with no real reason given

Posted: 06 May 2022 01:00 AM PDT

I'm upgrading from MySQL 5.1 to 5.5, running mysql_upgrade and getting this output:

# mysql_upgrade
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed

Any ideas on where to look for what's happening (or, not happening?) so I can fix whatever is wrong and actually run mysql_upgrade?

Thanks!

More output:

# mysql_upgrade --verbose
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed

# mysql_upgrade --debug-check --debug-info
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed

# mysql_upgrade --debug-info
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed

User time 0.00, System time 0.00
Maximum resident set size 1260, Integral resident set size 0
Non-physical pagefaults 447, Physical pagefaults 0, Swaps 0
Blocks in 0 out 16, Messages in 0 out 0, Signals 0
Voluntary context switches 9, Involuntary context switches 5

# mysql_upgrade --debug-check
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed

After shutting down mysqld --skip-grant-tables via mysqladmin shutdown and restarting mysql via service mysql start, the error log loops through this set of errors over and over:

130730 21:03:27 [Note] Plugin 'FEDERATED' is disabled.
/usr/sbin/mysqld: Table 'mysql.plugin' doesn't exist
130730 21:03:27 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
130730 21:03:27 InnoDB: The InnoDB memory heap is disabled
130730 21:03:27 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130730 21:03:27 InnoDB: Compressed tables use zlib 1.2.3.4
130730 21:03:27 InnoDB: Initializing buffer pool, size = 20.0G
130730 21:03:29 InnoDB: Completed initialization of buffer pool
130730 21:03:30 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 588190222435
130730 21:03:30  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 588192055067
130730 21:03:30  InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 81298895, file name /var/log/mysql/mysql-bin.006008
130730 21:03:33  InnoDB: Waiting for the background threads to start
130730 21:03:34 InnoDB: 5.5.32 started; log sequence number 588192055067
130730 21:03:34 [Note] Recovering after a crash using /var/log/mysql/mysql-bin
130730 21:03:34 [Note] Starting crash recovery...
130730 21:03:34 [Note] Crash recovery finished.
130730 21:03:34 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
130730 21:03:34 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
130730 21:03:34 [Note] Server socket created on IP: '0.0.0.0'.
130730 21:03:34 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist

MySQL log during start up via mysqld_safe --skip-grant-tables

130730 21:19:36 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
130730 21:19:36 [Note] Plugin 'FEDERATED' is disabled.
130730 21:19:36 InnoDB: The InnoDB memory heap is disabled
130730 21:19:36 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130730 21:19:36 InnoDB: Compressed tables use zlib 1.2.3.4
130730 21:19:37 InnoDB: Initializing buffer pool, size = 20.0G
130730 21:19:39 InnoDB: Completed initialization of buffer pool
130730 21:19:39 InnoDB: highest supported file format is Barracuda.
130730 21:19:42  InnoDB: Warning: allocated tablespace 566, old maximum was 0
130730 21:19:42  InnoDB: Waiting for the background threads to start
130730 21:19:43 InnoDB: 5.5.32 started; log sequence number 588192055067
130730 21:19:43 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
130730 21:19:43 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
130730 21:19:43 [Note] Server socket created on IP: '0.0.0.0'.
130730 21:19:43 [Warning] Can't open and lock time zone table: Table 'mysql.time_zone_leap_second' doesn't exist trying to live without them
130730 21:19:43 [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_current' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'setup_consumers' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'setup_instruments' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'setup_timers' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'performance_timers' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'threads' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_thread_by_event_name' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_instance' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_summary_global_by_event_name' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'file_summary_by_event_name' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'file_summary_by_instance' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'rwlock_instances' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'cond_instances' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure
130730 21:19:43 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.32-0ubuntu0.12.04.1-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)

As I understand it, all the table structure/existence issues (as they relate to MySQL system tables) should be corrected by running mysql_upgrade:
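If mysql_upgrade runs but doesn't clear everything up, it can help to know exactly which tables the log is flagging. A minimal sketch for pulling them out with grep/sed; the inline heredoc stands in for the real error log (the path and file name here are illustrative, not from the question):

```shell
# Extract the performance_schema table names flagged as "wrong structure".
# The heredoc below stands in for /var/log/mysql/error.log (path varies by distro).
cat > error.log <<'EOF'
130730 21:19:43 [ERROR] Native table 'performance_schema'.'threads' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure
130730 21:19:43 [Note] /usr/sbin/mysqld: ready for connections.
EOF

# Keep only the error lines, then strip everything but the table name.
grep "wrong structure" error.log \
  | sed "s/.*'performance_schema'\.'\([^']*\)'.*/\1/"
# prints: threads and mutex_instances, one per line
```

Any table still listed after the upgrade is a candidate for inspecting (or dropping and recreating) by hand.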

Varnish not showing custom headers

Posted: 06 May 2022 12:02 AM PDT

In my Varnish 3 configuration (default.vcl) I configured the following to pass along information via the response headers:

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS";
    }
    set resp.http.X-Cache-Expires = resp.http.Expires;
    set resp.http.X-Test = "LOL";

    # remove Varnish/proxy headers
    remove resp.http.X-Varnish;
    remove resp.http.Via;
    remove resp.http.Age;
    remove resp.http.X-Purge-URL;
    remove resp.http.X-Purge-Host;
    remove resp.http.X-Powered-By;
}

And yet the only thing I can see is

HTTP/1.1 200 OK
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Type: text/html
Content-Length: 8492
Accept-Ranges: bytes
Date: Tue, 05 Feb 2013 10:11:02 GMT
Connection: keep-alive

It doesn't show any headers that we have added inside the vcl_deliver method.

EDIT: This is my vcl_fetch method:

sub vcl_fetch {
    unset beresp.http.Server;
    unset beresp.http.Etag;
    remove req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = req.http.rlnclientipaddr;
    set beresp.http.X-Wut = "YAY";

    if (req.url ~ "^/w00tw00t") {
        error 750 "Moved Temporarily";
    }

    # allow static files to be cached for 7 days
    # with a grace period of 1 day
    if (req.url ~ "\.(png|gif|jpeg|jpg|ico|swf|css|js)$") {
        set beresp.ttl = 7d;
        set beresp.grace = 1d;
        return(deliver);
    }

    # cache everything else for 1 hour
    set beresp.ttl = 1h;

    # grace period of 1 day
    set beresp.grace = 1d;

    return(deliver);
}

Does anyone have an idea how to solve this? NO custom headers are included in the response. As you can see in my vcl_fetch method above, I add several custom response headers, but none of them show up.
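One first step when debugging this kind of problem is to confirm the responses are actually passing through Varnish at all: fetch the headers straight from the backend port and from the port Varnish listens on, and compare. A minimal sketch; the throwaway Python web server and port 8099 below stand in for the real backend, and the commented-out line would be pointed at the actual Varnish host/port:

```shell
# Start a throwaway backend and grab its raw status line with curl.
# If the headers seen through Varnish are identical to the backend's,
# the request may be bypassing Varnish (and vcl_deliver) entirely.
python3 -m http.server 8099 >/dev/null 2>&1 &
backend=$!
sleep 1

curl -sI http://127.0.0.1:8099/ | head -n 1    # backend directly
# curl -sI http://your-varnish-host:80/ | head -n 1   # through Varnish

kill $backend
```

Running `varnishlog` while issuing the request also shows whether `vcl_deliver` is being executed for that URL.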

IIS 7 virtual directory 404 error

Posted: 06 May 2022 01:05 AM PDT

I have set up a virtual directory called application under the default website. Inside it, I have a web application running. When I browse port 80 from IIS and log in, the home page is fine, but when I click through to another menu (subdirectory) I keep getting a 404. I have created the necessary virtual directories and checked permissions on the folder and the app pool. The IIS log shows the following:

2012-09-04 09:54:08 ::1 GET /application/TeamCentral/Common/images/20/h_row.jpg - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 64
2012-09-04 09:54:08 ::1 GET /application/TeamCentral/Images/risk_32x32.png - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 26
2012-09-04 09:54:08 ::1 GET /application/TeamCentral/Survey/Images/survey_32x32.png - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 28
2012-09-04 09:54:08 ::1 GET /application/TeamCentral/common/images/20/logout.gif - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 35
2012-09-04 09:54:08 ::1 GET /application/TeamCentral/favicon.ico - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 200 0 0 17
2012-09-04 09:54:08 ::1 GET /application/images/arrowdown.gif - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 404 0 2 2
*****2012-09-04 09:54:15 ::1 GET /TeamCentral/Auditors/HomePage.aspx - 80 - ::1 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.83+Safari/537.1 404 0 0 2141*****

It's the last line that's the problem.

Templating with Linux in a Shell Script?

Posted: 06 May 2022 12:32 AM PDT

What I want to accomplish is:

1.) Having a config file as a template, with variables like $version and $path (for example, an Apache config)

2.) Having a shell script that "fills in" the variables of the template and writes the generated file to disk.

Is this possible with a shell script? I would be very thankful if you could name some commands/tools with which I can accomplish this, or some good links.
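Two common ways to do exactly this from a shell script are envsubst (from GNU gettext) and plain sed. A minimal sketch; the template file name, the placeholder names, and the values below are made up for illustration:

```shell
# A template with ${version} / ${path} placeholders; the single-quoted
# heredoc delimiter keeps the shell from expanding them here.
cat > apache.conf.tmpl <<'EOF'
ServerRoot "${path}"
# Apache ${version}
EOF

export version="2.4" path="/srv/www"

# Option 1: envsubst substitutes all exported variables (needs gettext):
#   envsubst < apache.conf.tmpl > apache.conf
# Option 2: portable sed, one -e expression per variable:
sed -e "s|\${version}|$version|g" \
    -e "s|\${path}|$path|g" \
    apache.conf.tmpl > apache.conf

cat apache.conf
# ServerRoot "/srv/www"
# # Apache 2.4
```

The sed variant works on any POSIX system but must list each variable explicitly; envsubst replaces every exported variable in one pass, which is convenient but means stray `$` signs in the template also get substituted unless you restrict it to a variable list.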
