Monday, November 8, 2021

Recent Questions - Stack Overflow



Why do I get an SSL error when doing a conda install or simply running an Anaconda virtual environment?

Posted: 08 Nov 2021 08:13 AM PST

I am trying to use Anaconda on my Windows laptop and got the SSL error below. I tried "conda config --set ssl_verify false" and it makes no difference; any help or advice would be appreciated.

$ C:\ProgramData\Anaconda3\Scripts\conda-script.py install -c conda-forge gdal

Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\exceptions.py", line 573, in conda_exception_handler
    return_value = func(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\cli\main.py", line 134, in _main
    exit_code = args.func(args, p)
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\cli\main_install.py", line 80, in execute
    install(args, parser, 'install')
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\cli\install.py", line 222, in install
    unknown=index_args['unknown'], prefix=prefix)
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\index.py", line 92, in get_index
    index = fetch_index(channel_priority_map, use_cache=use_cache)
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\index.py", line 111, in fetch_index
    repodatas = collect_all_repodata(use_cache, tasks)
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\repodata.py", line 74, in collect_all_repodata
    repodatas = _collect_repodatas_serial(use_cache, tasks)
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\repodata.py", line 463, in _collect_repodatas_serial
    for url, schan, pri in tasks]
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\repodata.py", line 463, in <listcomp>
    for url, schan, pri in tasks]
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\repodata.py", line 109, in func
    res = f(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\repodata.py", line 442, in fetch_repodata
    mod_etag_headers.get('_mod'))
  File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\repodata.py", line 138, in fetch_repodata_remote_request
    timeout=timeout)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 501, in get
    return self.request('GET', url, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\adapters.py", line 423, in send
    timeout=timeout
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 594, in urlopen
    chunked=chunked)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 350, in _make_request
    self._validate_conn(conn)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 835, in _validate_conn
    conn.connect()
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\connection.py", line 323, in connect
    ssl_context=context)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\util\ssl_.py", line 308, in ssl_wrap_socket
    context.load_verify_locations(ca_certs, ca_cert_dir)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\contrib\pyopenssl.py", line 392, in load_verify_locations
    self._ctx.load_verify_locations(cafile, capath)
  File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\SSL.py", line 525, in load_verify_locations
    _raise_current_error()
  File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\_util.py", line 48, in exception_from_error_queue
    raise exception_type(errors)
OpenSSL.SSL.Error: [('system library', 'fopen', 'Unknown error'), ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 certificate routines', 'X509_load_cert_crl_file', 'system lib')]
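The OpenSSL error at the bottom ("fopen ... X509_load_cert_crl_file ... system lib") typically means a CA bundle file could not be opened at all, which can happen when ssl_verify points at a certificate path that no longer exists. As a diagnostic sketch (an assumption about the cause, not a confirmed fix), the relevant .condarc entry would look like:

```yaml
# %USERPROFILE%\.condarc -- inspect the current value with:
#   conda config --show ssl_verify
ssl_verify: false        # or the path to an existing CA bundle .pem file
```

Note that `conda config --set ssl_verify false` sets a boolean; if an older entry set ssl_verify to a file path, that file must actually exist for OpenSSL to open it.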

JSON nested values to list through for loops

Posted: 08 Nov 2021 08:12 AM PST

I am trying to create a backup table/list of values based on a JSON file. This file contains column names, some with nested values. I have a for loop that should go through each column name based on an initialized list of variables (which have the same names as the columns in the file), and if they match, change the empty data value [] to the new one found. However, some have nested keys which I cannot access unless I initialize them manually. Is there any way to create a for loop that can go through each key and look for a match?

I had another idea: create a list setting the variables equal to the manual path, while still using the for loop to find the match and add the data, but I am still not sure.

I can create an example table of data:

[user][email][cat@gmail.com], [occupation], [startdate],

variablelist= [email], [occupation],[startdate]

Say this is the data I have in my file. I would like the for loop to go in and look through each column name and the nested key values, then pull out the match.

Sorry if it doesn't make a lot of sense; I am new to using forums and programming!
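For what it's worth, a recursive lookup can walk arbitrarily nested keys without hard-coding each path. A minimal sketch (the record contents and variable names are invented to mirror the example above, not taken from the asker's actual file):

```python
def find_key(obj, target):
    """Recursively search a nested dict/list structure for `target` key."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == target:
                return value
            found = find_key(value, target)
            if found is not None:
                return found
    elif isinstance(obj, list):
        for item in obj:
            found = find_key(item, target)
            if found is not None:
                return found
    return None

record = {"user": {"email": "cat@gmail.com"},
          "occupation": "vet",
          "startdate": "2021-01-01"}
variablelist = ["email", "occupation", "startdate"]

# Build the backup list/table: one recursive lookup per variable name
backup = {name: find_key(record, name) for name in variablelist}
```

The same loop then works whether a column sits at the top level or nested under another key.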

Django REST Framework: return image URL

Posted: 08 Nov 2021 08:12 AM PST

I am trying to upload an image to the API and get the URL of the images back in the response:

Here is my views.py

class ImageViewSet(viewsets.ModelViewSet):
    queryset = edited_image.objects.all()
    serializer_class = ImageSerializer


@api_view(["POST"])
def manipulate_image(request):
    greenChannel, blueChannel = manip(request.data["picture"])
    image = Image.objects.create(
        green=ContentFile(greenChannel.tobytes()),
        blue=ContentFile(blueChannel.tobytes()),
    )
    return Response(data=image, status=status.HTTP_201_CREATED)

TypeError: Object of type Image is not JSON serializable

My model is:

# Create your models here.
class edited_image(models.Model):
    green = models.ImageField(upload_to="green_image", blank=True)
    blue = models.ImageField(upload_to="blue_image", blank=True)
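The error happens because Response(data=image, ...) hands DRF a model instance, and the JSON renderer only understands primitives. The standard fix (hedged: this assumes a conventional ModelSerializer setup) is to pass serializer output instead, e.g. Response(ImageSerializer(image).data, ...). The underlying issue in plain Python, outside Django:

```python
import json

class EditedImage:
    """Stand-in for the Django model instance (hypothetical, for illustration)."""
    def __init__(self, green, blue):
        self.green, self.blue = green, blue

img = EditedImage("green.png", "blue.png")

try:
    json.dumps(img)            # like Response(data=image): no JSON representation
    serializable = True
except TypeError:
    serializable = False

# A plain dict of primitives (what a serializer produces) works fine
payload = json.dumps({"green": img.green, "blue": img.blue})
```

A serializer's job is exactly this conversion: model instance in, dict of primitives (including ImageField URLs) out.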

Subtotal after discount before tax

Posted: 08 Nov 2021 08:12 AM PST

I need to post the generated amount after applying the coupon and before taxes are applied, and show it in the cart page: that is, the subtotal minus the discount value, as indicated in the attached image (total before discount). Thank you.
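For reference, the arithmetic being asked about, as a sketch (the function and variable names are my own, not from any particular cart plugin):

```python
def subtotal_after_discount(subtotal, discount, tax_rate):
    # Apply the coupon before tax, as the question describes
    discounted = subtotal - discount      # the value to display in the cart
    tax = discounted * tax_rate           # tax is computed on the discounted amount
    total = discounted + tax
    return discounted, total

discounted, total = subtotal_after_discount(100.00, 15.00, 0.10)
```

Here `discounted` (85.00) is the "subtotal after discount, before tax" figure to show.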

Centering a submenu in HTML & CSS

Posted: 08 Nov 2021 08:13 AM PST

I am trying to make this menu centred so it doesn't look like this. I tried messing around with flexboxes, but that appears not to change what the menu looks like.

ul.submenu {
  display: none;
  background-color: lightgoldenrodyellow;
}

<ul class="submenu">
  <li><a href="#">Business Information Systems</a></li>
  <li><a href="#">Graphic Communications</a></li>
  <li><a href="#">Interactive Design and Technology</a></li>
</ul>
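One way to center the items, as a sketch (assuming the submenu is meant to be a horizontal, centered row; the selector is the one from the question, and the flex rules only take effect while the submenu is shown rather than display: none):

```css
ul.submenu {
  display: flex;            /* lay the items out in a row */
  justify-content: center;  /* center the row horizontally */
  list-style: none;         /* drop the default bullets and indent */
  margin: 0;
  padding: 0;
  background-color: lightgoldenrodyellow;
}
```

If the default `ul` padding is left in place, the list will look off-center even with flexbox, which may be why earlier flexbox attempts appeared to do nothing.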

SQLSTATE=08001 error trying to connect to db2 remote db node.js

Posted: 08 Nov 2021 08:12 AM PST

I am running my application as I have been for the past year. Today, DB2 decided to give me this error:

=https
2021-11-08T16:07:47.262433+00:00 app[web.1]: [Error: [IBM][CLI Driver] SQL30081N  A communication error has been detected. Communication protocol being used: "TCP/IP".  Communication API being used: "SOCKETS".  Location where the error was detected: "".  Communication function detecting the error: "selectForConnectTimeout".  Protocol specific error code(s): "115", "*", "*".  SQLSTATE=08001
2021-11-08T16:07:47.262443+00:00 app[web.1]: ] {
2021-11-08T16:07:47.262445+00:00 app[web.1]: error: '[node-ibm_db] SQL_ERROR',
2021-11-08T16:07:47.262445+00:00 app[web.1]: sqlcode: -30081,
2021-11-08T16:07:47.262446+00:00 app[web.1]: state: '08001'
2021-11-08T16:07:47.262446+00:00 app[web.1]: }

I am running my login function, whose query looks like this:

var userLogin = "select * from login where USERNAME = ?";
ibmdb.open(ibmdbconnMaster, function (err, conn) {
  if (err) return console.log(err);
  conn.query(userLogin, [inputUsername], function (err, rows) {
    if (err) {
      console.log(err);
    }

Any chance anyone can help?

Multiple cells AutoFilter

Posted: 08 Nov 2021 08:12 AM PST

I have code that selects non-empty cells in column C. Now, if I want to select these cells in my AutoFilter, it only picks the first found value of OutRng. How do I fix this?

Sub SelectNonBlankCells()
    Sheets("Rekenblad").Select
    Dim Rng As Range
    Dim OutRng As Range
    Dim xTitle As String
    SearchCol = "10"
    On Error Resume Next
    xTitle = Range("C:C")
    Set InputRng = Range("C:C")
    For Each Rng In InputRng
        If Not Rng.Value = "" Then
            If OutRng Is Nothing Then
                Set OutRng = Rng
            Else
                Set OutRng = Application.Union(OutRng, Rng)
            End If
        End If
    Next
    If Not (OutRng Is Nothing) Then
        OutRng.Copy
        Sheets("Plakken").Select
        ActiveSheet.Range("$A$1:$K$13").AutoFilter Field:=10, Criteria1:=Array(OutRng) _
            , Operator:=xlFilterValues
    End If
End Sub

Cannot Redirect to the correct page using PHP session

Posted: 08 Nov 2021 08:12 AM PST

In my application I have 4 roles. As per their roles, users can access different pages. Currently I am maintaining 2 session values: $_SESSION['user_id'] and $_SESSION['role_id'].

My problem is: suppose role_id = 1 is Admin and role_id = 2 is Doctor. If I log out from a session as Admin and log in to the site as a Doctor, and then click the back button in the web browser, it will redirect to the Admin page. I don't want it to redirect to the Admin page; I want to remain on the same page.

Here is the code I have used to redirect, but it is not working.

// redirect to main page according to the user role
if (isset($_SESSION['user_id']) && $_SESSION['role_id'] == 1) { // admin
    header("location: index-2.php");
    exit;
}

if (isset($_SESSION['user_id']) && $_SESSION['role_id'] == 2) { // doctor
    header("location: index.php");
    exit;
}

if (isset($_SESSION['user_id']) && $_SESSION['role_id'] == 3) { // lab
    header("location: index-lab.php");
    exit;
}

if (isset($_SESSION['user_id']) && $_SESSION['role_id'] == 4) { // recep
    header("location: index-recep.php");
    exit;
}

I don't know where I went wrong. Could someone help me? Any help would be highly appreciated.

How do I make a property for my method in VBA?

Posted: 08 Nov 2021 08:12 AM PST

For example:

UseVariable("money").[and properties appear for me to use]

How do I make this in VBA?

Is there a way to replace the character value of " - " (including the brackets " ") with just the NULL expression in the column of a dataframe?

Posted: 08 Nov 2021 08:13 AM PST

So far, I have used this:

nyc_data$SALE.PRICE <- gsub(" - ", "" , as.character(nyc_data$SALE.PRICE))

However, this replaces the first argument (" - ") with the second argument (""), so the output is the empty string. For instance:

Output before: " - " Output after: ""

Does anyone have any suggestions? Unfortunately, I am not permitted to share any of the data.
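As an aside on the semantics: gsub did exactly what it was asked, replacing the matched text " - " with "". To get a missing value rather than an empty string, the usual R idiom would be direct assignment, e.g. nyc_data$SALE.PRICE[nyc_data$SALE.PRICE == " - "] <- NA (untested against the actual data, since it cannot be shared). The same idea in Python, for illustration:

```python
def blank_to_none(values, sentinel=" - "):
    # Replace the sentinel value with None (the analogue of R's NA),
    # rather than substituting text within each string
    return [None if v == sentinel else v for v in values]

cleaned = blank_to_none(["100000", " - ", "250000"])
```

The key distinction: substitution edits text inside a value, while assignment replaces the whole value with a proper missing marker.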

iOS distribution certificate expires - is a new certificate created automatically?

Posted: 08 Nov 2021 08:12 AM PST

I have just uploaded a new version of my app to App Store Connect using Xcode. During the upload process I was notified that my Distribution Certificate will expire in about a week.

I found dozens of questions about what will happen when a distribution certificate expires: Nothing will change on existing apps, but one cannot upload updates or new apps without a new certificate.

However, I found no information on how to get a new certificate. In my project I selected the option that Xcode automatically handles certificates and signing but does this include the Distribution Certificate or just the app specific certificates?

So, do I have to create a new Distribution Certificate manually or will this be done automatically by Xcode?

EDIT:

I also found many answers explaining how to create new certificates. However, all these answers are years old and refer to Xcode versions which did not offer automatic signing. So the question is whether it is still necessary to manually (re-)create a distribution certificate in 2021 using Xcode 13, or whether this is something from the past :-)

Calling PCEnhancerTask from Kotlin in Gradle

Posted: 08 Nov 2021 08:13 AM PST

I need to call the OpenJPA PCEnhancerTask class from Kotlin instead of Groovy. The following code works just fine (based on a previous solution documented here):

def openJPAClosure = {
    def entityFiles = sourceSets.main.output.classesDirs.asFileTree.matching {
        include 'com/company/persist/*Entity.class'
    }
    println "Enhancing with OpenJPA:"
    entityFiles.getFiles().each {
        println it
    }
    ant.taskdef(
            name : 'openjpac',
            classpath : sourceSets.main.runtimeClasspath.asPath,
            classname : 'org.apache.openjpa.ant.PCEnhancerTask'
    )
    ant.openjpac(
            classpath: sourceSets.main.runtimeClasspath.asPath,
            addDefaultConstructor: false,
            enforcePropertyRestrictions: true) {
        entityFiles.addToAntBuilder(ant, 'fileset', FileCollection.AntType.FileSet)
    }
}

I was looking at the documentation on how to call Ant tasks from Gradle, but I could not translate all the necessary steps using the groovyBuilder. So instead I thought of calling the PCEnhancer directly:

fun openJPAEnrich() {
    val entityFiles = sourceSets.main.get().output.classesDirs.asFileTree.matching {
        include("com/company/persist/*Entity.class")
    }
    println("Enhancing with OpenJPA, the following files...")
    entityFiles.getFiles().forEach() {
        println(it)
    }
    org.apache.openjpa.ant.PCEnhancerTask.main(asList(entityFiles))
}

But it complains about not being able to find org.apache.openjpa on the classpath (though it is listed as a compile dependency).

My questions are:

  • What is the correct way to translate the original Groovy construct to Kotlin using groovyBuilder?
  • If that is not possible, how can you correctly call PCEnhancer from Kotlin in Gradle?

How to create a bar plot with an x-variable with three levels and an unrelated dichotomous y-variable in proportions [duplicate]

Posted: 08 Nov 2021 08:13 AM PST

Example Table:

Obesity and/or Diabetes status Death occurred
0 1
2 1
1 0
2 0
2 0
1 1
0 0

I want to create a bar graph where the x-variable is the obesity and/or diabetes status and where there will be three adjacent bars for 0, 1, and 2. I want the y-variable to be the proportion of individuals in each x category that have died. The code that I attempted to run is as follows:

install.packages("qdap", "dplyr", "tidyr", "ggplot2")
install.packages("igraph", type = "binary")
install.packages("gender", type = "binary")
library(qdap)
library(dplyr)
library(tidyr)
library(ggplot2)

`FramFinal2017(1)`$category <- rownames(`FramFinal2017(1)`)

df <- `FramFinal2017(1)` %>%
      gather(results, DEATH, - category) %>%
      mutate(results = char2end(results, "_", 2)) %>%
      separate(category, c("category", "Response"))

ggplot(data = df, aes(x = category, y = DEATH, fill = Response)) +
    geom_bar(stat = "identity", position = "stack") +
    facet_grid(. ~ results)

But the code keeps spitting out errors related to the packages themselves, such as:

Error: package or namespace load failed for 'qdap' in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]): namespace 'dplyr' 0.8.3 is already loaded, but >= 0.8.5 is required  

Please let me know if there is another way to do this problem or if you need more clarification.
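Working from the example table, the proportions the question asks for can be computed directly. A sketch in Python, purely for illustration (the question itself is R, where something like dplyr's group_by/summarise followed by geom_col would be the natural route):

```python
from collections import defaultdict

# (status, death) rows from the example table
rows = [(0, 1), (2, 1), (1, 0), (2, 0), (2, 0), (1, 1), (0, 0)]

counts = defaultdict(lambda: [0, 0])  # category -> [deaths, total]
for category, death in rows:
    counts[category][0] += death
    counts[category][1] += 1

# Proportion of deaths within each status category: the bar heights
proportions = {cat: deaths / total
               for cat, (deaths, total) in sorted(counts.items())}
```

These per-category proportions (not the stacked raw counts the attempted code builds) are what the three adjacent bars should show.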

Generate edge list from nested dictionary

Posted: 08 Nov 2021 08:13 AM PST

Below is the tree as a dictionary:

output_dict = {
    'Sort': [
        {'Aggregate': [
            {'Hash Join': [
                {'Hash Join': [
                    {'Sequential Scan': []},
                    {'Hash': [{'Sequential Scan': []}]}
                ]},
                {'Hash': [{'Sequential Scan': []}]}
            ]}
        ]}
    ]
}

I would like to transform the above data into a networkx compatible format. Preferably in the edge list format.

I.E.

('Sort', 'Aggregate')
('Aggregate', 'Hash Join')

I have tried using a BFS template but with no success. Appreciate any help given, thanks.
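A plain recursive walk over this nested-dict format yields the edge list directly; a sketch (one caveat: networkx identifies nodes by name, so the repeated operators like 'Hash Join' and 'Sequential Scan' would be merged unless the names are made unique, e.g. by appending an index):

```python
def edges_from_tree(tree):
    """Return (parent, child) pairs from a {name: [child_dict, ...]} tree."""
    edges = []
    for parent, children in tree.items():
        for child_dict in children:
            for child in child_dict:           # each child dict has one key
                edges.append((parent, child))
            edges.extend(edges_from_tree(child_dict))
    return edges

output_dict = {'Sort': [{'Aggregate': [{'Hash Join': [
    {'Hash Join': [{'Sequential Scan': []},
                   {'Hash': [{'Sequential Scan': []}]}]},
    {'Hash': [{'Sequential Scan': []}]}]}]}]}

edges = edges_from_tree(output_dict)
```

The resulting list of pairs can be fed to networkx via nx.DiGraph(edges) or G.add_edges_from(edges).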

How can unions store multiple values in one memory address in C?

Posted: 08 Nov 2021 08:12 AM PST

The union keyword in C confuses me. From what I have read, unions are used to store different types of data at the same memory address. How can one memory address store different amounts of data? For example:

union Data {
    int i;
    float f;
    char str[20];
};

The memory occupied by this union is 20 bytes. How can the int, float, and char array all hold values at once at the same memory address?
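They can't all hold values at once: the members share one storage region (sized for the largest member, here the 20-byte char array), writing one member overwrites the others, and reading a different member just reinterprets the same bytes. The question's code is C; here is a Python sketch of that byte-sharing idea using the struct module:

```python
import struct

# Four shared bytes, like a union of int and float in C
raw = struct.pack("<i", 42)              # store the bytes as an int
as_int = struct.unpack("<i", raw)[0]     # read them back as an int: 42
as_float = struct.unpack("<f", raw)[0]   # reinterpret the SAME bytes as a float
```

Reading `as_float` gives a tiny denormal number (about 5.9e-44), not 42.0: identical bytes, different interpretation. That is exactly what reading an inactive union member does in C.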

Java Program - How to run a package that contains multiple public classes?

Posted: 08 Nov 2021 08:12 AM PST

Created a package named Choice.com containing 3 files, each headed with the same package Choice.com:

  • shopApp - public class which contains the main method
  • Customer - public class
  • Clothing - public class

Compiled the whole package using javac *.java

(the three files are saved in the Choice.com folder)

It compiled successfully, but I am unable to run it.

It throws the error java.lang.ClassNotFoundException.

Julia Vector: Excessive Memory Usage

Posted: 08 Nov 2021 08:11 AM PST

I want to brute-force a 64-bit RSA-encrypted text using a meet-in-the-middle attack (this is for university, nothing malicious).

To do this, I essentially created a Julia vector with 2^34 BigInt values and broadcast the powermod() method over it to replace the values with the results.

v = powermod.(collect(1:2^34), e, n)  

n in this case is 1024 bits long, which should theoretically result in a vector of 2^34 * 1024 bits in size, plus overhead. However, even when I try to create a smaller vector (e.g., 2^24, as below), it already allocates 4 GB of memory.

const e = 65537
const n = 146524179203462820907751077702895222709717245613911342138636679265720963659264803540209990978140003809112749926543448691815554807130673470903067642157383639213843567573216381956709789503739105865173848988830139432801516289108538638198344024523424071181688467967187076534718264943427915623567859427045475866239

@time begin
    v = (powermod.(collect(1:2^24), e, n))
end

Output of @time:

125.598926 seconds (117.44 M allocations: 4.000 GiB, 5.35% gc time)  

I'm not sure what, if anything, I am doing wrong here. Any help would be appreciated.
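As a back-of-the-envelope check (my own arithmetic, not from the question): 2^24 results of 1024 bits each already amount to 2 GiB of digit data before counting any per-BigInt allocation overhead or intermediate values, so a reported 4 GiB of allocations is roughly in line with expectations rather than evidence of a bug:

```python
values = 2 ** 24
payload_bits = 1024

# GiB of raw digit data alone, ignoring per-object headers and temporaries
payload_gib = values * payload_bits / 8 / 2**30
```

Scaling the same arithmetic to the intended 2^34 elements gives about 2 TiB of payload, which suggests the full vector cannot be materialized in RAM regardless of implementation details.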

How to Sum (MAX values) from different value groups in same column SQL Server

Posted: 08 Nov 2021 08:12 AM PST

I have a table like this:

Date Consec_Days
2015-01-01 1
2015-01-03 1
2015-01-06 1
2015-01-07 2
2015-01-09 1
2015-01-12 1
2015-01-13 2
2015-01-14 3
2015-01-17 1

I need to sum the max value (days) for each of the consecutive groupings where Consec_Days > 1. So the correct result would be 5 days.
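To make the intended grouping concrete, here is the logic sketched in Python (the question is SQL Server, where a gaps-and-islands query with window functions would express the same idea; this just shows which rows contribute): each run of rows where Consec_Days restarts at 1 opens a new group, and we sum each group's maximum when it exceeds 1.

```python
rows = [1, 1, 1, 2, 1, 1, 2, 3, 1]  # Consec_Days, in date order

group_maxima = []
for value in rows:
    if value == 1:
        group_maxima.append(1)      # a new consecutive run starts
    else:
        group_maxima[-1] = value    # the current run's maximum grows

total = sum(m for m in group_maxima if m > 1)
```

With the example data the qualifying runs peak at 2 and 3, giving the expected total of 5.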

Python Keyring throws error "OSError: [WinError 8] Not enough memory resources are available to process this command"

Posted: 08 Nov 2021 08:11 AM PST

I'm using Python 3.10, and the following code works once after startup, but if I then try to set any new password to the keyring, the following errors are thrown.

import keyring
keyring.set_password("test","a","b")

Traceback (most recent call last):
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\win32ctypes\pywin32\pywintypes.py", line 35, in pywin32error
    yield
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\win32ctypes\pywin32\win32cred.py", line 37, in CredWrite
    _authentication._CredWrite(c_pcreds, 0)
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\win32ctypes\core\cffi\_authentication.py", line 152, in _CredWrite
    return check_zero(
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\win32ctypes\core\cffi\_util.py", line 81, in __call__
    self._raise_error(function_name)
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\win32ctypes\core\cffi\_util.py", line 92, in _raise_error
    raise exception
OSError: [WinError 8] Not enough memory resources are available to process this command

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Directive Athena\Projects\A.D.A.M\Packages\Athena_Encryption\Encryption.py", line 2, in <module>
    keyring.set_password("test","a","b")
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\keyring\core.py", line 60, in set_password
    get_keyring().set_password(service_name, username, password)
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\keyring\backends\Windows.py", line 131, in set_password
    self._set_password(service, username, str(password))
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\keyring\backends\Windows.py", line 142, in _set_password
    win32cred.CredWrite(credential, 0)
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\win32ctypes\pywin32\win32cred.py", line 36, in CredWrite
    with _pywin32error():
  File "C:\Program Files\Python310\lib\contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "D:\Directive Athena\Projects\A.D.A.M\PyVenv\venv\lib\site-packages\win32ctypes\pywin32\pywintypes.py", line 37, in pywin32error
    raise error(exception.winerror, exception.function, exception.strerror)
win32ctypes.pywin32.pywintypes.error: (8, 'CredWrite', 'Not enough memory resources are available to process this command')

Process finished with exit code 1

I have tried restarting my PC, and as mentioned, this only executes once, after which it fails if I try to set a new key. I reinstalled Python and went back to 3.9, and the same issues persist.

Jenkins warnings-ng-plugin custom folder structure AKA multiple "Source Directory"

Posted: 08 Nov 2021 08:12 AM PST

I migrated from the old, obsolete FindBugs plugin to the warnings-ng-plugin SpotBugs tool. Everything works great except sources: the plugin doesn't show the source code of Java files (my workspace structure is Project1/src; Project2/src, with an Ant freestyle build).

I can point warnings-ng-plugin to either of these projects using a relative path (i.e. provide it in "Advanced" -> "Source Directory" as, let's say, "Project1/src").

The path "Project1/src:Project2/src" leads to error "No such directory: Project1/src:Project2/src".

The path "/prefix/Project1/src:/prefix/Project2/src" leads to errors:

  1. Additional source directory '/prefix/Project1/src' must be registered in Jenkins system configuration
  2. Additional source directory '/prefix/Project2/src' must be registered in Jenkins system configuration

The path "*/src" leads to the error "Wildcard is not allowed here"

Any ideas on how to fix that situation?

Use global variable in Hotkey Block (Autohotkey)

Posted: 08 Nov 2021 08:12 AM PST

My AutoHotkey script doesn't work: global variables seem to have a null value in the "^1" block. How do I declare global variables that I can use in multiple hotkey blocks?

#if (true)
    global allActsRowY := 76
    global act1X := 249

    global allTownsY := 133
    global allTownsX := 245

    global clickDelay := 30

    ;Act 1 WP Town Shortcut
    ^1::
    {
        Click, left, %act1X%, %allActsRowY%
        Sleep, clickDelay
        Click, left, %allTownsX%, %allTownsY%
    }
    return
#if

Edit: The global variables were declared in the if-block. I made an edit to the code.

How do I serve files statically but with authentication?

Posted: 08 Nov 2021 08:12 AM PST

This is my folder structure:

/app
  server.js

  /public
    index.html
    script-a.js

    /scripts
      script-b.js

These are the relevant contents of my index.html:

<!-- index.html -->
.....
...
<script src="/script-a.js"></script>
<script src="/script-b.js"></script>

These are the relevant contents of server.js:

import express                          from 'express'
import session                          from 'express-session'
import NedbStore                        from 'nedb'
import passport                         from 'passport'
import { Strategy as FacebookStrategy } from 'passport-facebook'
import { ensureLoggedIn }               from 'connect-ensure-login'
....
..
const db  = new NedbStore({ filename: `${__dirname}/db/nedb.db`, autoload: true })
const DB  = dbAsyncWrapper(db)

app.use(cors({
   origin: '*',
   credentials: true,
   optionSuccessStatus: 200
}))

app.use(function(req, res, next) {
  res.header('Access-Control-Allow-Origin', 'http://localhost:4200')
  res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept, Authorization')
  res.header('Access-Control-Allow-Methods', 'POST, GET')
  next()
})

app.use(cookieParser())
app.use(express.json({ limit: '50mb' }))
app.use(express.urlencoded({ extended: true, limit: '50mb' }))

app.use(session({
  secret: 'googoogaga',
  resave: false,
  saveUninitialized: false
}))
app.use(passport.initialize())
app.use(passport.session())

passport.use(new FacebookStrategy({
  clientID      : FACEBOOK_APP_ID,
  clientSecret  : FACEBOOK_APP_SECRET,
  callbackURL   : "http://localhost:4200/facebook/callback",
  profileFields : ['id']
}, async(accessToken, refreshToken, profile, done) => {
  let facebookId = profile.id
  let userInDb =  await DB.getUser()
  if (userInDb && userInDb.facebookId === facebookId) {
    await DB.updateUser({ accessToken })
    done(null, userInDb)
  } else {
    let newUser = await DB.updateUser({ facebookId, accessToken })
    done(null, newUser)
  }
}))

passport.serializeUser(function(user, done) {
  done(null, user)
})

passport.deserializeUser(function(user, done) {
  done(null, user)
})

// app.use(express.static(__dirname + '/public'))

app.get('/auth/facebook', passport.authenticate('facebook', { scope:'email' }))

app.get('/facebook/callback', passport.authenticate('facebook', {
  successRedirect : '/',
  failureRedirect : '/auth/facebook'
}))

app.get('/', ensureLoggedIn('/auth/facebook'), (req,res) => {
  res.sendFile(__dirname + '/public/index.html')
})

app.get('/setup', ensureLoggedIn('/auth/facebook'), ...)
  1. When the user visits / they are redirected to Facebook to log in and then back to /.
  2. Once back on /, index.html is then served to the user
  3. At that point script-a.js & script-b.js will try to load but fail with a 404 Not Found.

This makes sense as I haven't enabled static serving for the contents of /public.

So I then try to enable it by reinstating this line:

app.use(express.static(__dirname + '/public'))  

When I do that, when the user visits / they are not asked to log in anymore; all the scripts from /public load, and then one of them makes a GET to /setup. When this happens, the following error is thrown in the browser:

Access to fetch at 'https://www.facebook.com/v3.2/dialog/oauth?......'  (redirected from 'http://localhost:4200/setup')   from origin 'http://localhost:4200' has   been blocked by CORS policy:   No 'Access-Control-Allow-Origin' header   is present on the requested resource.   If an opaque response serves your needs,   set the request's mode to 'no-cors' to   fetch the resource with CORS disabled.  

This is because the user was never asked to authenticate in the first place (when they visited /) so now I'm assuming authentication fails for the /setup route handler.

Since I want to activate static serving for all contents of /public, declaring a route handler for / shouldn't be needed either. What I should end up with eventually is something like:

....
...
app.use(ensureLoggedIn('/auth/facebook'), express.static(__dirname + '/public'))

app.get('/auth/facebook', passport.authenticate('facebook', { scope:'email' }))

app.get('/facebook/callback', passport.authenticate('facebook', {
  successRedirect : '/',
  failureRedirect : '/auth/facebook'
}))

app.get('/setup', ensureLoggedIn('/auth/facebook'), ....)

However, the above does not work.

Select N number of highest values including duplicates in MySQL

Posted: 08 Nov 2021 08:12 AM PST

If I have a table like,

Names Marks
Liam 50
Jade 55
John 55
Pern 60
Lopa 20

I want to find the top 3 marks, but if there are ties I want to include them, so the number of people can be more than 3. This table will become:

Names Marks
Pern 60
Jade 55
John 55
Liam 50

Any suggestions on how to take care of the ties part?
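For what it's worth, on MySQL 8+ this is the classic DENSE_RANK() pattern: keep the rows whose DENSE_RANK() OVER (ORDER BY Marks DESC) is at most 3 (on older MySQL versions without window functions, a subquery over the distinct marks achieves the same thing). The tie-keeping logic, illustrated in Python:

```python
rows = [("Liam", 50), ("Jade", 55), ("John", 55), ("Pern", 60), ("Lopa", 20)]

# Dense rank: take the top 3 DISTINCT marks, then keep everyone holding them
top_marks = sorted({m for _, m in rows}, reverse=True)[:3]
result = sorted([(n, m) for n, m in rows if m in top_marks],
                key=lambda r: -r[1])
```

Because ranking is over distinct marks, both 55-scorers survive and the result has four rows, matching the expected table.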

How to predict a certain time span into the future with recurrent neural networks in Keras

Posted: 08 Nov 2021 08:11 AM PST

I have the following code for time-series prediction with RNNs, and I would like to know whether, for the testing, I predict one day in advance:

# -*- coding: utf-8 -*-
"""
Time Series Prediction with RNN
"""
import pandas as pd
import numpy as np
from tensorflow import keras


#%% Configure parameters

epochs = 5
batch_size = 50

steps_backwards = int(1 * 4 * 24)
steps_forward = int(1 * 4 * 24)

split_fraction_trainingData = 0.70
split_fraction_validatinData = 0.90


#%% Reading the data

dataset = pd.read_csv('C:/User1/Desktop/TestValues.csv', sep=';', header=0, low_memory=False, infer_datetime_format=True, parse_dates={'datetime': [0]}, index_col=['datetime'])

df = dataset
data = df.values
indexWithYLabelsInData = 0
data_X = data[:, 0:2]
data_Y = data[:, indexWithYLabelsInData].reshape(-1, 1)

#%% Prepare the input data for the RNN

series_reshaped_X = np.array([data_X[i:i + (steps_backwards + steps_forward)].copy() for i in range(len(data) - (steps_backwards + steps_forward))])
series_reshaped_Y = np.array([data_Y[i:i + (steps_backwards + steps_forward)].copy() for i in range(len(data) - (steps_backwards + steps_forward))])

timeslot_x_train_end = int(len(series_reshaped_X) * split_fraction_trainingData)
timeslot_x_valid_end = int(len(series_reshaped_X) * split_fraction_validatinData)

X_train = series_reshaped_X[:timeslot_x_train_end, :steps_backwards]
X_valid = series_reshaped_X[timeslot_x_train_end:timeslot_x_valid_end, :steps_backwards]
X_test = series_reshaped_X[timeslot_x_valid_end:, :steps_backwards]

indexWithYLabelsInSeriesReshapedY = 0
lengthOfTheYData = len(data_Y) - steps_backwards - steps_forward
Y = np.empty((lengthOfTheYData, steps_backwards, steps_forward))

for step_ahead in range(1, steps_forward + 1):
    Y[..., step_ahead - 1] = series_reshaped_Y[..., step_ahead:step_ahead + steps_backwards, indexWithYLabelsInSeriesReshapedY]

Y_train = Y[:timeslot_x_train_end]
Y_valid = Y[timeslot_x_train_end:timeslot_x_valid_end]
Y_test = Y[timeslot_x_valid_end:]


#%% Build the model and train it

model = keras.models.Sequential([
    keras.layers.SimpleRNN(90, return_sequences=True, input_shape=[None, 2]),
    keras.layers.SimpleRNN(60, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(steps_forward))
    #keras.layers.Dense(steps_forward)
])

model.compile(loss="mean_squared_error", optimizer="adam", metrics=['mean_absolute_percentage_error'])
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size,
                    validation_data=(X_valid, Y_valid))


#%% Predict the test data

Y_pred = model.predict(X_test)

prediction_lastValues_list = []

for i in range(0, len(Y_pred)):
    prediction_lastValues_list.append((Y_pred[i][0][steps_forward - 1]))

#%% Create the dataframe for the whole data

wholeDataFrameWithPrediciton = pd.DataFrame((X_test[:, 0]))
wholeDataFrameWithPrediciton.rename(columns={indexWithYLabelsInData: 'actual'}, inplace=True)
wholeDataFrameWithPrediciton.rename(columns={1: 'Feature 1'}, inplace=True)
wholeDataFrameWithPrediciton['predictions'] = prediction_lastValues_list
wholeDataFrameWithPrediciton['difference'] = (wholeDataFrameWithPrediciton['predictions'] - wholeDataFrameWithPrediciton['actual']).abs()
wholeDataFrameWithPrediciton['difference_percentage'] = ((wholeDataFrameWithPrediciton['difference']) / (wholeDataFrameWithPrediciton['actual'])) * 100

I define steps_forward = int(1 * 4 * 24), which is one full day in 15-minute resolution (1 * 4 * 24 = 96 time stamps). I predict the test data with Y_pred = model.predict(X_test) and build the list of predicted values with for i in range(0, len(Y_pred)): prediction_lastValues_list.append(Y_pred[i][0][steps_forward - 1]).

As the input and output shapes of RNNs are quite confusing to me, I am not sure whether, for the test dataset, I am predicting one day in advance, meaning 96 time steps into the future. What I actually want is to read 96 historic time steps and then predict the next 96 time steps based on them. Can anyone tell me whether this code does that or not?

Here is a link to some test data that I created randomly. Don't mind the actual values; only the structure of the prediction matters: Download Test Data

Am I forecasting 96 steps in advance with the given code (my code is based on a tutorial that can be found here Tutorial RNN for electricity price prediction)?

Reminder: Can anyone tell me something about my question? Or do you need further information? If so, please tell me. I'd highly appreciate your comments and would be quite thankful for your help. I will also award a bounty for a useful answer.
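One model-independent way to check the horizon is to rebuild your exact target slicing on a toy series whose values equal their time indices, and then look at which original time steps land where (a sketch with made-up window sizes of 4 instead of 96, assuming the toy code mirrors your slicing faithfully):

```python
import numpy as np

# Toy stand-ins for the real 96-step windows
steps_backwards = steps_forward = 4
data_Y = np.arange(20, dtype=float).reshape(-1, 1)   # value == time index

series_Y = np.array([data_Y[i:i + steps_backwards + steps_forward]
                     for i in range(len(data_Y) - steps_backwards - steps_forward)])

Y = np.empty((len(series_Y), steps_backwards, steps_forward))
for step_ahead in range(1, steps_forward + 1):
    Y[..., step_ahead - 1] = series_Y[..., step_ahead:step_ahead + steps_backwards, 0]

# Window 0 feeds input time steps 0..3 to the RNN.
print(Y[0, -1])      # [4. 5. 6. 7.] -> the targets attached to the LAST input step
print(Y[0, 0, -1])   # 4.0 -> what Y_pred[i][0][steps_forward-1] reads: because
                     #        steps_backwards == steps_forward, this is only ONE
                     #        step past the window's last input (index 3)
print(Y[0, -1, -1])  # 7.0 -> a full steps_forward past the last input
```

So, if this toy reconstruction matches your code, picking Y_pred[i][0][steps_forward-1] gives you a forecast landing just one step after the last observed input, whereas Y_pred[i][-1][steps_forward-1] would be the value steps_forward (96) steps after it.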

Wrong select result and mismatch when multiple queries are executed same time

Posted: 08 Nov 2021 08:12 AM PST

This is my MySQL class shared between multiple other classes that needs SQL connections:

class mysqlAssist():
    def __init__(self):
        self.con = MySQLdb.connect("localhost", "user", "pass", "DB", 3306, charset="utf8")
        self.con.autocommit(True)

    @contextmanager
    def cursor(self, *args, **kwargs):
        c = self.con.cursor(*args, **kwargs)
        try:
            yield c
        finally:
            c.close()

    def fetchOne(self, _sql):
        with self.cursor() as c:
            try:
                c.execute(_sql)
                return c.fetchone()
            except (MySQLdb.Error, MySQLdb.Warning) as e:
                print(e)

    def fetchAll(self, _sql):
        with self.cursor(MySQLdb.cursors.DictCursor) as c:
            try:
                c.execute(_sql)
                return c.fetchall()
            except (MySQLdb.Error, MySQLdb.Warning) as e:
                print(e)

My main module executes the following two SQL queries at the same time:

calling mysqlAssist.fetchOne:

SELECT `start_time`, `stop_time`, `AHC_DAY` FROM `_management` WHERE `_active` = '1'   

calling mysqlAssist.fetchAll:

SELECT _id, _desc, _show_type, _web_manage, _lastevent, _lastevent = max_lastevent AS _latest
FROM `_setup`
CROSS JOIN (SELECT MAX(_lastevent) AS max_lastevent FROM _setup) m

The second query result looks like this:

+---------+-------+------------+-------------+------------+---------+
|     _id | _desc | _show_type | _web_manage | _lastevent | _latest |
+---------+-------+------------+-------------+------------+---------+
| here>11 | A     |          1 |           1 | 1633107217 |       0 |
|       6 | B     |          1 |           0 | 1633168302 |       0 |
|      12 | C     |          0 |           0 | 1633169804 |       1 |
|      16 | D     |          0 |           1 | 1632480486 |       0 |
|      19 | E     |          0 |           0 | 1632549333 |       0 |
|      26 | F     |          0 |           0 | 1631868026 |       0 |
|      25 | G     |          1 |           0 | 1633168303 |       0 |
|      27 | H     |          1 |           1 | 1632992896 |       0 |
+---------+-------+------------+-------------+------------+---------+

Now the problem: rarely and occasionally, the first query returns a value belonging to the second query's result — specifically 11, the first row and first column of the second result set. The evidence is this traceback error, where a time value was expected rather than a plain number: ERROR - time data '11' does not match format '%H:%M:%S'. So this is indeed not the expected result from the database.

So how can the results get mixed up? Is it because the second query uses the MySQLdb.cursors.DictCursor cursor type, because of the CROSS JOIN, or is there a race for the cursor when I make two queries at the same time? Should I implement some cursor lock or queue, and if so, how?
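I can't confirm which factor is biting here, but a MySQLdb connection shared across threads is not thread-safe, and interleaved execute/fetch calls on one connection can hand one query's rows to the other caller. A minimal sketch of the lock approach, illustrated with stdlib sqlite3 so it stays self-contained (SafeDB and fetch_all are my own names; the same shape would apply to your mysqlAssist class):

```python
import sqlite3
import threading

class SafeDB:
    """Serialize all access to one shared connection with a lock."""

    def __init__(self, path=":memory:"):
        self._lock = threading.Lock()
        self._con = sqlite3.connect(path, check_same_thread=False)

    def fetch_all(self, sql, params=()):
        # execute + fetch happen atomically under the lock, so another
        # thread cannot interleave its own execute between them
        with self._lock:
            cur = self._con.cursor()
            try:
                cur.execute(sql, params)
                return cur.fetchall()
            finally:
                cur.close()
```

The other common fix is to give each thread (or each call) its own connection instead of sharing one.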

How to use tools:context to associate Fragment files?

Posted: 08 Nov 2021 08:12 AM PST

Recently I came across the use-case of using tools:context in my layout files for activities. I learned that I can use it to associate an Activity class file with my layout file.

But how can I use it to associate a Fragment class file to any layout?

Because as far as I have tried, it only shows an Activity list in the autocomplete suggestions.
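For what it's worth, tools:context accepts a fragment class as well; you can type the dot-prefixed (or fully qualified) fragment name by hand even when the IDE's autocomplete only lists activities. A sketch, where HomeFragment and the package path are hypothetical placeholders:

```xml
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".ui.HomeFragment" />
```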

Blink animation in MUI

Posted: 08 Nov 2021 08:12 AM PST

I am building a GatsbyJS site with MUI. Using the withStyles HOC, is it possible to make a blinking animation? I tried providing animation in styles:

const styles = theme => ({
    '@keyframes blinker': {
        from: { opacity: 1 },
        to: { opacity: 0 }
    },
    headerGT: {
        color: 'white',
        animation: ['blinker', '1s', 'linear', 'infinite'],
        '-webkit-animation': ['blinker', '1s', 'linear', 'infinite'],
        '-moz-animation': ['blinker', '1s', 'linear', 'infinite'],
    }
})

I can see that the class and keyframes get recognized and headerGT has the animation property when the DOM is built, but the animation does not fire. Any ideas?
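One thing worth trying (an assumption on my part, based on how JSS — the styling engine behind MUI v4's withStyles — scopes keyframes): keyframes declared inside a style sheet get a generated, scoped name, so the animation shorthand has to reference them with a leading $; a bare blinker points at a global animation name that doesn't exist:

```javascript
const styles = theme => ({
  '@keyframes blinker': {
    from: { opacity: 1 },
    to: { opacity: 0 },
  },
  headerGT: {
    color: 'white',
    // "$blinker" asks JSS to substitute the generated keyframes name;
    // vendor-prefixed duplicates should not be needed with JSS
    animation: '$blinker 1s linear infinite',
  },
});
```

If this is the cause, the rendered CSS will show an animation-name like `blinker-<hash>` matching the generated @keyframes rule.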

Python MySQL not refreshing

Posted: 08 Nov 2021 08:13 AM PST

I have two programs: one that fills and updates a database, and another that selects info from the database every 10 seconds.

I use Pymysql.

When I update the database I commit the data; I can see the results in the database from the command line, but the other program keeps returning the same output and doesn't get the new data!

Do I need to make a special query other than SELECT? Do I need to close the connection and reopen it before each query?

I create the GetData class when starting the program and get_data is called every 10 seconds.

class GetData:
    def __init__(self):
        self.conn = pymysql.connect(host='localhost', user='root', password='', db='mydb',
                                    charset='utf8mb4', cursorclass=pymysql.cursors.DictCursor)

    def get_data(self, data):
        with self.conn.cursor() as cursor:
            self.sql = "SELECT id_data, somedata FROM mytable WHERE (%s = 'example');"
            cursor.execute(self.sql, (data,))
            return cursor.fetchall()

    def close_conn(self):
        self.conn.close()

The program that fills the database:

class FillDb:
    def __init__(self):
        self.conn = pymysql.connect(host='localhost', user='root', password='', db='mydb',
                                    charset='utf8mb4', cursorclass=pymysql.cursors.DictCursor)
        # added this line but it doesn't help!
        self.conn.autocommit(True)

    def add_in_db(self, data):
        with self.conn.cursor() as cursor:
            self.sql = "INSERT INTO mytable (somedata) VALUES (%s);"
            cursor.execute(self.sql, (data,))
            self.conn.commit()
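A likely cause (an assumption, since it depends on the storage engine): under InnoDB's default REPEATABLE READ isolation, the reader's long-lived connection keeps serving the snapshot of its open transaction, so commits made elsewhere stay invisible until the reader itself commits — e.g. call self.conn.commit() before each SELECT, or connect the reader with autocommit=True. The effect can be reproduced with stdlib sqlite3 in WAL mode, which has the same snapshot-per-transaction behaviour:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")       # readers don't block writers
writer.execute("CREATE TABLE mytable (somedata TEXT)")

reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")                         # long-lived read transaction
before = reader.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]

writer.execute("INSERT INTO mytable VALUES ('new')")   # committed elsewhere

stale = reader.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]
reader.execute("COMMIT")                        # end the old snapshot ...
fresh = reader.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]

print(before, stale, fresh)   # 0 0 1 -> committing is what refreshed the view
```

The same pattern explains why no amount of re-running the SELECT helps while the reader's transaction stays open.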

Why not use python's assert statement in tests, these days?

Posted: 08 Nov 2021 08:12 AM PST

In Python testing, why would you use assert methods:

self.assertEqual(response.status_code, 200)
self.assertIn('key', my_dict)
self.assertIsNotNone(thing)

As opposed to the direct assertions:

assert response.status_code == 200
assert 'key' in my_dict
assert thing is not None

According to the docs:

These methods are used instead of the assert statement so the test runner can accumulate all test results and produce a report

However, this seems to be bogus: a test runner can accumulate results and produce a report regardless. In a related post, unutbu showed that unittest raises an AssertionError just as the assert statement does, and that was over 7 years ago, so it's not a shiny new feature either.

With a modern test runner such as pytest, the failure messages generated by the assertion helper methods aren't any more readable (arguably the camelCase style of unittest is less readable). So, why not just use assert statements in your tests? What are the perceived disadvantages and why haven't important projects such as CPython moved away from unittest yet?
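One concrete (if niche) difference worth noting: bare assert statements are compiled out when Python runs with -O, while the assert* methods are ordinary method calls and always execute. A quick demonstration:

```python
import subprocess
import sys

snippet = "assert 1 == 2, 'boom'"

normal = subprocess.run([sys.executable, "-c", snippet])
optimized = subprocess.run([sys.executable, "-O", "-c", snippet])

print(normal.returncode, optimized.returncode)   # 1 0 -> -O strips the assert
```

In practice test suites are rarely executed under -O, so how much weight this argument deserves is part of the debate.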

Shell function to tail a log file for a specific string for a specific time

Posted: 08 Nov 2021 08:13 AM PST

I need to do the following things to make sure my application server is up:

  1. Tail a log file for a specific string.
  2. Remain blocked until that string is printed.
  3. However, if the string is not printed within about 20 minutes, quit and throw an exception message like "Server took more than 20 mins to be up".
  4. If the string is printed in the log file, quit the loop and proceed.

Is there a way to include timeouts in a while loop?
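One way to sketch this with a deadline check inside the loop (wait_for_log, the 5-second poll interval, and the 1200-second default are my own choices, and I'm assuming a POSIX shell with grep and date +%s available):

```shell
wait_for_log() {
    # $1 = log file, $2 = string to wait for, $3 = timeout in seconds (default 1200 = 20 min)
    deadline=$(( $(date +%s) + ${3:-1200} ))
    # Loop until grep finds the string; bail out once the deadline passes.
    until grep -q "$2" "$1" 2>/dev/null; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "Server took more than 20 mins to be up" >&2
            return 1
        fi
        sleep 5
    done
}
```

Usage: `wait_for_log server.log "Server started" || exit 1`. This polls the whole file with grep rather than tailing it, which sidesteps the classic problem of a `tail -F | grep -q` pipeline not terminating after a match; the trade-off is re-scanning the file every poll, which is fine for typical log sizes.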
