Monday, October 18, 2021

Recent Questions - Stack Overflow



Registered users do not appear in the admin panel

Posted: 18 Oct 2021 08:52 AM PDT

I am working on a Django project where I am building a form so that users can register on the page. When I make a test registration, I cannot see the new user's information in the administration panel.

urls.py

from django.urls import path

from . import views

urlpatterns = [
    path('register/', views.registerPage, name="register"),
]

views.py

from django.shortcuts import render, redirect
from django.contrib.auth.forms import UserCreationForm

def registerPage(request):
    form_value = UserCreationForm()

    if request.method == 'POST':
        form_value = UserCreationForm(request.POST)
        if form_value.is_valid():
            form_value.save()

    context = {'form_key': form_value}
    return render(request, 'accounts/register.html', context)

register.html

<h3>Register</h3>

<form action="" method="POST">
    {% csrf_token %}
    {{form_key.as_p}}

    <input type="submit" name="Create User">
</form>

I don't know whether I need to register something in admin.py to be able to see the users who register. Any help would be appreciated.

How to do the mean and group in Python?

Posted: 18 Oct 2021 08:52 AM PDT

I have the following:

table = [['Country', 'Points'], ['Spain', '7'], ['Spain', '9'], ['Germany', '1'], ['Germany', '3']]

I want to compute the mean for Spain (8) and for Germany (2) and group them to:

table_result = [['Country', 'Points'], ['Spain', '8'], ['Germany', '2']]
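A minimal sketch of one way to do this in plain Python, assuming the averaged points should come back as strings, as in table_result:

```python
from collections import defaultdict

table = [['Country', 'Points'], ['Spain', '7'], ['Spain', '9'],
         ['Germany', '1'], ['Germany', '3']]

# Group the numeric points by country, preserving first-seen order.
groups = defaultdict(list)
for country, points in table[1:]:
    groups[country].append(float(points))

# Rebuild the table with each country's mean; the :g format drops a trailing .0.
table_result = [table[0]] + [
    [country, f"{sum(vals) / len(vals):g}"] for country, vals in groups.items()
]
print(table_result)  # [['Country', 'Points'], ['Spain', '8'], ['Germany', '2']]
```

The same grouping is a one-liner with pandas (`df.groupby('Country').mean()`), but the stdlib version avoids the dependency.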

Select distinct id and max(date)

Posted: 18 Oct 2021 08:52 AM PDT


I have this table (screenshot) and want to select max(date) together with DISTINCT device_id, so the result would be:

3|2021-10-13 19:01
2|2021-10-13 19:00
4|2021-10-13 18:59

SELECT DISTINCT [device_id], [date], [active]
FROM [Info]
WHERE active = 1
  AND cast(date as Date) = (SELECT cast(max(date) as Date) AS d FROM [Info])
GROUP BY device_id
ORDER BY date DESC

In MySQL, GROUP BY would do the trick, but in MSSQL it does not work as I expected.

I would like to overwrite two columns of my table from another table (pandas, jupyter notebook, python)

Posted: 18 Oct 2021 08:51 AM PDT

I have a main table ex:

  name   stock name   price   country   No.stock
1 John   Apple        160     US        4
2 Katie  Tesla        800     US        10
3 Emma   Samsung      70      KOR       50

John has 4 Apple shares, Katie has 10 Tesla shares, and Emma has 50 Samsung shares. But the share prices change every day, and I want to update them once a day.

and the format is :

stock    share price
Apple    150
Tesla    900
Samsung  110

I'd like to overwrite these prices in the main table so that it looks like:

  name   stock name   price   country   No.stock
1 John   Apple        150     US        4
2 Katie  Tesla        900     US        10
3 Emma   Samsung      110     KOR       50

I tried to use the merge and concat functions, but there are always duplicates. Does anyone know a better way? Thank you :)
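One way to avoid merge duplicates entirely (a sketch with made-up frame names) is to treat the daily prices as a lookup table and overwrite the column with Series.map:

```python
import pandas as pd

# Hypothetical frames mirroring the question's data.
main = pd.DataFrame({
    'name': ['John', 'Katie', 'Emma'],
    'stock name': ['Apple', 'Tesla', 'Samsung'],
    'price': [160, 800, 70],
    'country': ['US', 'US', 'KOR'],
    'No.stock': [4, 10, 50],
})
daily = pd.DataFrame({
    'stock': ['Apple', 'Tesla', 'Samsung'],
    'share price': [150, 900, 110],
})

# Map each row's stock to its latest price instead of merging,
# so no duplicate rows can appear.
price_lookup = daily.set_index('stock')['share price']
main['price'] = main['stock name'].map(price_lookup)
print(main)
```

Because map only replaces values in place, the row count of the main table never changes, which is what merge/concat could not guarantee here.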

Spring boot JPA save with custom primary key

Posted: 18 Oct 2021 08:51 AM PDT

My Question is how to save an entity with custom primary key

For example, I have an entity like below:

@Entity
@Table(name = "customer")
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Customer {
  @EmbeddedId
  private CustomerCompositeKey identity;
  private String otherdata;
}

@Embeddable
@Data
@AllArgsConstructor
@NoArgsConstructor
public class CustomerCompositeKey implements Serializable {
  @Column(name = "key_one")
  private String keyOne;
  @Column(name = "key_two")
  private String keyTwo;
}

Repository class:

@Repository
public interface CustomerRepository extends
    JpaRepository<Customer, CustomerCompositeKey> {
}

Currently, if I simply save, I get a unique constraint exception. So I modified the code with a select before the update, like below. Save code:

@Transactional
public void saveCustomer(CustomerBO customer) {
    Optional<Customer> existing = customerRepository.findById(customerCompositeKey);
    if (existing.isPresent()) {
        // update
    } else {
        // insert
    }
    customerRepository.save(customer);
}

This increases my latency while saving. Is there another way to save without a select query? This block is executed concurrently. If I am not wrong, customerRepository.save(customer) already does a select/update internally, so why am I getting a unique constraint error?

Error with Permissions-Policy header: Unrecognized feature: 'interest-cohort'

Posted: 18 Oct 2021 08:51 AM PDT

I have just started with React.

My page works fine on localhost.

Now I am trying to host my page on GitHub.

I used "npm run deploy" and hosted it.

This is my package.json


Now when I try to access my page I run into errors, and the first warning concerns me the most.

Errors

This is my page : Github Page

What is this "Permissions-Policy" and how do I fix it?

Pandas: Specify max delimiter with delim_whitespace, read_csv

Posted: 18 Oct 2021 08:51 AM PDT

I have the following results in a variable called results:

0 a b this is my first file
1 c d this is my second file
2 e f this is my third file
3 g h this is my fourth file
4 i j this is my fifth file

I want to parse the results into a pandas DataFrame. The result I want is

0 a b this is my first file
1 c d this is my second file
2 e f this is my third file

Instead, when I call:

read_csv(StringIO(results), delim_whitespace=True), I get:

0 a b this is my first file
1 c d this is my second file
2 e f this is my third file

Is there any way to specify the maximum number of splits while using delim_whitespace?
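read_csv has no "maximum number of splits" option for delim_whitespace, so one workaround (a sketch, with the sample data inlined) is to pre-split each line with str.split(maxsplit=...) and hand the resulting rows to pd.DataFrame:

```python
results = """0 a b this is my first file
1 c d this is my second file
2 e f this is my third file
3 g h this is my fourth file
4 i j this is my fifth file"""

# Split each line at most 3 times, so everything after the third field
# stays together as one column. pd.DataFrame(rows) then finishes the job.
rows = [line.split(maxsplit=3) for line in results.splitlines()]
print(rows[0])  # ['0', 'a', 'b', 'this is my first file']
```

This keeps the trailing sentence intact, which delim_whitespace alone cannot do because it splits on every whitespace run.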

How to find the script src link? (Beautiful Soup)

Posted: 18 Oct 2021 08:52 AM PDT

tags = [{tag.name: tag.text.strip()} for tag in soup.find_all('h2')]  

This returns as:

[{'h2':'My'},{'h2':'hey'}] # Returns all the h2 elements with their content.  

Now I want all the links that are inside the <script src =''> in the above format.

Suppose, For the HTML code,

<script src="https://apis.google.com/_/scs/abc-static/_/js/k=gapi.gapi.en.hvE_rrhCzPE.O/m=gapi_iframes,googleapis_client/rt=j/sv=1/d=1/ed=1/rs=AHpOoo-98F2Gk-siNaIBZOtcWfXQWKdTpQ/cb=gapi.loaded_0" nonce="" async=""></script>  

The result should be

# Both Acceptable

[{'script':'https://apis.google.com/_/scs/abc-static/_/js/k=gapi.gapi.en.hvE_rrhCzPE.O/m=gapi_iframes,googleapis_client/rt=j/sv=1/d=1/ed=1/rs=AHpOoo-98F2Gk-siNaIBZOtcWfXQWKdTpQ/cb=gapi.loaded_0'}]

OR

[{'script src':'https://apis.google.com/_/scs/abc-static/_/js/k=gapi.gapi.en.hvE_rrhCzPE.O/m=gapi_iframes,googleapis_client/rt=j/sv=1/d=1/ed=1/rs=AHpOoo-98F2Gk-siNaIBZOtcWfXQWKdTpQ/cb=gapi.loaded_0'}]
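With Beautiful Soup itself, the usual pattern is `[{'script': tag['src']} for tag in soup.find_all('script', src=True)]`. As a dependency-free sketch of the same idea (using a made-up, shortened URL in place of the long one above), the stdlib parser works too:

```python
from html.parser import HTMLParser

# Hypothetical markup standing in for the question's <script> tag.
html = '<script src="https://apis.google.com/example.js" async></script>'

class ScriptSrcCollector(HTMLParser):
    """Collect every <script src="..."> as {'script': src}."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'script':
            src = dict(attrs).get('src')
            if src:  # skip inline <script> blocks with no src attribute
                self.links.append({'script': src})

parser = ScriptSrcCollector()
parser.feed(html)
print(parser.links)  # [{'script': 'https://apis.google.com/example.js'}]
```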

Firebase transaction deletes node

Posted: 18 Oct 2021 08:51 AM PDT

When a user makes an in-app purchase, I want their token count to decrement by the cost. Instead, this code just deletes the user's tokens.

(This code used to work and broke recently)

var senderRef = db.ref(`users/${sender.id}/tokens`)
var amount = 1000
var tokensCount = await (await senderRef.once('value')).val()
console.log("SENDER TOKENS: ", tokensCount, sender.id)
await senderRef.transaction(async function(currentTokens) {
    var newAmount = currentTokens - amount
    if (newAmount >= 0) {
        return newAmount
    } else {
        return
    }
}).then(async (result) => {
    var tokensCount = await (await senderRef.once('value')).val()
    console.log("SENDER TOKENS AFTER TRANSACTION: ", tokensCount, sender.id)
})

I have my data structured as shown in the screenshot.

Postgres 10 -> 13 - version mismatch after running pg_upgrade

Posted: 18 Oct 2021 08:52 AM PDT

I used the following commands to migrate from Postgres 10.3 to 13.4, on Ubuntu 18.04:

1.
pg_dumpall -p 5432 > backup.sql

2.
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
apt update
apt install -y postgresql-13

3.
su postgres
export PGDATA=/var/lib/postgresql/13/data
cd ~
/usr/lib/postgresql/13/bin/initdb --locale=en_US.UTF-8 --lc-ctype=en_US.UTF-8 --lc-collate=en_US.UTF-8 --encoding=UTF-8 -U <username>

4.
systemctl stop postgresql

5.
export PGDATAOLD=/var/lib/postgresql/10/data
export PGDATANEW=/var/lib/postgresql/13/data
export PGBINOLD=/usr/lib/postgresql/10/bin
export PGBINNEW=/usr/lib/postgresql/13/bin

/usr/lib/postgresql/13/bin/pg_upgrade --old-options '-c config_file=/etc/postgresql/10/main/postgresql.conf' --new-options '-c config_file=/etc/postgresql/13/main/postgresql.conf' --username <username> --check

/usr/lib/postgresql/13/bin/pg_upgrade --old-options '-c config_file=/etc/postgresql/10/main/postgresql.conf' --new-options '-c config_file=/etc/postgresql/13/main/postgresql.conf' --username <username>

6.
exit # go back from postgres user to root
mkdir -p /etc/postgresql/13/main
cp /etc/postgresql/10/main/pg_hba.conf /etc/postgresql/13/main/pg_hba.conf
cp /etc/postgresql/10/main/pg_ident.conf /etc/postgresql/13/main/pg_ident.conf
cp /etc/postgresql/10/main/postgresql.conf /etc/postgresql/13/main/postgresql.conf

^ change those 2 entries to the following in this file:

data_directory = '/var/lib/postgresql/13/data'
log_filename = 'postgresql-13-main.log'

then run:

chown -R postgres:postgres /etc/postgresql/13/main
systemctl start postgresql@13-main

7.
su postgres
./analyze_new_cluster.sh

8.
exit # go back from postgres user to root
systemctl enable postgresql@13-main

The upgrade went through successfully according to the pg_upgrade output. The database is running, so I executed the following to make sure I'm on 13.4:

=# select version();
                                                             version
---------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 13.4 (Ubuntu 13.4-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
(1 row)

..and made another check:

=# SHOW server_version;
          server_version
----------------------------------
 13.4 (Ubuntu 13.4-1.pgdg18.04+1)
(1 row)

Data directory points at correct location:

# SHOW data_directory;
       data_directory
-----------------------------
 /var/lib/postgresql/13/data
(1 row)

Then I checked the content of PG_VERSION:

# cat /var/lib/postgresql/13/data/PG_VERSION
10

So there's a mismatch between what Postgres detects and what PG_VERSION shows.

I also checked the timestamps of files in pg_wal folder, and they haven't been updated since the day the upgrade was done. No new files were found either.

I cloned the state of the database locally to do more tests, and when I started the database I got this output:

2021-10-18 13:26:31.009 GMT [212182]     616d7607.33cd6 FATAL:  database files are incompatible with server
2021-10-18 13:26:31.009 GMT [212182]     616d7607.33cd6 DETAIL:  The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 13.4 (Ubuntu 13.4-0ubuntu0.21.04.1).
pg_ctl: could not start server
Examine the log output.

From what I understand, this means that I can stop the currently running Postgres 13 instance, but starting it would result in the error above.

How is it possible that the data folder was initialized by Postgres 10? Wouldn't that be detected by pg_upgrade --check, and wouldn't the upgrade simply fail afterwards?

And the most important question: how can I fix the cluster so it's 'completely' on version 13? Is it possible to do it without downtime visible to the users, perhaps by using replication?

Thank you!

Why are items that do not contain the string not removed from the List?

Posted: 18 Oct 2021 08:52 AM PDT

var g = urls;
if (g.Count > 1)
{
    for (int i = g.Count - 1; i > 0; i--)
    {
        if (!g[i].Contains("Test"))
        {
            g.RemoveAt(i);
        }
    }
}

I create a copy of the List, then check each item for a specific string. There are 23 items left in g; one of them does not contain the word "Test" but was not removed. It's the first item in the list, at index 0.
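The likely culprit is the loop bound: `for (int i = g.Count - 1; i > 0; i--)` stops at index 1, so index 0 is never checked (the condition should be `i >= 0`, and the `g.Count > 1` guard is then unnecessary). The same reverse-removal pattern, sketched in Python with hypothetical data:

```python
urls = ['first item, no match', 'contains Test 1', 'no match', 'contains Test 2']

# Iterate backwards so removals don't shift the indices still to be visited.
# range(len(urls) - 1, -1, -1) includes index 0, unlike the question's i > 0.
for i in range(len(urls) - 1, -1, -1):
    if 'Test' not in urls[i]:
        del urls[i]

print(urls)  # ['contains Test 1', 'contains Test 2']
```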

Define a function iteratively (SGD)

Posted: 18 Oct 2021 08:51 AM PDT

I'm trying to implement the following SGD algorithm for functions:

1: Input: T > 0, r > 0
2: g_(0)(.) = 0
3: for t < T:
       take a sample (x_t, y_t)
       g_(t+1)(.) = g_(t)(.) - r*(g_(t)(x_t) - y_t)*k(x_t, .)

where k is some kernel function. Does anyone know how to implement the iterative functions g_t in a smart way in Python?
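One way to avoid nesting closures t levels deep is to note that g_t is always a weighted sum of kernels centred at the samples seen so far, so it suffices to store centres and coefficients. A sketch, with a Gaussian kernel chosen arbitrarily:

```python
import math

def gaussian_kernel(x, y, bandwidth=1.0):
    # Arbitrary kernel choice for the sketch; any k(x, y) works.
    return math.exp(-((x - y) ** 2) / (2 * bandwidth ** 2))

def sgd_kernel(samples, r, k=gaussian_kernel):
    """Run g_(t+1) = g_t - r*(g_t(x_t) - y_t)*k(x_t, .) over the samples
    and return the final function g_T as a single closure."""
    centres, coeffs = [], []

    def g(x):
        # g_t evaluated lazily from the stored expansion.
        return sum(c * k(cx, x) for c, cx in zip(coeffs, centres))

    for x_t, y_t in samples:
        residual = g(x_t) - y_t        # g_t(x_t) - y_t
        centres.append(x_t)            # the update adds -r*residual*k(x_t, .)
        coeffs.append(-r * residual)
    return g

g = sgd_kernel([(0.0, 1.0), (0.5, 0.5)], r=0.5)
print(g(0.0))
```

Storing the expansion keeps each update O(1) and each evaluation O(t), instead of O(2^t) for naively nested closures.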

How to compare two values from input text?

Posted: 18 Oct 2021 08:52 AM PDT

What's wrong with my code? The 'good' alert never comes up, even if I enter the right answer, 0414. Only the 'Try again' alert comes up.

var answer = String(document.getElementById('Properdate').value);
var rightanswer = '0414';

<input id="Properdate" type="text">

<input type="submit" value="submit" onclick="if(answer === rightanswer){ alert('good'); } else { alert('Try again'); }">

How to decrease the line spacing in this python program?

Posted: 18 Oct 2021 08:52 AM PDT

I was learning nested loops and ran into this problem: there is extra line spacing between each line of x's.

numbers = [2, 2, 2, 2, 7, 7]

for i in numbers:
    for j in range(0, i):
        print("x", end='')
    print('\n')

Following is the output of my code:

xx

xx

xx

xx

xxxxxxx

xxxxxxx

What changes should I make in my code so that there isn't an additional blank line between each line of x's?
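The extra blank line comes from print('\n'): print already appends a newline, so passing '\n' emits two. The fix is to call print() with no argument after the inner loop; a sketch that also records the rows for clarity:

```python
numbers = [2, 2, 2, 2, 7, 7]

rows = []
for i in numbers:
    row = "x" * i   # same result as the inner loop printing one "x" at a time
    rows.append(row)
    print(row)      # print() ends with exactly one newline; print('\n') ended with two
```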

Django QueryDict How to ensure that the "plus" does not disappear in the QueryDict?

Posted: 18 Oct 2021 08:52 AM PDT

How to ensure that the "plus" does not disappear in the QueryDict?

I am trying to parse the received get-query into a dict:

from urllib.parse import quote_plus

my_non_safe_string = "test=1+1"  # This example; the string can be anything (in GET query format)
QueryDict(my_non_safe_string)
out: <QueryDict: {'test': ['1 1']}>

my_safe_string = quote_plus("test=1+1")  # 'test%3D1%2B1'
out: <QueryDict: {'test=1+1': ['']}>

I would like to get the following result:

<QueryDict: {'test=1+1': ['1+1']}>  
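Django's QueryDict follows the same decoding rules as the stdlib query-string parsers, so the behaviour can be sketched with urllib.parse alone: a raw + in a query string is the encoding of a space, and a literal plus has to travel as %2B. The key point is to quote only the value, not the whole "key=value" pair:

```python
from urllib.parse import parse_qs, quote_plus, urlencode

# Quoting the value alone keeps '=' as the separator and turns '+' into %2B.
query = "test=" + quote_plus("1+1")
print(query)             # test=1%2B1
print(parse_qs(query))   # {'test': ['1+1']}  (QueryDict decodes the same way)

# urlencode does the per-value quoting for you:
print(urlencode({"test": "1+1"}))  # test=1%2B1
```

Quoting the whole string, as in the question, escapes the '=' too, which is why the entire "test=1+1" ended up as the key.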

forkJoin but next observable depends on the one before

Posted: 18 Oct 2021 08:51 AM PDT

https://www.learnrxjs.io/learn-rxjs/operators/combination/forkjoin

const example = forkJoin({
  // emit 'Hello' immediately
  sourceOne: of('Hello'),
  // emit 'World' after 1 second
  sourceTwo: of('World').pipe(delay(1000)),
  // throw error
  sourceThree: throwError('This will error')
}).pipe(catchError(error => of(error)));

// output: 'This will Error'
const subscribe = example.subscribe(val => console.log(val));

That's the basic implementation, but in my case, to call sourceTwo I need to use the data from sourceOne, and the same applies to sourceThree. Each call needs the previous observable's data in order to fetch its own.

I only care about the final result; I don't need to merge anything, just do what this example does and show what sourceThree returns.

Convert arbitrary string values to timestamps SQL

Posted: 18 Oct 2021 08:51 AM PDT

I am wondering if there is a way to convert arbitrary string values (such as the examples below) to something that can be interpreted as a timestamp, perhaps in days.

Dropdown_values         Desired Output (days)
12 weeks                84
1 Week 4 Days           11
1 Year                  365
1 Year 1 Week 2 Days    374

The idea I had was to split out the values, since they are all separated by spaces, and then do the addition in a separate column. Are there other (better) ways to do this? Thank you.
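Splitting the string and summing per-unit contributions is the natural approach; here is that parsing logic sketched in Python rather than SQL (the 365/7/1 conversion factors are assumptions matching the table above):

```python
import re

# Assumed conversion factors: 1 year = 365 days, 1 week = 7 days.
UNIT_DAYS = {'year': 365, 'week': 7, 'day': 1}

def to_days(text):
    """Convert strings like '1 Year 1 Week 2 Days' to a day count."""
    total = 0
    # Find every "<number> <unit>" pair, case-insensitively, plural or not.
    for count, unit in re.findall(r'(\d+)\s*(year|week|day)s?', text, re.I):
        total += int(count) * UNIT_DAYS[unit.lower()]
    return total

print(to_days('12 weeks'))              # 84
print(to_days('1 Week 4 Days'))         # 11
print(to_days('1 Year 1 Week 2 Days'))  # 374
```

In SQL the same idea would use the dialect's string-splitting function plus a CASE expression per unit, but the per-pair multiply-and-sum structure is identical.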

How to use dl.Overlay with multiple inputs?

Posted: 18 Oct 2021 08:52 AM PDT

I tried to apply dl.Overlay to multiple inputs (markers and circles), but it shows me an overlay for each input separately.


In the end I want a single overlay for all the markers and the circles around them. Any suggestions? Here's the code I implemented.

import dash
import dash_leaflet as dl

app = dash.Dash()

app.layout = dl.Map(
    [
        dl.LayersControl([
            dl.Overlay([
                dl.CircleMarker(center=(48.0073849, 0.1967849), radius=3, color='red'),
                dl.Circle(center=(48.0073849, 0.1967849), radius=20000),
                dl.CircleMarker(center=(46.0073849, 0.1867849), radius=3, color='red'),
                dl.Circle(center=(46.0073849, 0.1867849), radius=20000),
            ], name='Exemples', checked=True)
        ]),
        dl.TileLayer(),
    ],
    style={'width': '1000px', 'height': '500px'},
    center=[46.232192999999995, 2.209666999999996],
    zoom=5
)

if __name__ == '__main__':
    app.run_server()

Export single item to csv Django

Posted: 18 Oct 2021 08:52 AM PDT

I have a model called leads and am trying to export a single lead from my database. Currently I am only able to export all of the leads.

Model.py

class Lead(models.Model):
    transfer_date = models.DateField(blank=True, null=True)
    callback_date = models.DateField(blank=True, null=True)
    contact_date = models.DateField(blank=True, null=True)
    first_name = models.CharField(max_length=20)
    last_name = models.CharField(max_length=20)
    address = models.CharField(default=0, max_length=50)
    city = models.CharField(max_length=30, default="")
    state = models.CharField(max_length=20, default="")
    zipcode = models.IntegerField(default=0)
    phone = models.CharField(max_length=10, null=True, default="", blank=True)
    cell = models.CharField(max_length=10, null=True, default="", blank=True)
    email = models.EmailField(default="")

    def __str__(self):
        return f"{self.first_name} {self.last_name}"

Views.py

def export(request):
    response = HttpResponse(content_type='text/csv')
    writer = csv.writer(response)
    writer.writerow(['First Name', 'Last Name', 'Email'])

    for lead in Lead.objects.all().values_list('first_name', 'last_name', 'email'):
        writer.writerow(lead)

    response['Content-Disposition'] = 'attachment; filename="Lead.csv"'

    return response

Select value from comma separated values in cell based on previous and next values

Posted: 18 Oct 2021 08:52 AM PDT

I have a large database, a subset of which looks like this

ID    year        value1      value2
 1    2000   203,305,701     1, 2, 1
 1    2001       203,504        1, 1
 1    2002           203           1
 2    2010           245           3
 2    2011       245,332        2, 1
 2    2012           332           3
 2    2013           332           2
 2    2014       245,332        2, 1

Reproducible code:

structure(list(
  ID = c("1", "1", "1", "2", "2", "2", "2", "2"),
  year = c("2000", "2001", "2002", "2010", "2011", "2012", "2013", "2014"),
  value1 = c("203, 305, 701", "203, 504", "203", "245", "245, 332", "332", "332", "245, 332"),
  value2 = c("1, 2, 1", "1, 1", "1", "3", "2, 1", "3", "2", "2, 1")),
  class = "data.frame", row.names = c(NA, -8L))

"value1" and "value2" contain comma-separated values. The objective is to simplify the "value1" column to a single value. The algorithm I've thought out goes like this:

  1. Check for previous and next values for each row while grouping by ID and year (taking intersections: i.e. the common value in two consecutive rows). For example, for row 5: The intersection of {245, 332} with the previous row {245} for value1 is 245, while with the next row {332} it is 332
  2. Prefer next value over previous value for selection. I want to prioritise the next value i.e. {332} in this split decision.
  3. If either intersection does not narrow down to a single value, select value1 based on max(value2). If value2 does not have a maximum, select randomly. The third step does not come into play since a single value is selected based on the first two steps.

The algorithm continues to the next row as soon as a single value is reached. Previous and next refers to the preceding and the following row respectively.

Similarly, for row 1: The intersection is 203 with only the next row, as we stopped the algorithm as soon as we arrived at a single value.

The final data should look like this

ID    year        value1      value2
 1    2000           203     1, 2, 1
 1    2001           203        1, 1
 1    2002           203           1
 2    2010           245           3
 2    2011           332        2, 1
 2    2012           332           3
 2    2013           332           2
 2    2014           332        2, 1

I tried writing basic R code to loop over each row, grouping by "ID" and "year", and handle each case individually, since I have no idea which package to use for this; but it seems to me that this might not be the most efficient method. (I am also very new to R.)

How to obtain the imp loss per account instead of per campaign in Google ads API?

Posted: 18 Oct 2021 08:52 AM PDT

We are thinking of obtaining the impression loss per account instead of per campaign in the Google Ads API.

In the process, we've found that we cannot obtain the impression loss per account with a 'SELECT ... FROM' query, and we would like your insight on how to achieve this and which API method is recommended.

MongoDB not running on macOS Big Sur

Posted: 18 Oct 2021 08:51 AM PDT

When I try to run MongoDB via brew with

"brew services start mongodb/brew/mongodb-community"

I get the following error message :

"Error: Operation not permitted @ apply2files - /Users/username/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist".

MongoDB is installed correctly, but it seems it is not allowed to run on my MacBook M1. Any idea why? Thanks in advance.

Flux toIterable not as lazy as stated in the doc

Posted: 18 Oct 2021 08:51 AM PDT

I'm working on a school project using Reactor and am running into some issues with Flux.

On the upstream, I have a Flux that reads from the database and generates rows for the downstream processor to use. For example:

public Flux<Row> emitRow(...)
{
   return Flux.create(cursor -> {
      ... logic to read and check db ...
      emitRow(cursor, row);
      cursor.complete()
   });
}

Reading everything into memory is not ideal, since our sandbox environment has a limited amount of memory. So what I have set up is: every time we read a row, we process it before fetching the next row.

Initially, I had something like this set up and it seemed to work; however, my instructor said we need to use toIterable() instead.

public Mono<Outcome> process(Flux<Row> data)
{
    return data
        .flatMap(row -> processRow(row), SINGLE_THREAD, SINGLE_THREAD)
        .map(results -> new Outcome(results));
}

So right now I'm trying to make it work with toIterable(), but I'm having trouble. The code below is what I have in mind, but it is not behaving as stated in the JavaDoc. It looks like there are two queues involved: the first is the iterable iterator's blocking queue, and the second comes from the original Flux.create. The second queue (from Flux.create) seems to buffer everything into memory, causing an out-of-memory exception in my application.

public Mono<Outcome> process(Flux<Row> data)
{
    Iterator<Row> itr = data.toIterable(1).iterator();
    return Mono.fromCallable(() -> processRow(itr))
            .map(results -> new Outcome(results));
}

public Results processRow(Iterator<Row> itr)
{
   while (itr.hasNext())
   {
      Row r = itr.next();
      dbContentBuilder.handle(r);
   }
   return new Results(dbContentBuilder.build());
}

Any idea why, or how to make it work with toIterable()? The documentation seems to suggest that this is a lazy queue that blocks when you ask for .next().

Displaying many high-resolution images on an HTML canvas (map tiling)

Posted: 18 Oct 2021 08:51 AM PDT

I'm using three.js to display a globe. At first, the image is low-quality, and as a user zooms in, the images become higher quality. This is done using tiling. Each tile is 256px x 256px. For the lowest zoom, there are only a couple tiles, and for the largest, there are thousands.

The issue is that the images are still low quality, even at the highest zoom. I think this is because of the canvas I'm using. It's 2000px x 1000px. Even if I increase this canvas, the image at its highest quality is 92160px x 46080px, which is too large of a canvas to render in most browsers.

What approach can I use to display tiles at high quality, but not have a huge canvas? Is using a canvas the right approach? Thanks!

Cumulative count of different strings in a column based on value of another column

Posted: 18 Oct 2021 08:52 AM PDT

I've got a df that looks like this, with duplicate IDs:

     ID    Usage_type
0     5    Note
1     6    Note
2     7    Service
3     5    Note
4     7    Note
5     10   Service

I want an extra two columns that indicate the cumulative count of usage_type for each ID like so:

     ID    Usage_type   type_Note    type_Service
0     5    Note         1            0
1     6    Note         1            0
2     7    Service      0            1
3     5    Note         2            0
4     7    Note         1            1
5     10   Service      0            1

I've used a cumulative count to get the total count of Usage_type for each ID, but I want to break it down further into separate counts for each string.

The screenshot below shows the current counts for an example ID.
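A sketch of one pandas approach, assuming the goal is a running per-ID count of each type: flag the rows of each type as 0/1 and take a cumulative sum within each ID, so the cumulative sum is exactly the cumulative count so far.

```python
import pandas as pd

df = pd.DataFrame({'ID': [5, 6, 7, 5, 7, 10],
                   'Usage_type': ['Note', 'Note', 'Service',
                                  'Note', 'Note', 'Service']})

# One new column per distinct type: flag matching rows, then cumsum per ID.
for usage in df['Usage_type'].unique():
    df[f'type_{usage}'] = (df['Usage_type'].eq(usage)
                           .astype(int)
                           .groupby(df['ID'])
                           .cumsum())

print(df)
```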

coveragePathIgnorePatterns - ignore files with specific ending

Posted: 18 Oct 2021 08:51 AM PDT

Jest: I am trying to ignore all files that end with .stories.tsx, for example SomeFileName.stories.tsx. I added *.stories.tsx to coveragePathIgnorePatterns in my package.json, like so:

"jest": {      "coveragePathIgnorePatterns": [          ...          "*.stories.tsx"      ]  }  

Unfortunately, running tests will throw the following error for all my tests:

● Test suite failed to run

SyntaxError: Invalid regular expression: /*.stories.tsx/: Nothing to repeat
    at String.match (<anonymous>)
    ...

What do I need to add inside coveragePathIgnorePatterns to make this work?
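coveragePathIgnorePatterns entries are regular expressions, not globs, so a leading `*` has nothing to repeat; a pattern along the lines of `"\\.stories\\.tsx$"` should work. The difference, demonstrated with Python's re engine (JS regexes behave the same way for this case):

```python
import re

# '*.stories.tsx' is a glob, not a regex: the leading '*' has nothing to repeat.
try:
    re.compile('*.stories.tsx')
except re.error as exc:
    print('invalid:', exc)

# As a regex, escape the dots and anchor the suffix instead.
pattern = re.compile(r'\.stories\.tsx$')
print(bool(pattern.search('src/SomeFileName.stories.tsx')))  # True
print(bool(pattern.search('src/SomeFileName.test.tsx')))     # False
```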

Next.js Fetch data in HOC from server in SSG

Posted: 18 Oct 2021 08:51 AM PDT

I created a new app with Next.js 9.3.1.

In the old app with SSR, I could use the getInitialProps function in HOC components (not in the page), so I could fetch data from the server in both the HOC component and the page. Like this: https://gist.github.com/whoisryosuke/d034d3eaa0556e86349fb2634788a7a1

Example :

export default function withLayout(ComposedComponent) {
  return class WithLayout extends Component {
    static async getInitialProps(ctx) {
      console.log("ctxlayout fire");
      const { reduxStore, req } = ctx || {};
      const isServer = !!req;
      reduxStore.dispatch(actions.serverRenderClock(isServer));

      if (isServer)
        await reduxStore.dispatch(navigationActions.getListMenuAction("menu"));
      // Check if Page has a `getInitialProps`; if so, call it.
      const pageProps =
        ComposedComponent.getInitialProps &&
        (await ComposedComponent.getInitialProps(ctx));
      // Return props.
      return { ...pageProps };
    }

    render() {
      return (
        <div className="app__container">
          <Header />
          <Navbar />
          <ComposedComponent {...this.props} />
        </div>
      );
    }
  };
}

But in the new version of Next.js, with SSG, I can't find a way to use getStaticProps or getServerSideProps in HOC components. If I use getInitialProps in the HOC (layout), I won't be able to use getStaticProps or getServerSideProps in the child.

So, how can I use getStaticProps or getServerSideProps to fetch data and pre-render in both HOC component and page?

Thanks.

How to enable MySQL with PHP 7

Posted: 18 Oct 2021 08:52 AM PDT

I know that this is deprecated and that MySQLi and PDO are the alternatives. But I have developed a CMS in which I am still using the mysql_* functions, and it will take weeks to change all the queries. So is there any solution that lets me use them with PHP 7 now, or is it impossible?

mysql_connect()
mysql_select_db()

etc

Custom Http Status Code in Spring

Posted: 18 Oct 2021 08:52 AM PDT

I am using Spring Boot, and I use exception classes throughout my business logic code. One might look like this:

@ResponseStatus(HttpStatus.BAD_REQUEST)
public class ExternalDependencyException extends RuntimeException {

    public ExternalDependencyException() {
        super("External Dependency Failed");
    }

    public ExternalDependencyException(String message) {
        super(message);
    }

}

Well, now there are exceptions for which no predefined HTTP status code fits, so I would like to use a status code like 460 or similar, which is still free, but the @ResponseStatus annotation only accepts values from the HttpStatus enum. Is there a way to build an exception class with a custom status code in the Spring Boot environment?

UIAlertController sometimes prevents UIRefreshControl to hide

Posted: 18 Oct 2021 08:51 AM PDT

I'm using UIRefreshControl on my table view to update items. At the end, I show a UIAlertController to inform the user that the update is complete and how many items were updated. Pretty straightforward, and it works fine for one refresh. But if I pull to refresh several times in a row, sometimes the refresh control is not dismissed, even after dismissing the alert. I need to swipe up the table to make it go away.

This is the code I use; all UI work is done on the main thread:

if (refreshControl.refreshing) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self refreshItems];

        dispatch_async(dispatch_get_main_queue(), ^{
            [refreshControl endRefreshing];
            [self.tableView reloadData];
            [self showUpdateInfo];
        });
    });
}

Any idea what could cause this?

EDIT: This is how I create the refresh control in code (in viewDidLoad):

UIRefreshControl *refreshControl = [[UIRefreshControl alloc] init];
refreshControl.attributedTitle = [[NSAttributedString alloc] initWithString:@"Checking for updates…"];
[refreshControl addTarget:self
                   action:@selector(refreshOutdatedItems)
         forControlEvents:UIControlEventValueChanged];

self.refreshControl = refreshControl;
