Fetch API headers are not correctly received after GET request Posted: 12 Jul 2021 09:47 AM PDT I am using the Fetch API to download a file and the code to do that looks as follows: const response = fetch(`{% url 'export_xls' %}?${urlParams}`,{ method: 'GET', mode: 'cors', headers: { "X-CSRFToken": '{{ csrf_token }}', } }) .then(res => res.blob()) .then(blob => { let file = window.URL.createObjectURL(blob); window.location.assign(file); }) This does result in a downloaded file, however, the name of the file is randomized, instead of using the filename given in the Content-Disposition header. The headers, which I could see in the chrome dev tools are only the Content-Length and Content-Type , however, when I console log the headers of the response, the Content-Disposition header is there. The django view, which is being called with the fetch API sets the header, so I do not understand why the filename is not correct: response = FileResponse(buffer, as_attachment=True, filename="foo.xlsx") response['Content-Type'] = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' response['Content-Disposition'] = 'attachment; filename="foo.xlsx' response['Access-Control-Expose-Headers'] = 'Content-Disposition' # I read it could be a CORS problem and this could solve it, however, no luck return response After some googling, I found multiple threads where people recommend using an a tag element and setting the download attribute, however, as the Content-Disposition header is set correctly, I would expect to have the file with the proper filename, without doing any additional work in the JS code.  |
Need the largest negative number from a list in Python Posted: 12 Jul 2021 09:46 AM PDT The assignment is: - Create an empty list called temperatures. - Allow the user to input a series of temperatures along with a sentinel value. (do not use a number for a sentinel value) which will stop the user input. - Evaluate the temperature list to determine the largest and smallest temperature. - Print the largest temperature. - Print the smallest temperature. - Print a message that tells the user how many temperatures are in the list.
The issue I am having is that if my list contains [-11, -44, -77] my program prints -11 as the lowest temperature. But I need it to print -77. Below is my code: # Create a list called temperatures to capture user input temperatures_list = [] # Create a while loop to capture user input into the temperatures_list if __name__ == "__main__": while True: enter_number = (input("Please enter a temperature (enter stop to end): ")) if enter_number == "stop": break temperatures_list.append(enter_number) lowest_temp = min(temperatures_list) highest_temp = max(temperatures_list) total_temps = len(temperatures_list) # Print output of temperatures input by user print(f"The numbers you input are:", temperatures_list) print(f"The lowest temperature you input is: ", lowest_temp) print(f"The highest temperature you input is: ", highest_temp) print(f"There are a total of {total_temps} temperatures in your list")  |
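A likely cause, not confirmed by the asker, is that input() returns strings, so min() and max() compare the entries lexicographically ("-11" sorts before "-77") instead of numerically. A minimal sketch of the numeric version, assuming every entry other than the sentinel can be parsed as a float:

# Sketch: store numbers instead of strings so min()/max() compare numerically.
temperatures_list = []

while True:
    entry = input("Please enter a temperature (enter stop to end): ")
    if entry == "stop":
        break
    temperatures_list.append(float(entry))  # assumes the input is a valid number

if temperatures_list:
    print("The numbers you input are:", temperatures_list)
    print("The lowest temperature you input is:", min(temperatures_list))
    print("The highest temperature you input is:", max(temperatures_list))
    print(f"There are a total of {len(temperatures_list)} temperatures in your list")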
OCR returns "--“--mode=client --port=52085” and crashes Posted: 12 Jul 2021 09:46 AM PDT I am trying to run a OCR package known as Paddle OCR but even after successful installation of package I get following error after running the code and do not know how to resolve this :- from paddleocr import PaddleOCR ocr = PaddleOCR(lang="korean") Error :- usage: pydevconsole.py [-h] [--use_gpu USE_GPU] [--ir_optim IR_OPTIM] [--use_tensorrt USE_TENSORRT] [--gpu_mem GPU_MEM] [--image_dir IMAGE_DIR] [--det_algorithm DET_ALGORITHM] [--det_model_dir DET_MODEL_DIR] [--det_max_side_len DET_MAX_SIDE_LEN] [--det_db_thresh DET_DB_THRESH] [--det_db_box_thresh DET_DB_BOX_THRESH] [--det_db_unclip_ratio DET_DB_UNCLIP_RATIO] [--det_east_score_thresh DET_EAST_SCORE_THRESH] [--det_east_cover_thresh DET_EAST_COVER_THRESH] [--det_east_nms_thresh DET_EAST_NMS_THRESH] [--rec_algorithm REC_ALGORITHM] [--rec_model_dir REC_MODEL_DIR] [--rec_image_shape REC_IMAGE_SHAPE] [--rec_char_type REC_CHAR_TYPE] [--rec_batch_num REC_BATCH_NUM] [--max_text_length MAX_TEXT_LENGTH] [--rec_char_dict_path REC_CHAR_DICT_PATH] [--use_space_char USE_SPACE_CHAR] [--cls_model_dir CLS_MODEL_DIR] [--cls_image_shape CLS_IMAGE_SHAPE] [--label_list LABEL_LIST] [--cls_batch_num CLS_BATCH_NUM] [--cls_thresh CLS_THRESH] [--enable_mkldnn ENABLE_MKLDNN] [--use_zero_copy_run USE_ZERO_COPY_RUN] [--use_pdserving USE_PDSERVING] [--lang LANG] [--det DET] [--rec REC] [--use_angle_cls USE_ANGLE_CLS] pydevconsole.py: error: unrecognized arguments: --mode=client --port=61156 Process finished with exit code 2 anyone has an idea how to move forward to resolve the error ?  |
Reading firebase storage image security rules Posted: 12 Jul 2021 09:46 AM PDT I am using Firebase Storage and Firestore with Flutter, and I came across two options to retrieve a Firebase Storage image: setting the Firebase Storage image URL in the Firestore database and then fetching it with a network image, or getting the image URL from Firebase Storage directly. I don't know much about tokens. My security rules state that only authenticated users can read my Firebase Storage, but if I use the first option my image URL with its token is stored in my Firestore database, and using that URL anyone can access my storage. I am also not sure whether Firebase refreshes its storage token automatically; if it does, my app will crash. Which is the most secure and long-lasting way, or is there any other secure way to fetch images?  |
Query Data Type Mismatch in Criteria Expression Access Posted: 12 Jul 2021 09:46 AM PDT I am trying to filter out all Names that do not match in my database using Same?: StrComp(Str1, Str2). When I run the field in my Query, my columns have 0's for matching, plus 1, -1, and even #Error because one side of the compare is empty.
Same?  | Our DataBase Name | Other DataBase Name | ID
0      | Aaron B           | AARON B             | 00002
1      | Aaron P.          | AARON J M P         | 00003
#Error | Ainsley W         | #Error              | 00004
So I tried Not = 0 in the Criteria section in design view to get all entries that are not the same, but I get the error message Data Type Mismatch. I then tried Iff(StrComp(Str1, Str2)=0,"Yes","No") with "No" in the Criteria section. The same error came up. Any thoughts?  |
Replacing the -(hyphen) with _(underscore) only in strings and not in numbers in shell script Posted: 12 Jul 2021 09:47 AM PDT I have this JSON file { "name": "John", "emp-id": 1, "age": 30, "balance-amount-1": "-10000.80", "balance-amount-2": "0", "salary-amount": "20000", "Total" : "9999.2" } which needs to be processed, where I need to replace the keys "emp-id", "balance-amount-1", "balance-amount-2", "salary-amount" with "emp_id", "balance_amount_1", "balance_amount_2", "salary_amount", leaving out "-10000.00", which is a negative number. Also remove the double quotes around numbers only, not strings (e.g. replace "-10000.80" with -10000.80 ). Final JSON file { "name": "John", "emp_id": 1, "age": 30, "balance_amount_1": -10000.00, "balance_amount_2": 0, "salary_amount": 20000, "Total": 9999.2 } I tried a few things using sed but could not arrive at a solution that works for all the scenarios.  |
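sed alone is awkward here because the same hyphen character appears both in key names and in negative numbers. If a Python interpreter happens to be available (an assumption, since the question asks for a shell script), one sketch is to let the json module do the work; the file names input.json and output.json are placeholders:

import json

with open("input.json") as fh:          # placeholder input path
    data = json.load(fh)

def to_number_if_possible(value):
    # "-10000.80" -> -10000.8, "0" -> 0, "John" stays a string.
    if isinstance(value, str):
        try:
            number = float(value)
            return int(number) if number.is_integer() else number
        except ValueError:
            return value
    return value

converted = {key.replace("-", "_"): to_number_if_possible(val) for key, val in data.items()}

with open("output.json", "w") as fh:    # placeholder output path
    json.dump(converted, fh, indent=4)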
std::vector.emplace_back() and std::vector.push_back() [duplicate] Posted: 12 Jul 2021 09:46 AM PDT Is there a difference between the two? If so, when and where should I use them? I'm unsure which one I should use in my code to add another element of a class type, so that each element is an object.  |
How can I group a DataFrame into columns? Posted: 12 Jul 2021 09:46 AM PDT Consider a dataframe:
   timestamp            value
0  2019-07-12 18:00:00   8.46
1  2019-07-13 06:00:00  12.02
2  2019-07-13 18:00:00  15.58
3  2019-07-14 06:00:00  16.29
4  2019-07-14 18:00:00  17.00
I want to transform it into:
   timestamp     X1     X2
0  2019-07-12   8.46    NaN
1  2019-07-13  12.02  15.58
2  2019-07-14  16.29  17.00
How can this be done? I tried pd.groupby with Grouper and then doing a for loop like below: for ix, i in resampled_df.groupby(pd.Grouper(key='timestamp', freq="1D")): print(i.head()) No luck!  |
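One hedged way to get this shape (a sketch, not tested against the asker's full data) is to number the rows within each calendar day with cumcount and pivot those positions into the X1/X2 columns:

import pandas as pd

# df has the 'timestamp' (datetime) and 'value' columns from the question.
df["timestamp"] = pd.to_datetime(df["timestamp"])
df["date"] = df["timestamp"].dt.date
df["col"] = "X" + (df.groupby("date").cumcount() + 1).astype(str)

out = (
    df.pivot(index="date", columns="col", values="value")
      .reset_index()
      .rename(columns={"date": "timestamp"})
      .rename_axis(columns=None)
)
print(out)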
RuntimeError: all elements of input should be between 0 and 1 Posted: 12 Jul 2021 09:46 AM PDT I want to use an RNN with bilstm layers using pytorch on protein embeddings. It worked with Linear Layer but when i use Bilstm i have a Runtime error. Sorry if its not clear its my first publication and i will be grateful if someone can help me. from collections import Counter, OrderedDict from typing import Optional import numpy as np import pytorch_lightning as pl import torch import torch.nn.functional as F # noqa from deepchain import log from sklearn.model_selection import train_test_split from sklearn.utils.class_weight import compute_class_weight from torch import Tensor, nn num_layers=2 hidden_size=256 from torch.utils.data import DataLoader, TensorDataset def classification_dataloader_from_numpy( x: np.ndarray, y: np.array, batch_size: int = 32 ) -> DataLoader: """Build a dataloader from numpy for classification problem This dataloader is use only for classification. It detects automatically the class of the problem (binary or multiclass classification) Args: x (np.ndarray): [description] y (np.array): [description] batch_size (int, optional): [description]. Defaults to None. Returns: DataLoader: [description] """ n_class: int = len(np.unique(y)) if n_class > 2: log.info("This is a classification problem with %s classes", n_class) else: log.info("This is a binary classification problem") # y is float for binary classification, int for multiclass y_tensor = torch.tensor(y).long() if len(np.unique(y)) > 2 else torch.tensor(y).float() tensor_set = TensorDataset(torch.tensor(x).float(), y_tensor) loader = DataLoader(tensor_set, batch_size=batch_size) return loader class RNN(pl.LightningModule): """A `pytorch` based deep learning model""" def __init__(self, input_shape: int, n_class: int, num_layers, n_neurons: int = 128, lr: float = 1e-3): super(RNN,self).__init__() self.lr = lr self.n_neurons=n_neurons self.num_layers=num_layers self.input_shape = input_shape self.output_shape = 1 if n_class <= 2 else n_class self.activation = nn.Sigmoid() if n_class <= 2 else nn.Softmax(dim=-1) self.lstm = nn.LSTM(self.input_shape, self.n_neurons, num_layers, batch_first=True, bidirectional=True) self.fc= nn.Linear(self.n_neurons, self.output_shape) def forward(self, x): h0=torch.zeros(self.num_layers, x_size(0), self.n_neurons).to(device) c0=torch.zeros(self.num_layers, x_size(0), self.n_neurons).to(device) out, _=self.lstm(x,(h0, c0)) out=self.fc(out[:, -1, :]) return self.fc(x) def training_step(self, batch, batch_idx): """training_step defined the train loop. It is independent of forward""" x, y = batch y_hat = self.fc(x).squeeze() y = y.squeeze() if self.output_shape > 1: y_hat = torch.log(y_hat) loss = self.loss(y_hat, y) self.log("train_loss", loss, on_epoch=True, on_step=False) return {"loss": loss} def validation_step(self, batch, batch_idx): """training_step defined the train loop. It is independent of forward""" x, y = batch y_hat = self.fc(x).squeeze() y = y.squeeze() if self.output_shape > 1: y_hat = torch.log(y_hat) loss = self.loss(y_hat, y) self.log("val_loss", loss, on_epoch=True, on_step=False) return {"val_loss": loss} def configure_optimizers(self): """(Optional) Configure training optimizers.""" return torch.optim.Adam(self.parameters(),lr=self.lr) def compute_class_weight(self, y: np.array, n_class: int): """Compute class weight for binary/multiple classification If n_class=2, only compute weights for the positve class. If n>2, compute for all classes. 
Args: y ([np.array]):vector of int represented the class n_class (int) : number fo class to use """ if n_class == 2: class_count: typing.Counter = Counter(y) cond_binary = (0 in class_count) and (1 in class_count) assert cond_binary, "Must have O and 1 class for binary classification" weight = class_count[0] / class_count[1] else: weight = compute_class_weight(class_weight="balanced", classes=np.unique(y), y=y) return torch.tensor(weight).float() def fit( self, x: np.ndarray, y: np.array, epochs: int = 10, batch_size: int = 32, class_weight: Optional[str] = None, validation_data: bool = True, **kwargs ): assert isinstance(x, np.ndarray), "X should be a numpy array" assert isinstance(y, np.ndarray), "y should be a numpy array" assert class_weight in ( None, "balanced", ), "the only choice available for class_weight is 'balanced'" n_class = len(np.unique(y)) weight = None self.input_shape = x.shape[1] self.output_shape = 1 if n_class <= 2 else n_class self.activation = nn.Sigmoid() if n_class <= 2 else nn.Softmax(dim=-1) if class_weight == "balanced": weight = self.compute_class_weight(y, n_class) self.loss = nn.NLLLoss(weight) if self.output_shape > 1 else nn.BCELoss(weight) if validation_data: x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.2) train_loader = classification_dataloader_from_numpy( x_train, y_train, batch_size=batch_size ) val_loader = classification_dataloader_from_numpy(x_val, y_val, batch_size=batch_size) else: train_loader = classification_dataloader_from_numpy(x, y, batch_size=batch_size) val_loader = None self.trainer = pl.Trainer(max_epochs=epochs, **kwargs) self.trainer.fit(self, train_loader, val_loader) def predict(self, x): """Run inference on data.""" if self.output_shape is None: log.warning("Model is not fitted. Can't do predict") return return self.forward(x).detach().numpy() def save(self, path: str): """Save the state dict model with torch""" torch.save(self.fc.state_dict(), path) log.info("Save state_dict parameters in model.pt") def load_state_dict(self, state_dict: "OrderedDict[str, Tensor]", strict: bool = False): """Load state_dict saved parameters Args: state_dict (OrderedDict[str, Tensor]): state_dict tensor strict (bool, optional): [description]. Defaults to False. 
""" self.fc.load_state_dict(state_dict, strict=strict) self.fc.eval() mlp = RNN(input_shape=1024, n_neurons=1024, num_layers=2, n_class=2) mlp.fit(embeddings_train, np.array(y_train),validation_data=(embeddings_test, np.array(y_test)), epochs=30) mlp.save("model.pt") **Error 1** --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-154-e5fde11a675c> in <module> 1 # init MLP model, train it on the data, then save model 2 mlp = RNN(input_shape=1024, n_neurons=1024, num_layers=2, n_class=2) ----> 3 mlp.fit(embeddings_train, np.array(y_train),validation_data=(embeddings_test, np.array(y_test)), epochs=30) 4 mlp.save("model.pt") <ipython-input-153-a8d51af53bb5> in fit(self, x, y, epochs, batch_size, class_weight, validation_data, **kwargs) 134 val_loader = None 135 self.trainer = pl.Trainer(max_epochs=epochs, **kwargs) --> 136 self.trainer.fit(self, train_loader, val_loader) 137 def predict(self, x): 138 """Run inference on data.""" /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule) 456 ) 457 --> 458 self._run(model) 459 460 assert self.state.stopped /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in _run(self, model) 754 755 # dispatch `start_training` or `start_evaluating` or `start_predicting` --> 756 self.dispatch() 757 758 # plugin will finalized fitting (e.g. ddp_spawn will load trained model) /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in dispatch(self) 795 self.accelerator.start_predicting(self) 796 else: --> 797 self.accelerator.start_training(self) 798 799 def run_stage(self): /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py in start_training(self, trainer) 94 95 def start_training(self, trainer: 'pl.Trainer') -> None: ---> 96 self.training_type_plugin.start_training(trainer) 97 98 def start_evaluating(self, trainer: 'pl.Trainer') -> None: /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py in start_training(self, trainer) 142 def start_training(self, trainer: 'pl.Trainer') -> None: 143 # double dispatch to initiate the training loop --> 144 self._results = trainer.run_stage() 145 146 def start_evaluating(self, trainer: 'pl.Trainer') -> None: /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_stage(self) 805 if self.predicting: 806 return self.run_predict() --> 807 return self.run_train() 808 809 def _pre_training_routine(self): /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_train(self) 840 self.progress_bar_callback.disable() 841 --> 842 self.run_sanity_check(self.lightning_module) 843 844 self.checkpoint_connector.has_trained = False /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_sanity_check(self, ref_model) 1105 1106 # run eval step -> 1107 self.run_evaluation() 1108 1109 self.on_sanity_check_end() /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_evaluation(self, on_epoch) 960 # lightning module methods 961 with self.profiler.profile("evaluation_step_and_end"): --> 962 output = self.evaluation_loop.evaluation_step(batch, batch_idx, 
dataloader_idx) 963 output = self.evaluation_loop.evaluation_step_end(output) 964 /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py in evaluation_step(self, batch, batch_idx, dataloader_idx) 172 model_ref._current_fx_name = "validation_step" 173 with self.trainer.profiler.profile("validation_step"): --> 174 output = self.trainer.accelerator.validation_step(args) 175 176 # capture any logged information /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py in validation_step(self, args) 224 225 with self.precision_plugin.val_step_context(), self.training_type_plugin.val_step_context(): --> 226 return self.training_type_plugin.validation_step(*args) 227 228 def test_step(self, args: List[Union[Any, int]]) -> Optional[STEP_OUTPUT]: /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py in validation_step(self, *args, **kwargs) 159 160 def validation_step(self, *args, **kwargs): --> 161 return self.lightning_module.validation_step(*args, **kwargs) 162 163 def test_step(self, *args, **kwargs): <ipython-input-153-a8d51af53bb5> in validation_step(self, batch, batch_idx) 78 if self.output_shape > 1: 79 y_hat = torch.log(y_hat) ---> 80 loss = self.loss(y_hat, y) 81 self.log("val_loss", loss, on_epoch=True, on_step=False) 82 return {"val_loss": loss} /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 611 def forward(self, input: Tensor, target: Tensor) -> Tensor: 612 assert self.weight is None or isinstance(self.weight, Tensor) --> 613 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction) 614 615 /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction) 2760 weight = weight.expand(new_size) 2761 -> 2762 return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum) 2763 2764 RuntimeError: all elements of input should be between 0 and 1 Error 2 --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-139-b7e8b13763ef> in <module> 1 # Model evaluation ----> 2 y_pred = mlp(embeddings_val).squeeze().detach().numpy() 3 model_evaluation_accuracy(np.array(y_val), y_pred) /opt/conda/envs/bio-transformers/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), <ipython-input-136-e2fc535640ab> in forward(self, x) 55 self.fc= nn.Linear(self.hidden_size, self.output_shape) 56 def forward(self, x): ---> 57 h0=torch.zeros(self.num_layers, x_size(0), self.hidden_size).to(device) 58 c0=torch.zeros(self.num_layers, x_size(0), self.hidden_size).to(device) 59 out, _=self.lstm(x,(h0, c0)) NameError: name 'x_size' is not defined  |
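The traceback ends in binary_cross_entropy, which requires inputs in [0, 1]; both training_step and validation_step call self.fc(x) directly, so the Sigmoid activation is never applied, and forward additionally references an undefined x_size. A hedged sketch of how those methods could look for the binary case, assuming 3-D batch_first inputs and that self.fc is sized for the bidirectional output (2 * n_neurons); this is not a drop-in replacement for the whole class:

def forward(self, x):
    # A bidirectional LSTM needs num_layers * 2 initial states,
    # and its output feature size is 2 * hidden_size.
    h0 = torch.zeros(self.num_layers * 2, x.size(0), self.n_neurons, device=x.device)
    c0 = torch.zeros(self.num_layers * 2, x.size(0), self.n_neurons, device=x.device)
    out, _ = self.lstm(x, (h0, c0))          # x: (batch, seq_len, input_shape)
    out = self.fc(out[:, -1, :])             # assumes self.fc = nn.Linear(2 * n_neurons, output_shape)
    return self.activation(out)              # Sigmoid keeps the output in [0, 1] for BCELoss

def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x).squeeze()                # go through forward, not self.fc, so the activation runs
    loss = self.loss(y_hat, y.squeeze())
    self.log("train_loss", loss, on_epoch=True, on_step=False)
    return {"loss": loss}

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x).squeeze()
    loss = self.loss(y_hat, y.squeeze())
    self.log("val_loss", loss, on_epoch=True, on_step=False)
    return {"val_loss": loss}

If the protein embeddings are 2-D (batch, features), they would also need a sequence dimension before reaching the LSTM, for example x = x.unsqueeze(1).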
Batch xcopy YYYY/MM/DD HH/MM/SS Posted: 12 Jul 2021 09:45 AM PDT Does anybody know how to edit a batch file for me? I saw this somewhere; it creates a folder named like 12_07_2021_132247_19, but I would like something more reader-friendly like 2012-06-19 10:23:47. Here is the batch: :while set _my_datetime=%date%_%time% set _my_datetime=%_my_datetime: =_% set _my_datetime=%_my_datetime::=% set _my_datetime=%_my_datetime:/=_% set _my_datetime=%_my_datetime:.=_% xcopy "FOLDER NAME" ".\Backup\%_my_datetime%\" /E/H/C/I TIMEOUT 60 goto :while  |
Trying to add items to a TRadioGroup but no Item Editor or Items property is available Posted: 12 Jul 2021 09:45 AM PDT I'm trying to create a 4 radio button group on a form using the TRadioGroup tool. I can add the radio group but I can't add items to it. The documentation gives two ways of doing this: the first is to right-click the group box and select Item Editor, and the second is to edit the Items property in the Object Inspector. However, I have no Item Editor option available on right-click and no Items property in the Object Inspector. Is there something in the setup that I've missed here, or will I have to revert to a GroupBox and add individual RadioButtons?  |
How can I restructure this data frame? Posted: 12 Jul 2021 09:46 AM PDT I want to change:
Name | Year | Number of Visits |
Bob  | 2019 | 2 |
Bob  | 2020 | 3 |
Sam  | 2019 | 4 |
Sam  | 2020 | 1 |
To:
Name | 2019 | 2020 |
Bob  | 2    | 3 |
Sam  | 4    | 1 |
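Assuming this is a pandas DataFrame (the question does not name the library), a pivot is one way to sketch the reshape:

import pandas as pd

df = pd.DataFrame({
    "Name": ["Bob", "Bob", "Sam", "Sam"],
    "Year": [2019, 2020, 2019, 2020],
    "Number of Visits": [2, 3, 4, 1],
})

wide = (
    df.pivot(index="Name", columns="Year", values="Number of Visits")
      .reset_index()
      .rename_axis(columns=None)   # drop the leftover "Year" columns label
)
print(wide)
#   Name  2019  2020
# 0  Bob     2     3
# 1  Sam     4     1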
C# CS0029 Cannot implicitly convert type 'char[]' to 'string' Posted: 12 Jul 2021 09:47 AM PDT I am trying to make a code generator but I don't know how to fix this error: CS0029 Cannot implicitly convert type 'char[]' to 'string' var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; var stringChars = new char[32]; var random = new Random(); for (int i = 0; i < stringChars.Length; i++) { stringChars[i] = chars[random.Next(chars.Length)]; } var finalString = new String(stringChars); string text = stringChars; Console.WriteLine("Generated Code: "); Console.WriteLine("ProjectName-ProjectName-" + text); It was supposed to generate a code like this: "ProjectName-ProjectName-32DigitCode" I would like to know how to make it work if anybody could help me with that.  |
split string in JS by multiple letters Posted: 12 Jul 2021 09:47 AM PDT I have a string of letters and I am trying to cut them into an array using split(), but I want it to split for multiple letters. I got it to work for an individual letter: str = "YFFGRDFDRFFRGGGKBBAKBKBBK"; str.split(/(?<=\R)/); //-> Works for only 'R' Result->[ 'YFFGR', 'DFDR', 'FFR', 'GGGKBBAKBKBBK' ] What I want: str = "YFFGRDFDRFFRGGGKBBAKBKBBK"; letters =['R', 'K', 'A']; // split by any of the three letters (R,K,A) Result->[ 'YFFGR', 'DFDR', 'FFR', 'GGGK', 'BBA', 'K', 'BK', 'BBK' ];  |
Why is VecDeque slower than a Vec? Posted: 12 Jul 2021 09:46 AM PDT I'm beginning to optimize performance of a crate, and I swapped out a Vec for a VecDeque. The container maintains elements in sorted order (it's supposed to be fairly small, so I didn't yet bother trying a heap) and is occasionally split down the middle into two separate containers (another reason I haven't yet tried a heap) with drain. I'd expect this second operation to be much faster with a VecDeque: I can copy the first half of the collection out, then simply rotate and decrease the length of the original (now second) collection. However, when I run my #[bench] tests, performing the above operations a variable number of times, I observed a performance decrease with the VecDeque (times below in millions of ns/iter):
         | test a | test b | test c | test d |
Vec      | 12.6   | 5.9    | 5.9    | 3.8    |
VecDeque | 13.6   | 8.9    | 7.3    | 5.8    |
I've repeated and verified these results a number of times - why is it that performance decreases with this more flexible data structure?  |
Solution for bank robbery problem using python and machine learning [closed] Posted: 12 Jul 2021 09:46 AM PDT A bank robber entered the locker room of the bank with a master key which can open all the lockers in the room. All the lockers are arranged in a circular fashion, meaning the first locker is next to the last one. Each locker has a different amount of money. He dismantled the individual security alarm of each locker. Meanwhile, adjacent lockers have an additional security system connected, which will alert the security guard if two adjacent lockers are opened. You are given an array where each element represents the amount of money in each locker. Find the maximum amount he can take without alerting the guard.  |
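This reads as the circular variant of the classic "house robber" dynamic programming problem: because the first and last lockers are adjacent, solve the linear version twice (once without the first locker, once without the last) and take the better result. A sketch, with the input array name lockers assumed:

def max_loot(lockers):
    """Maximum sum of non-adjacent lockers arranged in a circle."""
    def rob_line(values):
        take, skip = 0, 0
        for amount in values:
            # take = best total if this locker is opened; skip = best if it is not.
            take, skip = skip + amount, max(skip, take)
        return max(take, skip)

    if not lockers:
        return 0
    if len(lockers) == 1:
        return lockers[0]
    # Either the first locker is never opened, or the last one is never opened.
    return max(rob_line(lockers[1:]), rob_line(lockers[:-1]))

print(max_loot([2, 3, 2]))     # 3
print(max_loot([1, 2, 3, 1]))  # 4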
Circe Decoding "AnyContentAsJson" type in Scala Play Posted: 12 Jul 2021 09:45 AM PDT I am trying to parse the body of an incoming request to JSON using Circe and Scala Play. The incoming request is coming from Postman as I'm running the service locally for testing purposes. The decode call I am using is: decode[LoginRequest](request.body.toString) Whenever I run this, though, Circe throws an error because it receives "unexpected JSON", which is the JSON I'm expecting but wrapped in this AnyContentAsJson object. Does anyone know how I can fix this? Thank you!  |
Outlook Addin - is it possible to pin the panel by default? Posted: 12 Jul 2021 09:46 AM PDT I'm working on a React/Angular/Node project with the officeJS library, because we are programming add-ins, and I have to make an Outlook add-in whose panel is pinned by default; that is, once the add-in is opened it has to be pinnable in Outlook. I haven't found any information about this, so I hope for your tips. Thanks in advance.  |
Can't change ComboBox selection when bound to ObservableCollection (WPF) Posted: 12 Jul 2021 09:46 AM PDT I'm trying to create an edit form for editing properties of a custom set of TV Series objects. One of the properties holds a collection of all owned media formats (DVD, Blu-ray, etc) for that particular series that will be displayed in a ComboBox . Items are added to the ComboBox via a separate popup window and items are to be removed from the ComboBox by selecting the item and clicking a remove Button . I can add new entries to the MediaOwned ComboBox just fine, but when I try to select a specific ComboBox item to test the remove Button I find that I can only ever select the first entry. Can someone please tell me if I've missed something embarrassingly obvious, thanks. Here is the problematic property: private ObservableCollection<string> _mediaOwned = new ObservableCollection<string>(); public ObservableCollection<string> MediaOwned { get { return _mediaOwned; } set { _mediaOwned = value; OnPropertyChanged(new PropertyChangedEventArgs("MediaOwned")); } } Here are the other relevant code behind: private void Window_Loaded(object sender, RoutedEventArgs e) { // Create binding for the ListBox. Binding listBinding = new Binding(); listBinding.Source = show.Series; listBinding.Mode = BindingMode.OneWay; listBinding.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged; lbSeries.SetBinding(ListBox.ItemsSourceProperty, listBinding); // Create binding for the ComboBox. Binding myBinding = new Binding(); myBinding.Path = new PropertyPath("MediaOwned"); myBinding.Mode = BindingMode.TwoWay; myBinding.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged; cbMediaOwned.SetBinding(ComboBox.ItemsSourceProperty, myBinding); } private void btnRemoveMedia_Click(object sender, RoutedEventArgs e) { Series series = (Series)lbSeries.SelectedItem; series.MediaOwned.Remove(cbMediaOwned.Text); } And here is the XAML code: <Border Style="{StaticResource PanelBorderStyle}" DockPanel.Dock="Left" Margin="0,8,8,0" DataContext="{Binding ElementName=lbLists, Path=SelectedItem}"> <DockPanel VerticalAlignment="Top"> <StackPanel> <ListBox x:Name="lbSeries" Style="{StaticResource BasicListStyle}" Width="180" Height="300" DisplayMemberPath="Title" SelectionMode="Single" LayoutUpdated="lbSeries_LayoutUpdated"> </ListBox> </StackPanel> <StackPanel x:Name="editPanel" DataContext="{Binding ElementName=lbSeries, Path=SelectedItem}"> <StackPanel Orientation="Horizontal" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="0, 4, 0, 0"> <TextBlock Style="{StaticResource SmallFont}" Width="100">Title</TextBlock> <TextBox x:Name="txtTitle" Style="{StaticResource TextBoxStyle}" Text="{Binding Path=Title, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" Width="200" Margin="8, 8, 16, 8"></TextBox> </StackPanel> <StackPanel Orientation="Horizontal" HorizontalAlignment="Left" VerticalAlignment="Top"> <TextBlock Style="{StaticResource SmallFont}" Width="100">Media owned</TextBlock> <ComboBox x:Name="cbMediaOwned" Style="{StaticResource ComboBoxStyle}" Width="150" Margin="8,8,6,8" ></ComboBox> <Button x:Name="btnAddMedia" Style="{StaticResource ToolbarButtonStyle}" Click="btnAddMedia_Click" Margin="0"> <StackPanel ToolTip="Add media"> <Image Source="Images/add.png" /> </StackPanel> </Button> <Button x:Name="btnRemoveMedia" Style="{StaticResource ToolbarButtonStyle}" Click="btnRemoveMedia_Click" Margin="4"> <StackPanel ToolTip="Remove media"> <Image Source="Images/remove.png" /> </StackPanel> </Button> </StackPanel> 
</StackPanel> </DockPanel> </Border>  |
Convert String to Date to time ago String [duplicate] Posted: 12 Jul 2021 09:46 AM PDT I'm receiving a String from my API which is the object creation time. The format is specific so I can't determine how it can be converted. 2021-07-07T15:03:21.409Z I tried to convert it to a Date like this : extension String { func toDate(withFormat format: String = "yyyy-MM-dd HH:mm:ss") -> Date? { let dateFormatter = DateFormatter() dateFormatter.timeZone = TimeZone(identifier: "Europe/Paris") dateFormatter.locale = Locale(identifier: "fr-FR") dateFormatter.calendar = Calendar(identifier: .gregorian) dateFormatter.dateFormat = format let date = dateFormatter.date(from: self) return date } } But it's returning nil . I guess the withFormat part isn't good in my toDate() function but I'm unsure about it. Then I would like to convert it again to a time ago String like this (found on SO): extension Date { func timeAgoDisplay() -> String { let calendar = Calendar.current let minuteAgo = calendar.date(byAdding: .minute, value: -1, to: Date())! let hourAgo = calendar.date(byAdding: .hour, value: -1, to: Date())! let dayAgo = calendar.date(byAdding: .day, value: -1, to: Date())! let weekAgo = calendar.date(byAdding: .day, value: -7, to: Date())! if minuteAgo < self { let diff = Calendar.current.dateComponents([.second], from: self, to: Date()).second ?? 0 return "\(diff) sec ago" } else if hourAgo < self { let diff = Calendar.current.dateComponents([.minute], from: self, to: Date()).minute ?? 0 return "\(diff) min ago" } else if dayAgo < self { let diff = Calendar.current.dateComponents([.hour], from: self, to: Date()).hour ?? 0 return "\(diff) hrs ago" } else if weekAgo < self { let diff = Calendar.current.dateComponents([.day], from: self, to: Date()).day ?? 0 return "\(diff) days ago" } let diff = Calendar.current.dateComponents([.weekOfYear], from: self, to: Date()).weekOfYear ?? 0 return "\(diff) weeks ago" } }  |
TypeError: Cannot read property 'id' of undefined angular Posted: 12 Jul 2021 09:46 AM PDT I am trying to delete by key of a field from indexedDb which I am adding but unable to do so yet. I have to pass hard code key value but I want to pass key dynamically to the delete function. I am using Angular PWA app and Dexie.js library to do operation as going through a blog to add in online/offline sync. Although I am able to sync and add/delete all fields at a time . But how can I delete single field by passing dynamic key button click ?? Error -ERROR TypeError: Cannot read property 'id' of undefined on function deleteIndexDb() ALL OPERATIONS MUST BE WHEN OFFLINE service.ts export class TodoService { private todos: Todo[] = []; private db: any; constructor(private readonly onlineOfflineService: OnlineOfflineService) { this.registerToEvents(onlineOfflineService); this.createDatabase(); } addTodo(todo: Todo) { todo.id = UUID.UUID(); this.todos.push(todo); if (!this.onlineOfflineService.isOnline) { this.addToIndexedDb(todo); } } getAllTodos() { return this.todos; } private registerToEvents(onlineOfflineService: OnlineOfflineService) { onlineOfflineService.connectionChanged.subscribe(online => { if (online) { console.log('went online'); console.log('sending all stored items'); this.sendItemsFromIndexedDb(); } else { this.deleteIndexDb(); console.log('went offline, storing in indexdb'); } }); } private createDatabase() { this.db = new Dexie('TestDatabase'); this.db.version(1).stores({ todos: 'id,value' }); } private addToIndexedDb(todo: Todo) { this.db.todos .add(todo) .then(async () => { const allItems: Todo[] = await this.db.todos.toArray(); console.log('saved in DB, DB is now', allItems); }) .catch(e => { alert('Error: ' + (e.stack || e)); }); } deleteIndexDb() { const allItems: Todo[] = this.db.todos.toArray(); console.log(allItems[allItems.length-1].id); allItems.forEach((item: Todo) => { return this.db.todos.where('id').equals(item.id).delete() .then(function (deleteCount) { console.log("Deleted " + deleteCount + " rows"); console.log(`single item ${item.id} deleted locally`); }).catch(function (error) { console.error("Error: " + error); }); }); } private async sendItemsFromIndexedDb() { const allItems: Todo[] = await this.db.todos.toArray(); allItems.forEach((item: Todo) => { this.db.todos.delete(item.id).then(() => { console.log(`item ${item.id} sent and deleted locally`); }); }); } } Component:- todos: Todo[] = []; constructor(private readonly todoService: TodoService, public readonly onlineOfflineService: OnlineOfflineService){ this.form = new FormGroup({ value: new FormControl('', Validators.required) }); } deleteTodo() { <===== button click this.todoService.deleteIndexDb(); } HTML:- <div> <ul style="list-style-type: none;"> <li *ngFor="let item of todos" class="todo-item"> <span [ngClass]="{ inactive: item.done }">{{ item.value }}</span> <button class="todo-item-button" (click)="editUser()">Edit</button> <button class="todo-delete-button" (click)="deleteTodo()">Delete</button> </li> </ul> </div>  |
How to test instance variables in an ActionMailer object? Posted: 12 Jul 2021 09:46 AM PDT There is a before_action callback method in my ActionMailer object which is responsible for setting some instance variables. class TestMailer < ApplicationMailer before_action :set_params def send_test_mail mail(to: @email, subject: subject) end def set_params @account = account.email @date = some_action(account.updated_at) end end The question is: how can one test these variables in an RSpec test? Something like: describe TestMailer do describe '#set_params' do described_class.with(account: account, subject: subject).send_test_mail.deliver_now expect(@date).to eq(Date.today) end end Any clue would be highly appreciated.  |
How to return parsed result from fetch pipe feeding sax parser (node.js) Posted: 12 Jul 2021 09:46 AM PDT I have node.js code that fetches an XML feed, pipes it into a sax parser and extracts the data I need into a JS Object. The code, at its simplest level looks like this. const fetch = require('node-fetch') const sax = require('sax') function fetchAndParse(url) { const saxStream = require("sax").createStream(strict, {normalize: true, trim: true}) const desiredData = [] saxStream.on("error", function (err) { ... }).on("opentag", function (node) { ... }).on("cdata", function(t) { ... }).on("text", function(t) { ... }).on("closetag", function (nodeName) { ... }).on("end", function() { console.log("END", desiredData) }) fetch(url) .then( res => { res.body.pipe(saxStream) } ) } Ideally, I'd like to turn this into an async function and just use await when calling it, but at the moment I'm not seeing how to get the data out of here when it completes because the only place I've been able to access the finished data is in the on("end") function. I think I'm missing something really basic here.  |
Selenium print PDF in A4 format Posted: 12 Jul 2021 09:46 AM PDT I have the following code for printing to PDF (and it works), and I am using only Google Chrome for printing. def send_devtools(driver, command, params=None): # pylint: disable=protected-access if params is None: params = {} resource = "/session/%s/chromium/send_command_and_get_result" % driver.session_id url = driver.command_executor._url + resource body = json.dumps({"cmd": command, "params": params}) resp = driver.command_executor._request("POST", url, body) return resp.get("value") def export_pdf(driver): command = "Page.printToPDF" params = {"format": "A4"} result = send_devtools(driver, command, params) data = result.get("data") return data As we can see, I am using Page.printToPDF to print to base64 and passing "A4" as the format in the params parameter. Unfortunately this parameter seems to be ignored. I saw some Puppeteer code using it (format A4) and I thought that could help me. Even with hardcoded width and height (see below) I have no luck. "paperWidth": 8.27, # inches "paperHeight": 11.69, # inches Using the code above, is it possible to set the page to A4 format?  |
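For what it's worth, the DevTools Page.printToPDF command itself does not take a format parameter (that is a Puppeteer-level convenience); paper size is passed as paperWidth / paperHeight in inches. A hedged sketch of the A4 params, reusing the question's send_devtools helper:

import base64

def export_pdf_a4(driver):
    # Page.printToPDF expects paper dimensions in inches; 8.27 x 11.69 is A4.
    params = {
        "paperWidth": 8.27,
        "paperHeight": 11.69,
        "printBackground": True,   # optional: keep CSS backgrounds in the PDF
    }
    result = send_devtools(driver, "Page.printToPDF", params)
    return base64.b64decode(result.get("data"))

# pdf_bytes = export_pdf_a4(driver)
# with open("page.pdf", "wb") as fh:
#     fh.write(pdf_bytes)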
What is the Read Tag command for NFC Tag ISO 14443-3A type Posted: 12 Jul 2021 09:46 AM PDT |
Vuetify's autofocus works only on first modal open Posted: 12 Jul 2021 09:47 AM PDT I am trying to use Vuetify's v-text-field autofocus however it works only first time. After I close the dialog, it doesn't work anymore. This is what I am trying to do: <v-text-field ref="focus" autofocus></v-text-field> While googling I found out that it was a bug that was fixed in some version but they had temporary solution which I also tried: watch: { dialog: (val) -> if !val debugger requestAnimationFrame( => @$refs.focus.focus() ) } Am I doing something wrong or it is still a bug? Setting breakpoint I saw that it stops at that point. Can anybody lead me to the right direction? The only difference I have is that I am using Vuex and the dialog variable is in Vuex store. And the dialog is getter/setter. dialog: get: -> return this.$store.state.my_store.isDialogOpen set: (value) -> this.$store.commit('my_store/MY_MUTATION', value)  |
Hibernate: How to create a GenericDAO for CRUD methods Posted: 12 Jul 2021 09:47 AM PDT Im trying to create a GenericDAO for the basic CRUD methods so that I can reuse the code, but really have no clue how to start I already have a DAO for every class, and they work perfectly I read lots of tutorial, and downloaded projects, but i cant addapt (or understand) it to my program here is my Class: public class Cliente { private String nombre, direccion, telefono, cuit; private int codigo, codigoPostal; private double saldo, deuda; public Cliente(String nombre, String direccion, int codigoPostal, String telefono, String cuit) { this.nombre = nombre; this.direccion = direccion; this.codigoPostal = codigoPostal; this.telefono = telefono; this.cuit = cuit; this.saldo = 0; this.deuda = 0; } public Cliente(){ } //all the getters and setters this is my GenericDAO that is not working public class GenericDAO { @Resource(name = "sessionFactory") private SessionFactory sessionFactory; public <T> T save(final T o){ return (T) sessionFactory.getCurrentSession().save(o); } public void delete(final Object object){ sessionFactory.getCurrentSession().delete(object); } /***/ public <T> T get(final Class<T> type, final long id){ return (T) sessionFactory.getCurrentSession().get(type, id); } /***/ public <T> T merge(final T o) { return (T) sessionFactory.getCurrentSession().merge(o); } /***/ public <T> void saveOrUpdate(final T o){ sessionFactory.getCurrentSession().saveOrUpdate(o); } public <T> List<T> getAll(final Class<T> type) { final Session session = sessionFactory.getCurrentSession(); final Criteria crit = session.createCriteria(type); return crit.list(); } } my Class.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class name="principal.Cliente" table="Cliente"> <id name="codigo" column="codigo"> <generator class="identity" /> </id> <property name="nombre" type="string" column="nombre"/> <property name="direccion" type="string" column="direccion"/> <property name="telefono" type="string" column="telefono"/> <property name="cuit" type="string" column="cuit"/> <property name="codigoPostal" type="int" column="cp"/> <property name="saldo" type="double" column="saldo"/> <property name="deuda" type="double" column="deuda"/> </class> </hibernate-mapping> this is my HibernateUtil public class HibernateUtil { private static SessionFactory sessionFactory = buildSessionFactory(); private static SessionFactory buildSessionFactory() { try { if (sessionFactory == null) { Configuration configuration = new Configuration().configure(HibernateUtil.class.getResource("/hibernate.cfg.xml")); StandardServiceRegistryBuilder serviceRegistryBuilder = new StandardServiceRegistryBuilder(); serviceRegistryBuilder.applySettings(configuration.getProperties()); ServiceRegistry serviceRegistry = serviceRegistryBuilder.build(); sessionFactory = configuration.buildSessionFactory(serviceRegistry); } return sessionFactory; } catch (Throwable ex) { System.err.println("Initial SessionFactory creation failed: " + ex); throw new ExceptionInInitializerError(ex); } } public static SessionFactory getSessionFactory() { return sessionFactory; } public static void shutdown() { getSessionFactory().close(); } and my hibernate.cfg.xml <?xml version='1.0' encoding='utf-8'?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" 
"http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <!-- Database connection settings --> <property name="connection.driver_class">com.mysql.jdbc.Driver</property> <property name="connection.url">jdbc:mysql://localhost:3306/basededatosprueba</property> <property name="connection.username">root</property> <property name="connection.password"></property> <!-- JDBC connection pool (use the built-in) --> <property name="connection.pool_size">1</property> <!-- SQL dialect --> <property name="dialect">org.hibernate.dialect.MySQL5Dialect</property> <!-- Enable Hibernate's automatic session context management --> <property name="current_session_context_class">thread</property> <!-- Disable the second-level cache --> <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property> <!-- Echo all executed SQL to stdout --> <property name="show_sql">true</property> <!-- Drop and re-create the database schema on startup --> <property name="hbm2ddl.auto">create</property> <mapping resource="mapeos/Cliente.hbm.xml"/> </session-factory> </hibernate-configuration> I'd appreciate if someone could explain how to make it work. Thanks  |
Cannot initialize SFTP protocol. Is the host running a SFTP server? WinSCP error Posted: 12 Jul 2021 09:46 AM PDT When I try to SSH into my cluster, there are two stages. So I have to enter the password twice to go to my home directory using SSH in a Linux terminal or PuTTY. But when I try to use WinSCP, I get these errors: Trying SFTP: Cannot initialize SFTP protocol. Is the host running a SFTP server? Trying SCP: Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended). How can I bypass this problem?  |
Access to the path 'C:\Users\xxx\Desktop' is denied Posted: 12 Jul 2021 09:47 AM PDT I have thoroughly searched the entire access denied questions and did't find any question related to access to windows form on my own system all the questions are related to web app. public partial class Form2 : Form { public Form2() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { byte[] imgdata; FileStream fsrw; string fname; openFileDialog1.Filter = "Sai Files(*.JPG;*.GIF)|*.jpg;*.gif|All files (*.*)|*.*"; openFileDialog1.ShowDialog();//opens the dialog box fname = openFileDialog1.FileName;//stores the file name in fname pictureBox1.ImageLocation = fname;//gives the image location to picturebox fsrw = new FileStream("C:\\Users\\Sainath\\Desktop", FileMode.Open, FileAccess.ReadWrite); imgdata = new byte[fsrw.Length]; fsrw.Read(imgdata, 0, Convert.ToInt32(fsrw.Length)); fsrw.Close(); string s = "insert into imagetest values(@p1,@p2)"; SqlConnection con = new SqlConnection("server=.;Data Source=.;Initial Catalog=Work;Integrated Security=True"); SqlCommand cmd = new SqlCommand(s, con); cmd.Parameters.AddWithValue("@p1", imgdata); cmd.Parameters.AddWithValue("@p2", fname); con.Open(); int i = cmd.ExecuteNonQuery(); con.Close(); Console.WriteLine(i); } }  |
What do you call a URL path without a host name? Posted: 12 Jul 2021 09:46 AM PDT Out of curiosity and the need to name a configuration setting properly: What do you call a URL that is an absolute path reference but without a domain? What would you call /path/to/myfile ? Is there a convention? Am I just daftly overlooking the obvious? "absolute path" would work in a file system context, but in a URL context I fear confusion with the full URL.  |