All Posts



A Google search for "destructive configuration resync" finds this in Docs: https://docs.splunk.com/Documentation/Splunk/9.2.1/DistSearch/HowconfrepoworksinSHC#Why_a_recovering_member_might_need_to_resync_manually
Hi @Sachin, I’m a Community Moderator in the Splunk Community. This question was posted 9 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
The where command does not handle wildcards; use the search command instead. The values function produces multi-value fields, which require special handling. Try this query.

index=mulesoft environment=* applicationName IN ("processor","api") message!="No files found for*"
| stats values(content.InterfaceName) as InterfaceName values(content.Error) as error values(message) as message values(priority) as priority min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time BY applicationName, correlationId
| where isnotnull(mvfind(InterfaceName, "Test"))
| table Status InterfaceName applicationName Timestamp "Total Elapsed Time" FileList "SuccessFile/FailureFile" Response correlationId
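The core distinction above — `where` compares strings literally while `search` treats `*` as a wildcard — can be sketched in Python (a rough analogy, not Splunk's actual implementation; `fnmatch` approximates the `search` command's glob behavior):

```python
from fnmatch import fnmatch

messages = [
    "No files found for interface X",
    "File processed successfully",
]

# Literal comparison, like `where message!="No files found for*"`:
# the "*" is just a character, so nothing matches the pattern and
# nothing is excluded.
literal_kept = [m for m in messages if m != "No files found for*"]

# Wildcard matching, like the `search` command:
wildcard_kept = [m for m in messages if not fnmatch(m, "No files found for*")]

print(literal_kept)   # both messages survive the literal comparison
print(wildcard_kept)  # only the non-matching message survives
```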
Please show exactly what you tried and tell how the results were not what was expected.
Either regex should work.  BTW, it's not necessary to escape the colons. Any change to props.conf only affects new data.  Config changes made in the UI take effect immediately; changes made to .conf files take effect after a restart.
Hi All, How do I exclude particular values of a field in this query? In my scenario, if the message contains "file not found" I don't want to show those transactions. Below is the query I tried:

index=mulesoft environment=* applicationName IN ("processor","api")
| where message!="No files found for*"
| stats values(content.InterfaceName) as InterfaceName values(content.Error) as error values(message) as message values(priority) as priority min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time BY applicationName,correlationId
| table Status InterfaceName applicationName Timestamp "Total Elapsed Time" FileList "SuccessFile/FailureFile" Response correlationId
| search InterfaceName IN ("Test")

I also tried: | search NOT message IN ("No files found for*")
Hello, Using the query below I am able to get the title and definition of macros: |rest /servicesNS/-/-/admin/macros | table title, definition Can the same be achieved with a Postman call to https://*****:8089/servicesNS/-/-/admin/macros?output_mode=json so that I get only title and definition in the API response? I tried using the f and search filters as per the documentation, but it's not giving the required response. Thanks in advance
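For what it's worth, the REST API's `f` query parameter is meant to restrict which content fields come back (it can be repeated once per field). A minimal sketch of building such a request URL — the hostname is a placeholder, and whether `title` needs `f` filtering (it is part of the entry envelope) is worth verifying against the Splunk REST API docs:

```python
from urllib.parse import urlencode

# Hypothetical host; adjust to your management port endpoint.
base = "https://splunk.example.com:8089/servicesNS/-/-/admin/macros"

# Repeat "f" once per content field you want returned.
params = [
    ("output_mode", "json"),
    ("f", "title"),
    ("f", "definition"),
    ("count", "0"),  # return all entries rather than the default page size
]
url = base + "?" + urlencode(params)
print(url)
```

The same query string should work pasted into Postman directly.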
Hi, I’m new to upgrading the platform and need help. We have a Splunk Cloud instance upgraded to 9.1, and we are now due to upgrade the Deployment Server and Heavy Forwarders, followed by the UFs. Could anyone please let me know the stepwise process to upgrade the deployment server from 9.0.5 to 9.1, and similarly for the Heavy Forwarders? Which RPM packages should I choose for both the DS and the HFs?
Recently we got a similar/same issue, and it was solved using the following approach from the AppDynamics helpdesk.

Problem statement: Machine Agent installation problem related to the temp directory. This issue is typically caused by a noexec flag set on the temporary directory.

To resolve the issue, follow these steps:
1. Create a directory called "tmp" in the MA home; the MA should have write permission on that directory.
2. Run the command below (notice that we add the "-Djava.io.tmpdir" property, which changes the tmp dir the MA uses):

nohup /opt/appdynamics/machine-agent/jre/bin/java -Djava.io.tmpdir=/opt/appdynamics/machine-agent/tmp -jar /opt/appdynamics/machine-agent/machineagent.jar &
This is the command if using Linux: /opt/splunk/bin/splunk resync shcluster-replicated-config (run this on the other SHC members, not the captain).

Check the status first; this runs on any one of the SHC members: /opt/splunk/bin/splunk show shcluster-status

It might be worth trying a rolling restart to see if that helps; it looks like the /var/run folder on the captain is having some kind of issue. Check disk space as well: du -sh /opt/splunk/var/run to get the size of the folder.

Rolling restart command (from one of the members): /opt/splunk/bin/splunk rolling-restart shcluster-members

Monitor the rolling restart: /opt/splunk/bin/splunk rolling-restart shcluster-members -status 1

The captain may change, so observe; you can transfer the captain back to the original: /opt/splunk/bin/splunk transfer shcluster-captain -mgmt_uri <your SHC Captain>

If this is production, then factor in a maintenance window or do it when it is least busy for users, as it will be somewhat disruptive for searches until it's resolved. It's worth going through the previous answers for this issue, the commands, and the various steps. Ensure you have backups and plan the steps of the procedure; at minimum back up /opt/splunk/etc, which contains the configuration.

https://community.splunk.com/t5/Deployment-Architecture/How-to-resolve-error-quot-Error-pulling-configurations-from-the/m-p/354231
https://community.splunk.com/t5/Deployment-Architecture/How-do-I-fix-quot-splunk-resync-shcluster-replicated-config-quot/m-p/212358
Do you mean something like this? | spath NewList{} output=NewList | table NewList | mvexpand NewList | spath input=NewList | fields - NewList
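To illustrate what that pipeline accomplishes — not how Splunk implements it — here is a small Python sketch of the same transformation: pull `NewList` out of the event (spath), give each element its own row (mvexpand), then split each element back into K1/K2/K3 fields (the second spath):

```python
import json

event = json.loads("""
{"Key1": "val",
 "Array12": ["val1", "val2"],
 "NewList": [
   {"K1": "v11", "K2": "v12", "K3": "v13"},
   {"K1": "v21", "K2": "v22", "K3": "v23"}
 ]}
""")

# One output row per NewList element, with the inner keys as columns.
rows = [(item["K1"], item["K2"], item["K3"]) for item in event["NewList"]]
for row in rows:
    print(",".join(row))
# v11,v12,v13
# v21,v22,v23
```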
We have a correlation search configured where we selected the 'Once' throttling option, but it is generating a notable for each result instead of generating only one notable.
I have a case where we have some associated metrics for each request/response event, something like below:

{ "Key1": "val", "Array12": ["val1", "val2"], "NewList": [ { "K1": "v11", "K2": "v12", "K3": "v13" }, { "K1": "v21", "K2": "v22", "K3": "v23" } ] }

Now this list, NewList, is too big, and having key-val pairs is making the log very bulky. Is there any way to make it concise and still be able to read it in a dashboard as below?

K1,K2,K3
V11,V12,V13
V21,V22,V23
Please can you repost your sample data in the correct format, as what you posted does not match the structure shown in your screen grab and is not valid JSON. Also, please paste it into a code block (</>) to preserve the formatting.
Hi @anissabnk, this seems to be JSON format, so you could use INDEXED_EXTRACTIONS=json or the spath command: https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath If you still want to use regexes, you will need several, like the following:

| rex "from\"\s*:\s*\"(?<from>[^\"]+)\"" which you can test at https://regex101.com/r/6NQsEb/1
| rex "to\"\s*:\s*\"(?<to>[^\"]+)\"" which you can test at https://regex101.com/r/6NQsEb/2
| rex "intensity\"\s*:\s*\(\"\w+\"\s*:\s*(?<intensity>\d+)" which you can test at https://regex101.com/r/6NQsEb/3

Ciao. Giuseppe
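If it helps to sanity-check the patterns outside Splunk, here is a rough Python equivalent run against a well-formed sample entry. Python's regex engine uses `(?P<name>...)` for named groups, and note that against valid JSON the opening brace after "intensity" is `{` rather than the `(` that appears in the garbled paste:

```python
import re

sample = ('{"from": "2024-04-25T11:30Z", "to": "2024-04-25T12:00Z", '
          '"intensity": {"forecast": 152, "actual": null, "index": "moderate"}}')

# Same idea as the rex calls above, with Python group syntax.
m_from = re.search(r'"from"\s*:\s*"(?P<frm>[^"]+)"', sample)
m_to = re.search(r'"to"\s*:\s*"(?P<to>[^"]+)"', sample)
m_int = re.search(r'"intensity"\s*:\s*\{"\w+"\s*:\s*(?P<intensity>\d+)', sample)

print(m_from.group("frm"))       # 2024-04-25T11:30Z
print(m_to.group("to"))          # 2024-04-25T12:00Z
print(m_int.group("intensity"))  # 152
```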
Hi kawakazu, are you able to share a screenshot of the Data Model Wrangler app information (sharing properties)? If the sharing is listed as "App", are you able to change it to "Global" and check whether the tags are still restricted to 30?
Which configuration file do you mean — the connection timeout in the web.conf file?
Hello, I need your help with a field extraction. I have this type of data, and I'd like to extract the following fields with a rex command. Each entry in the data has this structure:

{"data": [
  {"from": "2024-04-25T11:30Z", "to": "2024-04-25T12:00Z", "intensity": {"forecast": 152, "actual": null, "index": "moderate"}},
  {"from": "2024-04-25T12:00Z", "to": "2024-04-25T12:30Z", "intensity": {"forecast": 152, "actual": null, "index": "moderate"}},
  ...
]}

Thank you very much
Set up a "fake" To address and use that. However, you should consider that while Splunk may be able to (attempt to) send the email, the email system, either the sender's or the receiver's, may deem a message with no real To address and only a Bcc address as spam and junk it.
You could do something like this | eval sameday=if(relative_time(starttime,"@d")=relative_time(endtime,"@d"),"true","false")
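The logic of that eval — snap both timestamps to midnight with `relative_time(t, "@d")` and compare — looks like this in Python (an analogy for clarity, not what Splunk runs; comparing the calendar dates is equivalent to comparing the day-snapped epochs):

```python
from datetime import datetime

def same_day(start: datetime, end: datetime) -> bool:
    # Equivalent to relative_time(starttime,"@d") = relative_time(endtime,"@d"):
    # truncate both instants to their day and compare.
    return start.date() == end.date()

a = datetime(2024, 4, 25, 9, 30)
b = datetime(2024, 4, 25, 23, 59)
c = datetime(2024, 4, 26, 0, 1)
print(same_day(a, b))  # True  - both on 2024-04-25
print(same_day(a, c))  # False - c rolls into 2024-04-26
```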