All Posts


Hi, I’m new to upgrading the platform and need help. We have a Splunk Cloud instance that has been upgraded to 9.1, and we are now due to upgrade the Deployment Server and Heavy Forwarders, followed by the UFs. Could anyone please let me know the stepwise process to upgrade the Deployment Server from 9.0.5 to 9.1, and similarly for the Heavy Forwarders? Also, which RPM packages should be chosen for the DS and the HFs?
Recently we got a similar/same issue, and it was solved using the following approach from the AppDynamics helpdesk.
Problem statement: Machine Agent installation problem related to the temp directory. This issue is typically caused by a noexec flag set on the temporary directory.
To resolve the issue, please follow these steps:
1. Create a directory called "tmp" in the MA home; the MA should have write permission on that directory.
2. Run the command below (notice that we add the "-Djava.io.tmpdir" property, which changes the tmp dir that the MA uses):
nohup /opt/appdynamics/machine-agent/jre/bin/java -Djava.io.tmpdir=/opt/appdynamics/machine-agent/tmp -jar /opt/appdynamics/machine-agent/machineagent.jar &
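If it helps, a quick way to confirm that the default temp directory really is mounted with noexec before changing anything (a sketch; adjust the mount point if your temp directory lives elsewhere):
# Show the mount options for /tmp; "noexec" in the output confirms the restriction
findmnt -no OPTIONS /tmp
# Create the agent-local tmp directory described above (assumes the default MA home)
mkdir -p /opt/appdynamics/machine-agent/tmp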
This is the command if using Linux: /opt/splunk/bin/splunk resync shcluster-replicated-config (run this on the other SHC members, not the captain).
Check the status first; run this on one of the SHC members: /opt/splunk/bin/splunk show shcluster-status
It might be worth trying a rolling restart to see if that helps; it looks like the /var/run folder on the captain is having some kind of issue. Check disk space as well: du -sh /opt/splunk/var/run to get the size of the folder.
Rolling restart command (from one of the members): /opt/splunk/bin/splunk rolling-restart shcluster-members
Monitor the rolling restart: /opt/splunk/bin/splunk rolling-restart shcluster-members -status 1
The captain may change, so observe; you can transfer the captain back to the original: /opt/splunk/bin/splunk transfer shcluster-captain -mgmt_uri <your SHC Captain>
If this is production, factor in a maintenance window or do it when it is least busy for users, as it will be somewhat disruptive for searches until it's resolved. It's worth going through the previous answers for this issue for the commands and various steps. Ensure you have backups and plan the steps of the procedure; back up at minimum /opt/splunk/etc, which contains the configuration.
https://community.splunk.com/t5/Deployment-Architecture/How-to-resolve-error-quot-Error-pulling-configurations-from-the/m-p/354231
https://community.splunk.com/t5/Deployment-Architecture/How-do-I-fix-quot-splunk-resync-shcluster-replicated-config-quot/m-p/212358
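For the backup step, a minimal sketch (the archive path and naming are assumptions; adjust for your environment):
# Archive the configuration directory on each member before making changes
tar -czf /tmp/splunk-etc-backup-$(hostname)-$(date +%Y%m%d).tar.gz /opt/splunk/etc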
Do you mean something like this?
| spath NewList{} output=NewList
| table NewList
| mvexpand NewList
| spath input=NewList
| fields - NewList
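If the goal is a dashboard table with one column per key, a variant of the same idea (a sketch; assumes the K1/K2/K3 field names from the sample event):
| spath NewList{} output=NewList
| mvexpand NewList
| spath input=NewList
| table K1 K2 K3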
We have a correlation search configured where we have selected the 'Once' option, but it is generating a notable for each result instead of generating only one notable.
I have a case where we have some associated metrics for each request/response event, something like below:
{ "Key1" : "val", "Array12" : [ "val1", "val2" ], "NewList" : [ { "K1":"v11", "K2":"v12", "K3":"v13" }, { "K1":"v21", "K2":"v22", "K3":"v23" } ] }
Now this list, NewList, is too big, and having key-value pairs makes the log very bulky. Is there any way to make it concise and then be able to read it in a dashboard like below?
K1, K2, K3
V11, V12, V13
V21, V22, V23
Please can you repost your sample data in the correct format, as what you posted does not match the structure shown in your screen grab and is not valid JSON. Also, please paste it into a code block </> to preserve the formatting.
Hi @anissabnk, this seems to be JSON format, so you could use INDEXED_EXTRACTIONS=json or the spath command: https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath
If you still want to use a regex, you should use several regexes like the following:
| rex "from\"\s*:\s*\"(?<from>[^\"]+)\"" that you can test at https://regex101.com/r/6NQsEb/1
| rex "to\"\s*:\s*\"(?<to>[^\"]+)\"" that you can test at https://regex101.com/r/6NQsEb/2
| rex "intensity\"\s*:\s*\(\"\w+\"\s*:\s*(?<intensity>\d+)" that you can test at https://regex101.com/r/6NQsEb/3
Ciao. Giuseppe
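For the INDEXED_EXTRACTIONS route, a minimal props.conf sketch (the sourcetype name is a placeholder; apply it where the data is parsed):
# props.conf
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
# avoid extracting the same fields a second time at search time
KV_MODE = none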
Hi kawakazu, are you able to share a screenshot of the Data Model Wrangler app information (sharing properties)? If the sharing is listed as "App", are you able to change it to "Global" and check whether the tags are still restricted to 30?
Which configuration file do you mean, the connection timeout in web.conf?
Hello, I need your help with a field extraction. I have this type of data, and I'd like to extract the following fields (from, to, intensity) with a rex command. The syntax is as follows (excerpt; the same pattern repeats for each half-hour slot):
("data": ["from" : "2024-04-25T11: 30Z", "to": "2024-04-2512:00Z", "intensity": ("forecast": 152, "actual": null, "index": "moderate"}), ("from": "2024-04-25T12:002", "intensity": {"forecast": 152, "actual": null, "index": "moderate"}), ...
Thank you very much
Set up a "fake" to address and use that. However, you should consider that while Splunk may be able (to attempt) to send the email, the email system, either the senders or receivers, may deem a messa... See more...
Set up a "fake" to address and use that. However, you should consider that while Splunk may be able (to attempt) to send the email, the email system, either the senders or receivers, may deem a message with no real to address and only bcc address as spam and will junk the messages.
You could do something like this | eval sameday=if(relative_time(starttime,"@d")=relative_time(endtime,"@d"),"true","false")
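To also enforce the 24-hour limit from the question, a sketch building on the same eval (assumes starttime and endtime are epoch timestamps):
| eval diff_hours=(endtime-starttime)/3600
| eval sameday=if(relative_time(starttime,"@d")=relative_time(endtime,"@d") AND diff_hours<24,"true","false")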
Dear Splunk, I have a use case to send a notification/warning alert to users who meet certain criteria in a search. How can I send the alert only to the members (identified in the search) in the BCC list, given that the alert configuration has a mandatory To list (at least one member), which is not required in this use case? Simply put, I want to set up an alert with only BCC'd users and no one in the "To" list.
Hi All, how do I write a query in Splunk to take two of the same weekdays only if the difference between the start day and end day is not more than 24 hours? For example, the two days can both be Tuesday, but the query should check that the difference between the two Tuesdays is less than 24 hours, meaning the end-day hours and the start-day hours fall on the same Tuesday.
As a workaround I now use CSS to hide the "View on Mobile" button:
.view-mobile {
    display: none !important;
}
Messages shows the below: "Search head cluster member A is having problems pulling configurations from the search head cluster captain B. Changes from the other members are not replicating to this member, and changes on this member are not replicating to other members. Consider performing a destructive configuration resync on this search head cluster member." Any idea regarding the resync commands?
It's the other way around - you might need table if you didn't have stats. If you do stats, it produces a results table from your summarized events. And yes, join will probably be the way to go. As you seem to have different sets of fields, you simply extract them _before_ doing stats values(interesting_field1) as interesting_field1 values(interesting_field2) as interesting_field2 [...] by common_field
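Putting that together, a sketch with placeholder index and sourcetype names:
index=my_index (sourcetype=typeA OR sourcetype=typeB)
| stats values(interesting_field1) as interesting_field1 values(interesting_field2) as interesting_field2 by common_field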
Thanks a ton for your prompt response.
Hi @SampathkumarK, in addition to the other hints: at the bottom of the ServerClass form you have a preview of the clients in the serverclass, so you can immediately check whether the whitelist is working correctly. Then you can go into the Clients form and see the serverclasses enabled for those clients. Ciao. Giuseppe
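If you prefer to verify the same thing in configuration on the deployment server, a minimal serverclass.conf sketch (the serverclass name, host pattern, and app name are placeholders):
# serverclass.conf
[serverClass:my_linux_ufs]
whitelist.0 = webserver-*.example.com
[serverClass:my_linux_ufs:app:my_outputs_app]
restartSplunkd = true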