All Topics

I have a query that returns 2 values:

| stats max(gb) as GB by metric_name

metric_name        GB
storage_current    99
storage_limit      100

Now I want to be able to reference the current and limit values in a radial gauge. How can I convert that table into key-value pairs so I can say that the value of the radial is "storage_current"? Something like: | eval {metric_name}={GB}
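One common approach (a sketch, assuming the field names shown above): the {metric_name} eval creates a field named after each row's metric_name value, and a final stats collapses the rows into one result:

| stats max(gb) as GB by metric_name
| eval {metric_name}=GB
| fields - metric_name GB
| stats first(storage_current) as storage_current first(storage_limit) as storage_limit

The radial gauge can then point at storage_current directly, and storage_limit can feed the gauge's range.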
Hi Fellow Splunkers, A more general question: what are the best practices for upgrades, security patching, and deployment in a distributed production environment? We have clustered search heads and indexers. I can clarify if this is unclear. I appreciate any advice and shared experiences.
Hello Everyone, We are trying to restore DDSS data stored in an S3 bucket to our Splunk Enterprise instance. We followed the steps here: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/Admin/DataSelfStorage#Restore_indexed_data_from_an_AWS_S3_bucket but we are facing the error below. Any thoughts on what the root cause might be? We did upload the data to the instructed directory, but we keep hitting this error when rebuilding.
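For reference, the restore flow we understand from that page, sketched with illustrative paths and bucket names (not the actual ones from the failing system): copy the frozen bucket directory from S3 into the index's thaweddb, then rebuild it. A frequent cause of rebuild errors is directory nesting, since rawdata/journal.gz must sit directly inside the copied bucket directory.

# copy the frozen bucket from S3 into thaweddb (paths are examples)
aws s3 cp --recursive s3://<ddss-bucket>/<index>/db_1706000000_1705000000_123 \
    $SPLUNK_HOME/var/lib/splunk/<index>/thaweddb/db_1706000000_1705000000_123

# regenerate the bucket's index files from the raw data
$SPLUNK_HOME/bin/splunk rebuild \
    $SPLUNK_HOME/var/lib/splunk/<index>/thaweddb/db_1706000000_1705000000_123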
How can I calculate the CPU usage of the Splunk server as a percentage from the data in the _internal index? The data in the _internal index, where source=/opt/splunk/var/log/splunk/metrics.log, looks like this:

01-25-2024 15:47:42.528 +0000 INFO Metrics - group=pipeline, name=dev-null, processor=nullqueue, cpu_seconds=0.001, executes=4445, cumulative_hits=9717713
01-25-2024 15:47:42.527 +0000 INFO Metrics - group=workload_management, name=workload-statistics, workload_pool=standard_perf, mem_limit_in_bytes=71715885056, cpu_shares=358
01-25-2024 15:47:42.525 +0000 INFO Metrics - group=conf, action=acquire_mutex, count=20, wallclock_ms_total=0, wallclock_ms_max=0, cpu_total=0.000, cpu_max=0.000
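A different source than the metrics.log events quoted, but often simpler: on full Splunk Enterprise instances the _introspection index records host-wide CPU percentages directly. A sketch (the host filter is a placeholder):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=<your_splunk_server>
| eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=5m avg(cpu_pct) AS avg_cpu_pct

Deriving a percentage from metrics.log cpu_seconds is possible but requires knowing the sampling interval and core count, so the introspection data is usually the easier route if it is available.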
I have this vulnerability on all our instances, on the latest version of splunkforwarder:

The version of OpenSSL installed on the remote host is prior to 1.0.2zf. It is, therefore, affected by a vulnerability as referenced in the 1.0.2zf advisory, identified in CVE-2022-1292: the OpenSSL c_rehash command line tool. Fixed in OpenSSL 3.0.4 (affected 3.0.0-3.0.3). Fixed in OpenSSL 1.1.1p (affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (affected 1.0.2-1.0.2ze). (CVE-2022-2068)

Any recommendations here?
Hi all, We are moving to Splunk Cloud and want to keep the LDAP search capability there as well. Today we have the app installed on a search head with working commands. I know how to forward data to Splunk Cloud from a HF, but what about the ldap commands, like ldapgroup etc.? Do we need to install the app in Cloud as well to get the commands to work? //Jan
I have an enterprise network and we have a Splunk Enterprise license. Question: while troubleshooting a sourcetype or host, I need a dashboard that shows the past history of a particular user or source. By past history I mean how many alerts the same user triggered, with the alert details; clicking the link should show the past troubleshooting history.
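If "past history" here means alerts previously triggered for a user, one starting point (a sketch; it assumes the scheduler's user field matches the user you care about) is the scheduler's own logs:

index=_internal sourcetype=scheduler alert_actions=* savedsearch_name=*
| stats count AS times_triggered earliest(_time) AS first_seen latest(_time) AS last_seen BY user savedsearch_name
| convert ctime(first_seen) ctime(last_seen)

A dashboard drilldown could then pass the clicked user into a token that filters this panel.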
"I need help with this XML for a dashboard; essentially, I need to call a token that modifies data within a report, having already created the token with the name 'data.' How can I do this?"   <for... See more...
"I need help with this XML for a dashboard; essentially, I need to call a token that modifies data within a report, having already created the token with the name 'data.' How can I do this?"   <form version="1.1">   <label>Lista IP da bloccare</label>   <fieldset submitButton="true" autoRun="false">     <input type="time" token="data">       <label></label>       <default>         <earliest>rt-24h</earliest>         <latest>rt</latest>       </default>     </input>   </fieldset>   <row>     <panel>       <table>         <search ref="checkpoint1"></search>         <option name="drilldown">none</option>       </table>     </panel>   </row> </form>
Hello, I'm looking for your insights on pinpointing changes in fields over time. Events are structured with a timestamp, an ID, and various fields. I'm seeking advice on constructing a dynamic timeline that identifies altered values and the corresponding fields. Example events below:

10:20:30 25/Jan/2024 id=1 a=1534 b=253 c=384 ...
10:20:56 25/Jan/2024 id=1 a=1534 b=253 c=385 ...
10:20:56 25/Jan/2024 id=2 a="something" b=253 c=385 ...
10:21:35 25/Jan/2024 id=2 a="something" b=253 c=385 ...
10:22:56 25/Jan/2024 id=2 a="xyz" b="-" c=385 ...

Desired result format:

10:20:56 25/Jan/2024 id=1 changed field "c"
10:22:56 25/Jan/2024 id=2 changed field "a", changed field "b"

My pseudo SPL to find changed events:

... | streamstats reset_on_change=true dc(*) AS * by id | foreach * [ ??? ]

With hundreds of fields per event, I'm seeking an efficient method, considering a combination of streamstats, foreach, transaction, or stats. Insights appreciated.
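A sketch of one approach, using only the fields a, b, c from the example (with hundreds of fields you would list them, or be careful that foreach * does not also match the prev_* copies):

| sort 0 _time
| streamstats current=f window=1 last(a) AS prev_a last(b) AS prev_b last(c) AS prev_c BY id
| foreach a b c
    [ eval changed=if(isnotnull('prev_<<FIELD>>') AND '<<FIELD>>'!='prev_<<FIELD>>', mvappend(changed, "<<FIELD>>"), changed) ]
| where isnotnull(changed)
| table _time id changed

streamstats carries each field's previous value per id, and foreach appends the field name to a multivalue "changed" field wherever the value differs from the previous event.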
Hi All, I have created an alert that looks for instances without proper tags. The search in the alert returns the instance name and instance owner. At the scheduled time, an email notification is sent to all owners with the CSV file attached. I am using action.email.to=$result.email_address$ (a dynamic email address returned from the search). With this, the email notification is sent successfully to all users in $result.email_address$, but it is sent separately to each one. I want all of the users to be in the To field so that one email is sent. Please let me know how we can achieve this. Regards, PNV
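One common workaround (a sketch): add the full recipient list to every result row, then let a single trigger use it. With the alert's trigger condition set to "Once", $result.*$ tokens are taken from the first row:

... your base search ...
| eventstats values(email_address) AS all_recipients
| eval all_recipients=mvjoin(all_recipients, ",")

Then set action.email.to = $result.all_recipients$. Because eventstats only annotates the rows rather than collapsing them, the attached CSV still contains the full per-instance detail.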
Hello everyone, I need to onboard a huge volume of logs, 90% of which is unnecessary. My goal is to ingest only events containing certain keywords, like "Login Failed", "User Login", etc. I have seen other articles explaining how to filter events by exclusion using nullQueue, but that doesn't fit my case because I only know which events I want to ingest, identified by particular keywords. I am looking for a hint on how to proceed, if it's possible. Thank you all.
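The documented pattern for this is the inverse of the usual nullQueue filter: route everything to nullQueue first, then a second transform puts keyword matches back on the index queue (order matters; the later transform wins for matching events). A sketch, with the sourcetype name as a placeholder:

props.conf:
[my_sourcetype]
TRANSFORMS-filter = discard_all, keep_keywords

transforms.conf:
# first send every event to the null queue...
[discard_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then route events containing the keywords back to the index queue
[keep_keywords]
REGEX = Login Failed|User Login
DEST_KEY = queue
FORMAT = indexQueue

This runs at parse time, so it belongs on the indexers or a heavy forwarder, not on a universal forwarder.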
As mentioned in the subject, help me with the keyboard shortcut to format HTML and XML code in the dashboard source code editor. For example, I want the code below to be formatted as shown in the "To" section:

<dashboard version="1.1">
<label>Test Dashboard</label>
<row>
<panel>
<html>
<h1>
<b>Some bold text</b>
</h1>
</html>
</panel>
</row>
</dashboard>

To:

<dashboard version="1.1">
  <label>Test Dashboard</label>
  <row>
    <panel>
      <html>
        <h1>
          <b>Some bold text</b>
        </h1>
      </html>
    </panel>
  </row>
</dashboard>

I tried using Ctrl+Shift+F, but it only formats the XML code in the dashboard source code editor; the HTML code inside it is not getting formatted and remains as is.
I want to know which saved search is generating a particular lookup. How do I do that?
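One way to find it (a sketch; my_lookup is a placeholder for the lookup's name):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*outputlookup*my_lookup*"
| table title eai:acl.app eai:acl.owner search

This lists saved searches whose SPL contains an outputlookup referencing that name; lookups written by scripts or installed by apps won't show up here.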
Hi Splunkers, I have already configured the HF and installed the UF credentials app, but I can't see the Palo Alto logs in Splunk Cloud.

On the HF, inputs.conf:

[udp://5000]
index = xxxxx_pan
disabled = false
sourcetype = pan_log

The HF and the Splunk Cloud instance are communicating. Please help me.
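A first check (a sketch; it assumes the HF forwards its own _internal logs to Cloud, which is the usual setup): see whether the UDP input is producing any throughput for that index at all.

index=_internal source=*metrics.log* group=per_index_thruput series=xxxxx_pan
| timechart span=5m sum(kb) AS indexed_kb

If this is empty, the data is not leaving the HF: verify the firewall allows UDP 5000 to the HF, that the index exists in Splunk Cloud, and which sourcetype your Palo Alto add-on version expects (commonly pan:log for the current TA; check the add-on's docs).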
Hi All, I've been exploring various documentation and tutorials, but I'd love to hear from those who have hands-on experience. What are the best practices and recommended steps for configuring Kubernetes logs to seamlessly integrate with Splunk Enterprise? Are there any specific considerations or challenges I should be aware of during the setup process? Thanks in advance for sharing your expertise!
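For what it's worth, one commonly recommended path is the Splunk OpenTelemetry Collector for Kubernetes, pointed at a HEC endpoint on Splunk Enterprise. A minimal sketch, with the cluster name, endpoint, token, and index as placeholder assumptions:

helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
# install the collector and send container logs to a Splunk Enterprise HEC endpoint
helm install my-collector splunk-otel-collector-chart/splunk-otel-collector \
  --set clusterName=my-cluster \
  --set splunkPlatform.endpoint=https://splunk.example.com:8088/services/collector \
  --set splunkPlatform.token=<HEC_TOKEN> \
  --set splunkPlatform.index=k8s_logs

Typical considerations: create the index and HEC token up front, make sure port 8088 is reachable from the cluster, and expect high volume from system namespaces unless you filter them.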
Hi Mentors, I have searched YouTube and external sources to learn about use case creation. I can see that with Splunk Security Essentials we could create use cases, but I am new to Splunk and only know the basics, like fields and commands. When I ask people to teach me how to create a use case, literally everyone treats me badly; even my team leader is not ready to teach me, though he learned it at an institution five years ago. I don't have any friends with Splunk knowledge. I beg you, please, anyone, teach me how to create a use case in Splunk and the basics of creating one. I need this. I am the first graduate in my family and I cannot afford the huge fees to learn Splunk. Any mentors, please help me; I want to learn Splunk and then teach it to my team members in a simple way they can understand. Please help me, Mentors.
We are looking into Splunk Cloud as a solution instead of our regular Splunk Enterprise (on-premises) setup. To test the feasibility of sending data from external sources (Jira Cloud, Redmine), I wanted to install our own custom app for testing. Unfortunately, there is currently no way for us to do that in the free trial I have. We simply want to test whether we can index data from Jira Cloud to Splunk Cloud without having to use a heavy forwarder or universal forwarder.

Steps to replicate: Splunk Cloud Login > Manage Apps > no button for uploading a custom app/add-on.

Questions:
1. Is there a way to directly install custom apps/add-ons (originally built for Splunk Enterprise) in Splunk Cloud? We were thinking about compatibility issues, and whether the apps would work the same way.
2. Is there a way to gauge whether the quantity of data we want to send from external sources would require us to install a heavy/universal forwarder? (We are trying to avoid additional costs by taking Splunk Cloud, so we were wondering if we could do without them.)
Hi all, I have read through the Splunk documentation for session timeouts here, but these seem to apply to Splunk overall: Configure user session timeouts - Splunk Documentation. However, I am not able to make the user timeout user- or role-specific. Is there any solution for that? I read in a previous post that someone was able to do this per role, which would also work for me if it can be done: Session timeout for a Single username (not group) ... - Splunk Community. However, there wasn't much information on how they were able to do so. My main intent is to give admin and normal users the usual timeout, e.g. 1 hour, while dashboard accounts are not logged out. Any advice would be appreciated, as it seems to be doable.
How to correlate an index with dbxquery, with a condition or iteration? See the sample below. Thank you for your help.

index=company

CompanyID   CompanyName   Revenue
A           CompanyA      3,000,000
B           CompanyB      2,000,000
C           CompanyC      1,000,000

| dbxquery query="select * from employee where companyID in (A,B,C)"

OR iteration:

| dbxquery query="select * from employee where companyID ='A'"
| dbxquery query="select * from employee where companyID ='B'"
| dbxquery query="select * from employee where companyID ='C'"

CompanyID   EmployeeName   EmployeeEmail
A           EmployeeA1     empA1@email.com
A           EmployeeA2     empA2@email.com
A           EmployeeA3     empA2@email.com
B           EmployeeB1     empB1@email.com
B           EmployeeB2     empB2@email.com
B           EmployeeB3     empB3@email.com
C           EmployeeC1     empC1@email.com
C           EmployeeC2     empC2@email.com
C           EmployeeC3     empC3@email.com

Expected result:

CompanyID   CompanyName   Revenue     EmployeeName   EmployeeEmail
A           CompanyA      3,000,000   EmployeeA1     empA1@email.com
A           CompanyA      3,000,000   EmployeeA2     empA2@email.com
A           CompanyA      3,000,000   EmployeeA3     empA2@email.com
B           CompanyB      2,000,000   EmployeeB1     empB1@email.com
B           CompanyB      2,000,000   EmployeeB2     empB2@email.com
B           CompanyB      2,000,000   EmployeeB3     empB3@email.com
C           CompanyC      1,000,000   EmployeeC1     empC1@email.com
C           CompanyC      1,000,000   EmployeeC2     empC2@email.com
C           CompanyC      1,000,000   EmployeeC3     empC3@email.com

OR

CompanyID   CompanyName   Revenue     EmployeeName                         EmployeeEmail
A           CompanyA      3,000,000   EmployeeA1, EmployeeA2, EmployeeA3   empA1@email.com, empA2@email.com, empA2@email.com
B           CompanyB      2,000,000   EmployeeB1, EmployeeB2, EmployeeB3   empB1@email.com, empB2@email.com, empB3@email.com
C           CompanyC      1,000,000   EmployeeC1, EmployeeC2, EmployeeC3   empC1@email.com, empC2@email.com, empC3@email.com
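One way to get the first expected result (a sketch; it assumes a working DB Connect connection and that the dbxquery columns come back named exactly CompanyID, EmployeeName, EmployeeEmail):

index=company
| join type=inner max=0 CompanyID
    [| dbxquery query="select CompanyID, EmployeeName, EmployeeEmail from employee"]
| table CompanyID CompanyName Revenue EmployeeName EmployeeEmail

For the rolled-up variant, append: | stats values(EmployeeName) AS EmployeeName values(EmployeeEmail) AS EmployeeEmail BY CompanyID CompanyName Revenue. Note that join runs dbxquery once rather than iterating per row, and is subject to subsearch result limits, so this suits modestly sized employee tables.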
I'm currently using the token $results_link$ to get a direct link to alerts when they get triggered. I've also set the "Expires" field to 72 hrs. However, if the alerts get triggered over the weekend, the results are always expired when checked after 48 hours. Is it possible to have the alert results not expire after 48 hrs?
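One thing worth checking (a sketch; the stanza name and values are illustrative and should be verified against your environment): the email action carries its own time-to-live that can take precedence over the UI "Expires" setting, so setting both in savedsearches.conf may be needed.

[My instance alert]
# keep the dispatched search artifacts (the $results_link$ target) for 7 days
dispatch.ttl = 604800
# the email action's own ttl, which can otherwise override the Expires setting
action.email.ttl = 604800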