All Topics



Hello,

I've tried to create a pie chart depicting the different disks and their free/used space. Via trellis I want to show it by instance (the disk in question, for example C:\). I've used the following SPL to find the needed values for free and used (full) disk space, but it doesn't give me a correct pie chart. I'm fairly sure it is because I need to turn the headers or the fields in a way that's usable by the pie chart, but I can't seem to find a good way how.

Can anyone help me?

Thanks a lot! André
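A minimal sketch of one way to shape the data for a trellis pie chart, assuming Windows Perfmon disk data with counter, instance, and Value fields (these names are assumptions; adjust to the actual sourcetype):

index=perfmon sourcetype="Perfmon:LogicalDisk" counter="% Free Space" instance!="_Total"
| stats latest(Value) as Free by instance
| eval Used = 100 - Free
| untable instance status percent

With one row per instance/status pair, a pie chart split by status and trellised by instance should render one pie per disk.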
Hi,

Can someone please tell me how I can add the output of a search to an HTML panel?

Example: in the attached dashboard below, I want to add the value of the field Schedule to the HTML panel "Report : EOD".

Output required: in the HTML panel, we need the value as
Report : EOD
T2S Synchronized Standard day (05/08/2024) in PPE

Current code:

<row>
  <panel>
    <html>
      <div style="text-align:center;font-style:bold;color:blue;font-size:150%">Report :
        <div style="display:inline-block;text-align:center;font-style:bold;color:red;font-size:100%">EOD</div>
      </div>
    </html>
  </panel>
</row>
<row>
  <panel>
    <table>
      <search>
        <query>| inputlookup T2S-PPE-Calendar.csv
| eval today = strftime(now(), "%d/%m/%Y")
| where DATE = today
| eval Schedule = Schedule." (".today.") in PPE"
| fields Schedule</query>
        <earliest>1722776400.000</earliest>
        <latest>1722865326.000</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">100</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
    </table>
  </panel>
</row>
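One common SimpleXML approach is to run the search hidden and capture the first result row into a token with a <done> handler, then reference the token inside the HTML panel. A minimal sketch reusing the Schedule search from the question (the token name schedule_tok is illustrative):

<search>
  <query>| inputlookup T2S-PPE-Calendar.csv
| eval today = strftime(now(), "%d/%m/%Y")
| where DATE = today
| eval Schedule = Schedule." (".today.") in PPE"
| fields Schedule</query>
  <done>
    <set token="schedule_tok">$result.Schedule$</set>
  </done>
</search>
<html>
  <div style="text-align:center;font-style:bold;color:blue;font-size:150%">Report :
    <div style="display:inline-block;font-style:bold;color:red;font-size:100%">EOD</div>
  </div>
  <div style="text-align:center">$schedule_tok$</div>
</html>

$result.Schedule$ refers to the Schedule field of the first result row once the search finishes.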
We have a scheduled report that writes data to an index using collect. It was running fine on schedule and the information was appearing in the index. It then started working only intermittently, and now the scheduled occurrences have stopped placing data into the index entirely. The search is still perfectly functional and has results; I cannot work out why they are not being recorded. There has been no change to the search or the systems.

Search used:

| ldapsearch search="(&(objectClass=user)(!(objectClass=computer)))" attrs="pwdLastset,sAMAccountName,extensionAttribute8,info"
| fields "_time", "extensionAttribute8", "pwdLastSet", "sAMAccountName", "info"
| where isnotnull('extensionAttribute8')
| collect index="ldap_ad"

I tried adding spool=true at the end and doing addinfo prior to the collect; neither makes a difference to the search or the report, and no data appears in ldap_ad.
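Since the search runs fine interactively, a first diagnostic step is to confirm whether the scheduled runs actually completed and returned results; the scheduler log records this per saved search. A sketch (the saved-search name is a placeholder):

index=_internal sourcetype=scheduler savedsearch_name="My LDAP Collect Report"
| table _time status result_count run_time

Skipped or zero-result runs here would point at scheduler concurrency or permissions (scheduled searches run as the report owner) rather than at the collect command itself.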
I have raw CSV data which has file names and data inside the files, separated by double quotes and commas. I am trying to use the following regex:

^\"(?<file_name>\w.*)\"\,\"(?<links_emb>\w.*)\"

which is taking the results as one event, due to which the count is mismatching. One event has multiple CSV lines, as shown below, and a few events have one file name and the data inside the file name. One file contains multiple file types. Can you help me with a regex which can take one line as one event?

"filename_Time15151515.html","http://testdata1.html"
"filename_Time15151515.html","http://testdata2.gif"
"filename_Time15151515.html",""http://testdata3.doc"
"filename_Time15151515.html",""http://testdata4.xls"
"filename_Time15151515.html",""http://testdata5.aspx"
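If every CSV line should be its own event, that is usually an event-breaking fix in props.conf rather than a field-extraction fix, and the extraction can then tolerate the stray doubled quotes. A sketch, assuming a sourcetype named csv_links (a placeholder):

[csv_links]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

With one line per event, this extraction handles both one and two opening quotes before the link:

^"(?<file_name>[^"]+)",\s*"{1,2}(?<links_emb>[^"]+)"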
How do I add a radio button or a checkbox so the user can choose between a logical AND or a logical OR between the Index and Sourcetype selections? This would allow viewing, in one selection, searches performed either by the index or by the sourcetype.
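A minimal SimpleXML sketch, assuming the index and sourcetype inputs set tokens named idx_tok and st_tok (names are illustrative). Because index=foo OR sourcetype=bar is valid SPL, the operator itself can simply be a token:

<input type="radio" token="op_tok">
  <label>Combine index and sourcetype with</label>
  <choice value="AND">AND</choice>
  <choice value="OR">OR</choice>
  <default>AND</default>
</input>

<query>index=$idx_tok$ $op_tok$ sourcetype=$st_tok$ | stats count by index, sourcetype</query>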
I already edited controller-info.xml and put javaagent.jar in the application, but the application doesn't show up after I restart the services. Why does this happen?
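Assuming this refers to the AppDynamics Java agent: editing controller-info.xml alone is not enough; the JVM must also be started with the -javaagent flag pointing at the jar, or the agent never loads and the application never registers with the controller. A sketch (paths are illustrative):

java -javaagent:/opt/appdynamics/javaagent.jar -jar myapp.jar

If the flag is already present, the agent's log directory under the agent install path usually shows why registration failed (for example, wrong controller host/port or account access key).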
I have the following search. How can I add index information to the results?

|tstats max(_time) as _time, where index=windows by host,index
|append [|metadata type=hosts index=win index=linux ]
| eval now=now()
| eval diff= now - lastTime
| search diff > 18000
| eval notreportingsince=tostring(diff,"duration")
| table host lastTime notreportingsince
| convert ctime(lastTime) as lastTime
| table host notreportingsince lastTime,index
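A hedged sketch of a rewrite: tstats already returns the index when it is in the by clause, the comma before where is a syntax error, and the max(_time) alias has to be used consistently (the original aliases it to _time but later reads lastTime). This version drops the metadata append for simplicity; if hosts that stopped reporting entirely must still appear, a lookup of expected hosts is the more reliable pattern:

| tstats max(_time) as lastTime where index=windows OR index=win OR index=linux by host, index
| eval diff = now() - lastTime
| where diff > 18000
| eval notreportingsince = tostring(diff, "duration")
| convert ctime(lastTime) as lastTime
| table host, index, notreportingsince, lastTime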
Hi all,

We have several synthetic tests in Splunk Observability Cloud, and I want to add their metrics to the Splunk Enterprise deployment we have inside a private network. I have seen that I could ingest this data with the Splunk Infrastructure Monitoring Add-on, but it is not entirely clear to me how to configure it. Can anyone help me with the steps I need to follow to achieve this? I already have the add-on installed on the HF and on the search head of Splunk Enterprise.

Thanks in advance.
JAR
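As a hedged sketch of the usual flow (worth verifying against the add-on's docs for your version): configure an account in the add-on's setup page with your Observability Cloud realm and an access token, then pull metrics on the search head with the add-on's sim command using a SignalFlow program. The metric name below is a placeholder; synthetics metrics generally live under the synthetics.* namespace:

| sim flow query="data('synthetics.run.duration.time.ms').publish()"

A scheduled search wrapping a query like this and writing to a local metrics index (for example via mcollect) is one way to persist the results inside the private network.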
Hi -

I am looking to optimise this search by removing dedup. The idea of the search is to remove duplicate paymentId fields and create a table with the fields specified under the stats count. The search currently takes 300+ seconds to run over 4 hours' worth of data.

AND card_issuer_stats AND acqRespCode!="0" AND acqRespCode!=85 AND acqRespCode!=100 AND gwyCode="test123"
|dedup paymentId
|stats count by acqCode,acqRespCode,mainBrand
|sort -count

Any help would be greatly appreciated, thanks.
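Since dedup keeps whole events in memory and only three fields survive to the stats, one common rewrite is to count distinct paymentId values per group instead. A sketch, assuming each paymentId maps to a single acqCode/acqRespCode/mainBrand combination:

AND card_issuer_stats AND acqRespCode!="0" AND acqRespCode!=85 AND acqRespCode!=100 AND gwyCode="test123"
| stats dc(paymentId) as count by acqCode, acqRespCode, mainBrand
| sort -count

One caveat: if the same paymentId appears under more than one combination, it is counted once per combination here, whereas dedup would keep only its first event.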
Hi Team,

I am developing a Splunk dashboard to provide the weekly restart status of all enterprise application servers. Since this dashboard is intended to show the restart status for the last 7 days across a huge server count, a total of 18 panels have been developed, and using the <condition match="****"> option I am hiding/un-hiding these panels for the last 7 days. During validation there is a delay in loading the panels due to performance, and I am not sure whether the design approach I followed is correct. Kindly advise how to effectively improve the response time while loading the panels.
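One standard fix for slow multi-panel dashboards is a single base search shared by the panels via post-process, so the expensive part runs once instead of 18 times. A minimal SimpleXML sketch (index, sourcetype, fields, and token are placeholders):

<search id="base_restarts">
  <query>index=os sourcetype=restart_status earliest=-7d | stats latest(status) as status by host, day</query>
</search>

<panel depends="$show_day1$">
  <table>
    <search base="base_restarts">
      <query>| where day="monday" | stats count by status</query>
    </search>
  </table>
</panel>

Post-process searches only transform the base results, so the per-panel work stays cheap even with all 18 panels defined.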
Hi Splunkers,

I have the following task: I need to compare 2 different Splunk instances that should be deployed in the same way, but apparently are not, so I have some sub-tasks to perform these checks. One of them is this: on the first instance, some fields deployed by the previous Splunk admin should be present (as you can imagine, if I'm here asking about this, no documentation has been produced). Those fields should have been replicated to the second instance by migrating some apps and add-ons, but some of them could be missing. So, the idea is: avoiding the most obvious way, which is GUI -> Settings -> Fields, is there another way to ask Splunk "hey, could you list all the fields that are inside you"? The idea is a search, or recovering them from the command line, to obtain 2 files and compare them, for example 2 different txt/csv files.
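One way to list configured field extractions without the GUI is the REST API, which can be exported to CSV on each instance and diffed. A sketch:

| rest /servicesNS/-/-/data/props/extractions
| table title eai:acl.app attribute value
| outputcsv instance_a_fields.csv

The companion endpoints /data/props/calcfields and /data/props/fieldaliases cover calculated fields and aliases; and for fields actually present in the data (rather than configured), | fieldsummary over a sample search is the complement.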
I want to export results of a search in pdf format but it shows  
Hello, we have an issue where our Splunk manager and search head increase their RAM usage over the course of 1 to 2 weeks. The splunkd service and some Python scripts are the ones that slowly increase RAM usage over time, and the splunkd service sometimes fails because there is not enough RAM for it to execute tasks. We have 16 GB of RAM; after I restart the splunkd service, usage goes down to 4.6 GB. As I said, it then slowly climbs back up to 15.8 GB over time. As you can see here, it goes up to 12 GB. Current Splunk version: 9.2.0.1. Is there a known memory-leak bug in Splunk itself? Has somebody had the same issue, and if so, how did you resolve it? Thank you.
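To see which process type is actually growing between restarts, the introspection data Splunk collects about itself can be charted; a sketch (the host filter is a placeholder):

index=_introspection host=<manager> sourcetype=splunk_resource_usage component=PerProcess
| timechart span=1h max(data.mem_used) by data.process_type

This separates splunkd server memory from search processes and modular-input Python scripts, which narrows down where the growth comes from before filing a support case.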
Hi everyone, I am referring to the documentation on High Availability Clustering Configuration, and I came across the following statement: "An eligible 'standby' cluster manager will not set its redundancy state to 'active' upon consecutive loss of heartbeat to the manager which is currently in active state. The administrator must manually change the cluster manager redundancy state to 'active' or 'standby'." Could someone please guide me on how to manually change the cluster manager redundancy state to 'active' or 'standby' in Splunk? What are the exact steps or commands required to perform this operation? Thank you in advance for your help!
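For what it's worth, in recent 9.x releases the switch is exposed as a CLI command on the cluster manager; a hedged sketch, worth verifying against the Managing Indexers and Clusters manual for your exact version:

splunk cluster-manager-redundancy -switch-mode active

Run the equivalent with -switch-mode standby on the manager being demoted; the same operation is also available over the management port via the cluster manager redundancy REST endpoint.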
Hi guys,

Hope you are all doing well. I have recently started to use Splunk ES, and I am trying to create security incidents in ServiceNow for notables. I am using the ServiceNow Security Operations integration add-on for this, and I have created a workflow action to create the incident. I am using the search below in the workflow action, but I am not able to create any incidents. Please let me know if I am missing anything. Thanks in advance.

| expandtoken rule_title rule_description drilldown_searches
| fields title rule_description src dest user file_path file_hash file_name _time source severity event_hash
| eval src=coalesce(src, src_ip), dest = coalesce(dest, dest_ip)
| fillnull value=N/A dvc src dest user file_path file_hash file_name
| eval external_link = xyz
| eval md5_hash = if(file_hash == "N/A", "N/A", if(len(file_hash) == 32, file_hash, "N/A"))
| eval sha256_hash = if(file_hash == "N/A", "N/A", if(len(file_hash) == 64, file_hash, "N/A"))
| eval snow_event_ts = strftime(_time, "%m-%d-%Y %H:%M:%S")
| eval severity = case(severity=="informational", 0, severity=="low", 4, severity=="medium", 3, severity=="high", 2, severity=="critical", 1)
| eval ticket_contents = "short_description \"".title."\""
| eval ticket_contents = ticket_contents." assignment_group \"ABC\""
| eval ticket_contents = ticket_contents." contact_type \"SIEM\""
| eval ticket_contents = ticket_contents." description \"".rule_description."\""
| eval ticket_contents = ticket_contents." source_ip \"".src."\""
| eval ticket_contents = ticket_contents." dest_ip \"".dest."\""
| eval ticket_contents = ticket_contents." u_offence_id \"".event_hash."\""
| eval ticket_contents = ticket_contents." u_source \"".src."\""
| eval ticket_contents = ticket_contents." u_destination \"".dest."\""
| eval ticket_contents = ticket_contents." u_md5_hash \"".md5_hash."\""
| eval ticket_contents = ticket_contents." u_sha256_hash \"".sha256_hash."\""
| eval ticket_contents = ticket_contents." u_event_timestamp \"".snow_event_ts."\""
| eval ticket_contents = ticket_contents." u_event_name \"".source."\""
| eval ticket_contents = ticket_contents." severity \"".severity."\""
| return $ticket_contents
Does Splunk Enterprise support Ubuntu?
I have a single value in a classic dashboard. When I click it, I would like it to navigate to another classic dashboard and show a specific chart. Is this possible? If yes, how can this be achieved?
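Yes; in classic SimpleXML the single value can carry a <drilldown><link> that points at the target dashboard and passes a token through the URL. A sketch (app name, dashboard ID, and token are placeholders):

<single>
  <search><query>index=main | stats count</query></search>
  <drilldown>
    <link target="_blank">/app/my_app/target_dashboard?form.chart_tok=show</link>
  </drilldown>
</single>

On the target dashboard, wrapping the chart's panel in <panel depends="$chart_tok$"> makes that specific chart appear only when arriving via the drilldown.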
Hello Splunkers,

I have the following query returning the search results:

index="demo1"
| search "metrics.job.overall_status"="FAILED" OR "metrics.job.overall_status"="PASSED" metrics.app="*"
| eval timestamp=strftime(floor('metrics.job.end_ts'), "%Y-%m-%d %H:%M:%S")
| sort 0 metrics.app timestamp
| streamstats current=f last(metrics.job.overall_status) as prev_status last(timestamp) as prev_timestamp by metrics.app
| fillnull value="NONE" prev_status
| fillnull value="NONE" prev_timestamp
| eval failed_timestamp=if(metrics.job.overall_status="FAILED" AND (prev_status="NONE" OR prev_status!="FAILED"), timestamp, null())
| table metrics.app, metrics.job.overall_status, prev_status, timestamp, prev_timestamp, failed_timestamp

The result is null in every entry. What is wrong? Even though there are FAILED statuses matching the conditions above, the failed_timestamp results are null(). Can anyone please share how to correct this?
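The usual culprit with dotted field names: inside an eval expression, a name like metrics.job.overall_status must be wrapped in single quotes (as the query already does for metrics.job.end_ts), otherwise the comparison never matches and if() falls through to null(). A sketch of the failing line corrected:

| eval failed_timestamp=if('metrics.job.overall_status'="FAILED" AND (prev_status="NONE" OR prev_status!="FAILED"), timestamp, null())

The same quoting applies anywhere a dotted field is read inside eval or where.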
Hi, we have a map visualization in a Splunk Studio dashboard where we use the marker type. We could configure a tooltip that shows the values when a marker is hovered over. Is it possible to show the tooltip by default, always, without hovering over it? Thanks in advance.
Hi everyone,

I'm currently using VMware vRealize Log Insight to collect logs from ESXi hosts, vCenter servers, and NSX components, and I then forward these logs to Splunk. However, I've noticed that Log Insight doesn't always parse logs correctly, so I'm considering switching to direct integration using the Splunk Add-ons for VMware and NSX.

My questions:
Log volume reduction: for those who have used Log Insight, what kind of log volume reduction have you achieved through filtering and aggregation before forwarding logs to Splunk?
License usage: how does Splunk license usage compare between using Log Insight for pre-processing and direct ingestion with the Splunk Add-ons?
Best practices: are there any best practices or tips for optimizing Splunk license usage with either approach?

Context:
Current log volume: approximately 300 GB per day (raw).
Goals: improve log parsing accuracy while optimizing Splunk license usage.

Any insights or experiences would be greatly appreciated! Thanks in advance!
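Independent of which collection path wins, indexing-time filtering on the heavy forwarder is the standard lever for license reduction with the direct add-on route; events routed to the nullQueue are never indexed and do not count against the license. A sketch that drops low-value ESXi messages (the sourcetype and regex are placeholders to adapt):

props.conf:
[vmw-syslog]
TRANSFORMS-drop_noise = drop_esxi_info

transforms.conf:
[drop_esxi_info]
REGEX = vmkernel:.*\s(verbose|info)\s
DEST_KEY = queue
FORMAT = nullQueue

Measuring per-sourcetype volume in the license usage report before and after gives a direct comparison against whatever reduction Log Insight's filtering achieved.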