All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I need to understand which event types each search result record belongs to. My search: index="a" AND eventtype="*" I want the results to contain a field listing the matching event types. A table with columns _raw and eventtypes would be fine. We have 10k+ event types and thousands of events. Is this possible to achieve? Thanks.
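A possible approach, as a sketch: eventtype is already a multivalue field on each matching event, so it may be enough to flatten it into a single string (the eventtypes field name below is just illustrative):

index="a" eventtype="*"
| eval eventtypes=mvjoin(eventtype, ", ")
| table _raw eventtypes

mvjoin collapses all matching event types into one comma-separated value per event, which keeps the table to the two requested columns even with many event types defined.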
Hello Splunk Experts, I'm searching for ERROR and WARN entries in the application logs from different servers, and I'd like to collect these log lines into a stored area (a summary index, perhaps with its own sourcetype) to avoid searching again and again over a huge volume of data. I don't want to use a lookup because of the data volume. What is the procedure to get this done? Could someone please assist? Thanks in advance!
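A common pattern for this is a scheduled search that writes its results into a summary index with the collect command; a minimal sketch, where the index, sourcetype, and field names are assumptions rather than values from the question:

index=app_logs (log_level=ERROR OR log_level=WARN)
| fields _time host source log_level _raw
| collect index=summary_app_errors sourcetype=app_error_summary

The summary index (summary_app_errors here) has to exist beforehand, and the search would typically be saved and scheduled, for example every 15 minutes over the previous 15 minutes, so the summary stays current while later dashboards search only the much smaller summary data.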
Hi, I inherited a Splunk Enterprise environment. It is composed of 10 machines, divided into development and production (the latter with 2 clustered indexers). One machine serves as the Monitoring Console. I found an app (with several reports) present on both development and production. The report has a cron schedule of 44 * * * * in production and 12 1/13 * * * in development, and it produces a KV Store lookup (with the exact same name as the report). Other reports in other apps make use of the lookup.

On the Monitoring Console, under Search > Scheduler Activity > Scheduler Activity: Instance, the "Aggregate Scheduled Search Runtime" chart shows that same report at >60 Runtime (seconds) in 1-minute bins. How is that possible if the lookup (and not the report) is scheduled to run? If I click on a 1-minute bar in the chart, the drill-down opens another chart with, among other fields, PID and PPID, as well as Elapsed Time (e.g. 744617.8700 within 50 seconds! Are these seconds at all?). Trying to understand where these values come from (and what is running the report), I only find similar results with this query:

index=_introspection 20664 14912

and this is an example of the results (edited):

{"datetime":"08-02-2023 11:24:37.275 +0200","log_level":"INFO","component":"PerProcess","data":{"pid":"14912","ppid":"20664","status":"W","t_count":"12","mem_used":"61.352","pct_memory":"0.53","page_faults":"0","pct_cpu":"0.00","normalized_pct_cpu":"0.00","read_mb":"0.000","written_mb":"0.109","fd_used":"28","elapsed":"754858.4800","process":"splunkd","process_type":"search","search_props":{"sid":"scheduler__nobody_Q0dJLXNlYXJjaGhlYWRzLWdscGktc2VhcmNoZXM__RMD53efdbadd3a98c46d_at_1690213440_46074","user":"splunk-system-user","app":"biz-searchheads-glpi-searches","label":"glpi_states_table_lookup","provenance":"scheduler","scan_count":"0","delta_scan_count":"0","role":"head","mode":"historical","type":"scheduled"}}}

I disabled the report in both development and production, but the Monitoring Console chart above keeps showing the same results. Can somebody help me understand what is going on? How can I find out where the results on the Monitoring Console for that report come from? Is this from the lookup (and not the report)? Is there some hidden mechanism running the report even if it is disabled? Thanks!
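One way to confirm what the scheduler is actually running, and from which app and instance, is to query the scheduler logs in _internal directly; a sketch, using the saved search name that appears as the label in the introspection data above:

index=_internal sourcetype=scheduler savedsearch_name="glpi_states_table_lookup"
| stats count latest(_time) as last_run by host app user savedsearch_name status
| convert ctime(last_run)

If entries keep appearing after the report was disabled, the host and app columns should show which search head and which copy of the app is still scheduling it (for example a copy left in another app or user context).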
Hi there, I created a Splunk dashboard (Classic) which I want to download/export as a PDF. However, I am unable to do so because trellis layouts are not supported with PDF export. Also, when I try to print/export, the look of the dashboard widgets/panels gets hampered. Hence, I need help exploring the best way to download the dashboard view exactly as shown in the Classic view with the dark theme, as a snapshot or image format, so that all graphs and the look and feel stay intact. Also, is it possible to schedule an email that sends that downloaded dashboard image?
Hi Team, how can I export a dashboard (in Splunk Cloud) to anonymous users? Thanks.
02.08.2023 12:44:10.690 *INFO* [sling-threadpool-2cfa6523-0895-49ea-bb99-ae6f63c25cf6-32-Create Site from Template(aaa/jobs/abc)] bbb.CreateSiteFromSiteTemplateJobExecutor Private Site : ‘site4’ created by user : ‘admin’ with MRNumber :  ‘dr4’

I want to extract the site, user, and MRNumber, and create a table.
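A rex sketch along these lines might work; note the sample line uses curly quotes (‘ ’), so the character classes below accept either curly or straight quotes, and the output field names are just illustrative:

... | rex "Private Site : [‘'](?<site>[^‘’']+)[’'] created by user : [‘'](?<user>[^‘’']+)[’'] with MRNumber :\s+[‘'](?<mr_number>[^‘’']+)"
| table site user mr_number

The \s+ before the last quote covers the double space after "MRNumber :" in the sample event.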
I want to create a use case; below is the scenario. Suppose we have a device that creates a new temp user for every new session and deletes that user when the session ends. I want to check whether a user was created but not deleted within 24 hours. How can I detect this absence of an event?
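One way to approach an absence-of-event check is to look back over a window longer than 24 hours, group by user, and keep only users whose creation is older than 24 hours with no matching deletion; a sketch, where the index, action values, and field names are assumptions:

index=device_logs (action="user_created" OR action="user_deleted") earliest=-48h
| stats min(_time) as created_time values(action) as actions by temp_user
| where isnull(mvfind(actions, "user_deleted")) AND created_time < relative_time(now(), "-24h")
| convert ctime(created_time)

Saved as a scheduled alert, this flags every temp user that has existed for more than 24 hours without a corresponding delete event.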
I am performing a migration of an 8.2.2 Splunk instance into a new VM. I have copied the entire $SPLUNK_HOME (D:\Splunk) folder onto the new VM and ran the installer. The installer fails and rolls back. I have the logs with me; here is an excerpt of the failure:

MSI (s) (DC:AC) [14:20:49:588]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI3EE8.tmp, Entrypoint: FirstTimeRunCA
FirstTimeRun: Warning: Invalid property ignored: FailCA=.
FirstTimeRun: Info: Properties: splunkHome: D:\Splunk.
FirstTimeRun: Info: Execute first time run.
FirstTimeRun: Info: Enter. Args: "D:\Splunk\bin\splunk.exe", _internal first-time-run --answer-yes --no-prompt
FirstTimeRun: Info: SystemPath is: C:\Windows\system32\
FirstTimeRun: Info: Execute string: C:\Windows\system32\cmd.exe /c ""D:\Splunk\bin\splunk.exe" _internal first-time-run --answer-yes --no-prompt >> "C:\Users\********\AppData\Local\Temp\splunk.log" 2>&1"
FirstTimeRun: Info: WaitForSingleObject returned : 0x0
FirstTimeRun: Info: Exit code for process : 0x2
FirstTimeRun: Info: Leave.
FirstTimeRun: Error: ExecCmd failed: 0x2.
FirstTimeRun: Error 0x80004005: Cannot execute first time run.
CustomAction FirstTimeRun returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
Action ended 14:20:57: InstallFinalize. Return value 3.

Does anyone have any insights?
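One way to see the underlying error, rather than just exit code 0x2 from the MSI custom action, is to run the same first-time-run command the installer invokes by hand in an elevated command prompt; the command is taken verbatim from the excerpt above:

"D:\Splunk\bin\splunk.exe" _internal first-time-run --answer-yes --no-prompt

Run interactively, this should show the underlying failure on the console (the installer redirects it to the splunk.log file under the installing user's temp folder, as shown in the excerpt), which often points at a permissions or migration problem with the copied D:\Splunk folder.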
Description of the issue: the Defender 365 overview dashboard breaks whenever the status field is used. The root cause is that the SPL queries use a capitalized first character on the status values (New, InProgress, Resolved), while the add-on only ingests status values without the capitalized first letter (new, inProgress, resolved). The same issue can be found in many other dashboards. As an example, the below won't return any results:

`defender_atp_index` sourcetype="ms365:defender:incident:alerts"
| stats latest(status) AS status latest(severity) AS severity latest(assignedTo) AS assignedTo latest(category) AS category by incidentId
| chart dc(incidentId) over assignedTo by status
| eval Total=New + InProgress + Resolved
| fields assignedTo New InProgress Resolved Total
| addcoltotals

The Defender 365 overview dashboard is also broken because of a reference to the non-existent field entities{}.entityType:

`defender_atp_index` sourcetype="ms365:defender:incident:alerts"
| stats latest(status) AS status latest(severity) AS severity latest(assignedTo) AS assignedTo latest(category) AS category latest(entities{}.entityType) AS entityType by incidentId mitre_technique_id
| chart dc(mitre_technique_id) over entityType by category

Prerequisites:
Installed the latest Splunk Add-on for Microsoft Security
Successful ingestion of the below 3 sourcetypes with the Splunk Add-on for Microsoft Security:
ms:defender:atp:alerts
ms365:defender:incident
ms365:defender:incident:alerts
Installed the latest Microsoft 365 App for Splunk
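Until the app itself is fixed, a possible workaround is to normalize the ingested status values inside the panel search before the chart/eval steps; a sketch based on the first query above (the case() mapping assumes only the three values mentioned in the report, and fillnull is added so the Total eval survives when a column is missing):

`defender_atp_index` sourcetype="ms365:defender:incident:alerts"
| eval status=case(status=="new","New", status=="inProgress","InProgress", status=="resolved","Resolved", true(), status)
| stats latest(status) AS status latest(severity) AS severity latest(assignedTo) AS assignedTo latest(category) AS category by incidentId
| chart dc(incidentId) over assignedTo by status
| fillnull value=0 New InProgress Resolved
| eval Total=New + InProgress + Resolved
| fields assignedTo New InProgress Resolved Total
| addcoltotals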
Hi All, how do I find unwanted logs (noise) in CrowdStrike Falcon logs? Do you know which details can be filtered out of CrowdStrike Falcon logs?
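If the goal is to drop known-noisy events before they are indexed, the usual mechanism is a props.conf/transforms.conf rule on the indexers or heavy forwarder that routes matching events to nullQueue; a sketch, where the sourcetype name and the event names in the REGEX are assumptions used purely for illustration, not actual CrowdStrike recommendations:

# props.conf
[crowdstrike:events:sensor]
TRANSFORMS-drop_noise = drop_falcon_noise

# transforms.conf
[drop_falcon_noise]
REGEX = "event_simpleName":"(HypotheticalNoisyEventA|HypotheticalNoisyEventB)"
DEST_KEY = queue
FORMAT = nullQueue

Identifying which events are noise is usually done first with a search such as stats count by event_simpleName over a representative time range, then dropping the high-volume, low-value ones.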
In the GUI > Search app > search page, I used to change the search mode to Verbose, and I knew it persisted across sessions. For example, when I changed the mode to "Fast" and logged out, it still showed "Fast" after logging back in. But that no longer happens after we upgraded from 9.0.4 to 9.0.5. Why? [ui-prefs.conf, URL, or localStorage?]
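For reference, the setting that historically persisted the default search mode lives in ui-prefs.conf; a sketch of pinning it per user (path and stanza as I understand them, worth verifying against your version's ui-prefs.conf.spec):

# $SPLUNK_HOME/etc/users/<username>/search/local/ui-prefs.conf
[search]
display.page.search.mode = verbose

If the setting is present but no longer honored after the upgrade, that would point at a UI change (e.g. localStorage taking precedence) rather than a configuration problem.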
I have looked through the forums and can't find exactly what I am looking for. Here is my search and what I think should work, but I don't think I completely understand multisearch.

| multisearch
    [ search index=patch sourcetype=device host="bradley-lab" device_group=PRE* | where match(host,"bradley-lab")]
    [ search index=patch sourcetype=device host="bradley-lab" device_group=BFV* | where NOT match(host,"bradley-lab")]
| dedup extracted_host
| eval my_time=_time
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(my_time)
| rename extracted_host as device_Name, my_time as "Date Posted"
| table "Date Posted" device_group device_Name current_system_version latest_system_version status

host=bradley-lab will come from a token drilldown on a dashboard. If the host is bradley-lab, I want it to show all devices with device_group=PRE*, and if the host is anything else, I want it to show all devices with device_group=BFV*.
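Since both branches key off the same host token, one possible alternative is a single search that picks the device group from the token value with an eval; a sketch, where $host_tok$ stands for whatever the drilldown token is actually called in the dashboard:

index=patch sourcetype=device (device_group=PRE* OR device_group=BFV*)
| eval wanted_prefix=if("$host_tok$"=="bradley-lab","PRE","BFV")
| where like(device_group, wanted_prefix."%")
| dedup extracted_host
| eval my_time=_time
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(my_time)
| rename extracted_host as device_Name, my_time as "Date Posted"
| table "Date Posted" device_group device_Name current_system_version latest_system_version status

Note that the original branches both keep host="bradley-lab" in the base search, so the second branch's where NOT match(host,"bradley-lab") can never return anything; the sketch above drops the host filter and keys only off the token.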
Hi Team, I am using the below query to get my total closing balance:

index="abc*" sourcetype=600000304_gg_abs_ipc2 " AssociationProcessor - compareTransformStatsData : statisticData: StatisticData" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log"
| rex " AssociationProcessor - compareTransformStatsData : statisticData: StatisticData totalClosingBal=(?<totalClosingBal>[\d.E+-]+)"
| table _time totalClosingBal
| sort _time

I am currently getting this result: 7.71727634934E10. I want this E10 scientific notation shown as the actual number, i.e. 7.71727634934 × 10^10. Can someone guide me on how to do this in a Splunk query?
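One possible approach, assuming tonumber() parses the E-notation string in your environment: convert to a number and reformat it for display, appended to the existing search.

| eval totalClosingBal = tonumber(totalClosingBal)
| eval totalClosingBal = printf("%.2f", totalClosingBal)

tostring(tonumber(totalClosingBal), "commas") is an alternative last step if a thousands-separated display such as 77,172,763,493.40 is preferred.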
Ever since we upgraded to Lookup Editor 4.0.1, any KVstore changes made in the app don't save for some reason. It says they do, but they don't. CSV files work normally.
Hi Team, I have created the below panel to set up a drilldown and show raw logs, but it's not working for me. Can someone please help me with it?

<row>
  <panel>
    <title>Association BalanceStatistics - Send</title>
    <table>
      <search>
        <query>index="abc*" sourcetype=600000304_gg_abs_ipc2 " AssociationProcessor - compareTransformStatsData : statisticData: StatisticData" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" |rex " AssociationProcessor - compareTransformStatsData : statisticData: StatisticData totalOutputRecords=(?&lt;totalOutputRecords&gt;), totalInputRecords=(?&lt;totalInputRecords&gt;),busDt=(?&lt;busDt&gt;),fileName=(?&lt;fileName&gt;),totalClosingBal=(?&lt;totalClosingBal&gt;)"|table _time totalOutputRecords totalInputRecords busDt fileName totalClosingBal|sort _time</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">20</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">row</option>
      <option name="percentagesRow">false</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
      <drilldown>
        <set token="show_panel">true</set>
        <set token="selected_value1">$click.value1$</set>
      </drilldown>
    </table>
  </panel>
</row>
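The drilldown above only sets the tokens; nothing in the snippet consumes them, so clicking a row has no visible effect. A sketch of a second row that appears on click and shows the raw events for the clicked value (the token names match the ones set above; the search inside is just an illustration of using $selected_value1$):

<row depends="$show_panel$">
  <panel>
    <title>Raw events for $selected_value1$</title>
    <event>
      <search>
        <query>index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" "$selected_value1$"</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
    </event>
  </panel>
</row>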
Hi Team, I am using the below query to show my two fields, totalInputRecordsCount and totalOutputRecordsCount:

index="abc*" sourcetype = "600000304_gg_abs_ipc2" "Post ASSOCIATION" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log"
| rex " Post ASSOCIATION totalInputRecordsCount=(?<totalInputRecordsCount>), totalOutputRecordsCount=(?<totalOutputRecordsCount>),nonFinChargeAccounts=(?<nonFinChargeAccounts>),finChargeAccounts=(?<finChargeAccounts>)"
| table _time totalInputRecordsCount totalOutputRecordsCount

The result is a table of _time, totalInputRecordsCount, and totalOutputRecordsCount. What I want is that, on clicking the output records value, the two fields nonFinChargeAccounts and finChargeAccounts get displayed. Can someone guide me with the query?
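In a dashboard this would typically be done with a drilldown condition on the table that sets a token only when the totalOutputRecordsCount cell is clicked, plus a second panel that tables the other two fields; a Simple XML sketch, where the token name and panel layout are made up for illustration:

<table>
  <search>
    <query> ... the query above ... | table _time totalInputRecordsCount totalOutputRecordsCount</query>
  </search>
  <drilldown>
    <condition field="totalOutputRecordsCount">
      <set token="show_charge_detail">true</set>
      <set token="clicked_time">$row._time$</set>
    </condition>
    <condition></condition>
  </drilldown>
</table>

<!-- detail panel, only shown after a click on the output records column -->
<panel depends="$show_charge_detail$">
  <table>
    <search>
      <query> ... the query above ... | table _time nonFinChargeAccounts finChargeAccounts</query>
    </search>
  </table>
</panel>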
Hi Team, currently I am using the below query to show duration:

index="abc*" sourcetype=600000304_gg_abs_ipc2
| rex "\[(?<thread>Thread[^\]]+)\]"
| transaction thread startswith=" Started ASSOCIATION process for" endswith="Successfully completed ASSOCIATION process"
| timechart avg(duration) as duration span=1d
| eval duration=tostring(duration, "duration")
| sort _time

The output shows the duration as an HH:MM:SS-style string. I want to see the duration in minutes only. Can someone guide me?
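Since the duration coming out of transaction/timechart is already in seconds, one option is to skip the tostring() conversion and simply divide; a sketch:

index="abc*" sourcetype=600000304_gg_abs_ipc2
| rex "\[(?<thread>Thread[^\]]+)\]"
| transaction thread startswith=" Started ASSOCIATION process for" endswith="Successfully completed ASSOCIATION process"
| timechart avg(duration) as duration span=1d
| eval duration_minutes = round(duration/60, 2)
| table _time duration_minutes
| sort _time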
I am trying to dig through some records and get the q (query) parameter from the raw data, but I keep getting data back that includes the backslash after the requested value and everything that follows it (mostly as a unicode character representation, \u0026, which is an &). For example, I have this search query to capture the page from which a search is being made (i.e., "location"):

index="xxxx-data" | regex query="location=([a-zA-Z0-9_]+)+[^&]+" | rex field=_raw "location=(?<location>[a-zA-Z0-9%-]+).*" | rex field=_raw "q=(?<q>[a-zA-Z0-9%-_&+/]+).*"| table location,q

Which mostly works when viewing the Statistics tab, except that it occasionally returns the next URL parameter as well:

location: home_page, q: hello+world (this is ok)
location: about_page, q: goodbye+cruel+world\u0026anotherparam=anotherval (not ok)

The second result should just be goodbye+cruel+world without the following parameter. I have tried variations on a negated class like [^\\] for the backslash character, but everything I've tried has either resulted in an error because the final bracket ends up escaped, or the backslash character being ignored, like so:

Regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^\\\]).*"
Result: goodbye+cruel+world\u0026param=val

Regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^\\]).*"
Result: Error in 'rex' command: Encountered the following error while compiling the regex 'q=(?<q>[a-zA-Z0-9%-_&+/]+[^\]).*': Regex: missing terminating ] for character class.

Regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^\]).*"
Result: Error in 'rex' command: Encountered the following error while compiling the regex 'q=(?<q>[a-zA-Z0-9%-_&+/]+[^\]).*': Regex: missing terminating ] for character class.

Regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^\\u0026]).*"
Result: Error in 'rex' command: Encountered the following error while compiling the regex 'q=(?<q>[a-zA-Z0-9%-_&+/]+[^\u0026]).*': Regex: PCRE does not support \L, \l, \N{name}, \U, or \u.

Regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^u0026]).*"
Result: goodbye+cruel+world\u0026param=val

Regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+[^&]).*"
Result: goodbye+cruel+world\u0026param=val

Regex attempt: "q=(?<q>[a-zA-Z0-9%-_&+/]+).*"
Result: goodbye+cruel+world\u0026param=val

The data in the Events tab looks like:

Event
apple: honeycrisp
ball: baseball
car: Ferrari
query: param1=val1&param2=val2&param3=val3&q=goodbye+cruel+world&param=val
status: 200
... etc ...

So, how can I get the q value to return just the first parameter, ignoring anything that has a \ or & before it and terminating right after the q value? And please, if you would be so kind, include an explanation of why your suggestion works. Thanks
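A detail worth noting: in the class [a-zA-Z0-9%-_&+/], the %-_ piece is actually a character range from % (0x25) to _ (0x5F), and that range happens to include the backslash, which is likely why the match keeps running past \u0026. A sketch that sidesteps the issue by listing the characters explicitly (hyphen last so it is literal) and leaving backslash and & out of the class:

| rex field=_raw "q=(?<q>[A-Za-z0-9%+_-]+)"
| table location, q

Here the capture stops at the first character not in the class, so goodbye+cruel+world ends at the backslash (or at & in the unescaped form), with no negated class needed at all.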
I need to create a dashboard that will compare 2 sets of data from different times, so I need to bypass the time picker. I realize that I may do this by including an earliest=x latest=y statement in my search. What I am trying to do is combine an absolute date with a relative offset. The reason is that the absolute date in each of the two charts needs to be a variable from a button on the dashboard; in the example I am trying to build, this is a deployment date. I want to search x number of days (another variable) before and after that deployment date. For example:

index=my_index source=my_source earliest=07/19/2023:00:00:00 latest=07/19/2023:23:59:59

In this example the deployment date is the 19th of July. How would I write this to be 07/19/2023:23:59:59 +/- x days, so that I can make both the absolute date itself and the number of days variables tied to dropdown buttons in the dashboard?
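One way to turn "deployment date plus or minus N days" into earliest/latest values is a subsearch that computes epoch times from the dashboard tokens and hands them back to the outer search; a sketch, where $deploy_date$ and $days$ stand for whatever the dropdown tokens are actually named:

index=my_index source=my_source
    [ | makeresults
      | eval deploy=strptime("$deploy_date$", "%m/%d/%Y")
      | eval earliest=relative_time(deploy, "-$days$d@d")
      | eval latest=relative_time(deploy, "+$days$d@d")
      | return earliest latest ]

The return earliest latest step emits earliest=... latest=... terms into the outer search, which overrides the time picker for that panel; the second panel can use the same pattern with its own tokens.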
Hi All, I have logs as below:

Log1: Tue Aug 1 12:15:03 EDT 2023 10G 6.4G 64% /var
Log2: Tue Aug 1 12:15:03 EDT 2023 20G 5.9G 30% /opt
Log3: Tue Aug 1 12:15:02 EDT 2023 11G 7.2G 66% /uam
Log4: Tue Aug 1 12:15:02 EDT 2023 11G 7.2G 85% /mqr

Using the below query, I created a pie chart for my dashboard:

****
| rex field=_raw "(?ms)]\|(?P<host>\w+\-\w+)\|"
| rex field=_raw "(?ms)]\|(?P<host>\w+)\|"
| rex field=_raw "\]\,(?P<host>[^\,]+)\,"
| rex field=_raw "\]\|(?P<host>[^\|]+)\|"
| rex field=_raw "(?ms)\|(?P<File_System>(\/\w+){1,5})\|"
| rex field=_raw "(?ms)\|(?P<Disk_Usage>\d+)"
| rex field=_raw "(?ms)\s(?<Disk_Usage>\d+)%"
| rex field=_raw "(?ms)\%\s(?<File_System>\/\w+)"
| regex _raw!="^\d+(\.\d+){0,2}\w"
| regex _raw!="/apps/tibco/datastore"
| rex field=_raw "(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\s\d"
| rex field=_raw "\[(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\]"
| rex field=_raw "(?ms)\d\s(?<Total>\d+(\.\d+){0,2})\w\s\d"
| rex field=_raw "(?ms)G\s(?<Used>\d+(\.\d+){0,2})\w\s\d"
| eval Available=(Total-Used)
| lookup Environment_List.csv "host"
| search Environment="UAT"
| eval UAT=if(Disk_Usage <= 79, "Below80%", "Above80%")
| stats count by UAT

I have 3 other environments (SIIT, DIT, DIT2), for which I created pie charts using the above query with the environment name changed. So now I have 4 pie charts in 4 separate panels of the dashboard. I need to get all 4 pie charts into one panel and want to create a drilldown from that panel (something like shown in the attachment). Please help to modify the query to get all the pie charts in one panel in the dashboard.

Your kind consideration is highly appreciated! Thank you!
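One possible direction: instead of hardcoding one environment per panel, keep all four environments in a single search and split the visualization by Environment using a trellis layout; a sketch of the tail end of the query (everything up to the lookup stays the same as above):

| lookup Environment_List.csv "host"
| search Environment IN ("UAT","SIIT","DIT","DIT2")
| eval Usage_Band=if(Disk_Usage <= 79, "Below80%", "Above80%")
| stats count by Environment Usage_Band

With the panel's pie visualization set to trellis, split by Environment, this draws all four pies inside one panel, and the panel drilldown can then read the clicked trellis/Usage_Band values to drive a detail panel.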