All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, could you give me some tips? Search query S1:

index=S1 | bla bla bla | stats values(dstIP) values(dstPort) values(srcIP) values(srcUser) by URL

In S1's results, I want to add sum(Byte); the Byte field lives in S2. S2's logs also contain dstIP, dstPort, and Byte. By matching S1's dstIP against S2's dstIP, I want to add sum(Byte) to the search results.
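One way to sketch this (index and field names are taken from the question; the AS aliases and the assumption that both indexes share a comparable dstIP field are mine):

```
index=S1
| stats values(dstPort) AS dstPort values(srcIP) AS srcIP values(srcUser) AS srcUser by URL dstIP
| join type=left dstIP
    [ search index=S2
      | stats sum(Byte) AS totalBytes by dstIP ]
```

Note that join is subject to subsearch result limits; for large data, searching both indexes together and aggregating with a single stats avoids those limits.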
Hello, I am trying to get around the inefficiency of the transaction command by using stats. My goal is to correlate user sessions and find the average duration of the session over given time frames (I suspect a dip may indicate a problem). The search I have so far appears to only work on hour intervals, is there any way to get it to parse on tighter spans? I couldn't seem to find a way to preserve _time for use with the timechart. My search is as follows:

foo
| rename user_id as user
| eval dHour=strftime(_time, "%D %H")
| stats count as eventCount earliest(_time) as earliestTime latest(_time) as latestTime by dHour user
| eval duration=latestTime - earliestTime
| stats avg(duration) as avgDuration sum(eventCount) as numEvents avg(eventCount) as avgEventsPerSession count(user) as numberOfUsers by dHour
| where avgDuration<1600 AND numEvents>30000 AND avgEventsPerSession<100 AND numberOfUsers>12000
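A sketch of the same pipeline using bin to bucket time at an arbitrary span (the 15m span and the sessionBucket alias are illustrative). Because the final group-by key is renamed back to an epoch _time, timechart-style rendering keeps working; note also that dc(user) may be what you want if numberOfUsers should count distinct users rather than rows:

```
foo
| rename user_id as user
| bin _time as sessionBucket span=15m
| stats count as eventCount earliest(_time) as earliestTime latest(_time) as latestTime by sessionBucket user
| eval duration=latestTime - earliestTime
| stats avg(duration) as avgDuration sum(eventCount) as numEvents avg(eventCount) as avgEventsPerSession dc(user) as numberOfUsers by sessionBucket
| rename sessionBucket as _time
```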
Since performing a recent upgrade, Splunk is constantly reporting (in Health Status) that Searches Delayed is above threshold. E.g.: "The percentage of non high priority searches delayed (23%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=21. Total delayed Searches=5." In order to troubleshoot this, I need to understand what Splunk calls a Delayed Search. There does not seem to be any information in the docs about the PeriodicHealthReporter and the data behind these results.
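One way to see which scheduled searches may be feeding that number is to inspect the scheduler's own log; the status values that appear (e.g. deferred, skipped) vary by version, so treat this as a starting point rather than the health reporter's exact definition:

```
index=_internal sourcetype=scheduler
| stats count by status, savedsearch_name
| sort - count
```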
There are two time fields per event:

_time (Splunk's default event field)
occurtime (a timestamp within the body of the event)

I only want to show events where the field in the body of the event, occurtime, is not more than two days older than _time. I have done the following to convert occurtime to epoch time:

eval occur=strptime(occurtime,"%Y-%m-%dT%H:%M:%S")

Example current output:

_time: 2020-04-23 05:07:03.151
occurtime: 2020-02-24T17:42:38.572Z
occur: 1582594958.000000

I just need to figure out how < functions with time. Thank you!
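Once both values are epoch numbers, plain arithmetic works: two days is 2*24*60*60 = 172800 seconds. A sketch (the %3QZ suffix handles the fractional seconds and trailing Z in the sample value; adjust if your format differs):

```
... base search ...
| eval occur = strptime(occurtime, "%Y-%m-%dT%H:%M:%S.%3QZ")
| where (_time - occur) <= 172800
```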
Let's assume the 1st drop-down values are 1. Workplace, 2. Sales, 3. Marketing, and 4. Production. The 2nd drop-down values depend upon the locations.

Case 1: If I select
India - show only Sales & Marketing
USA - show only Production
Argentina - show only Workplace

Case 2: In the same way, I have 4 panels, each with some query. If I select Workplace, it should show the Workplace panel and hide all other panels.

Case 3: Location India and the 2nd drop-down as Sales: it should show only the Sales panel.

How can I get this? Please help me out. Thanks in advance.
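In Simple XML this is usually a cascading dropdown (the second input's choices come from a search driven by the first token) plus depends attributes on the panels. A sketch; the lookup file, token names, and condition values are made up for illustration:

```xml
<fieldset>
  <input type="dropdown" token="location">
    <label>Location</label>
    <choice value="India">India</choice>
    <choice value="USA">USA</choice>
    <choice value="Argentina">Argentina</choice>
  </input>
  <input type="dropdown" token="dept">
    <label>Department</label>
    <!-- hypothetical lookup mapping each location to its departments -->
    <search>
      <query>| inputlookup depts_by_location.csv | search location="$location$" | fields dept</query>
    </search>
    <fieldForLabel>dept</fieldForLabel>
    <fieldForValue>dept</fieldForValue>
    <change>
      <!-- show exactly one panel: set its token, unset the rest -->
      <condition value="Sales">
        <set token="show_sales">true</set>
        <unset token="show_workplace"></unset>
        <unset token="show_marketing"></unset>
        <unset token="show_production"></unset>
      </condition>
      <!-- one condition per department -->
    </change>
  </input>
</fieldset>
<row>
  <panel depends="$show_sales$">
    <!-- Sales panel content -->
  </panel>
</row>
```

A panel with depends="$show_sales$" is hidden whenever that token is unset, which is what gives the show-one/hide-others behaviour.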
We have an on-prem Splunk instance (was 7.0.3, have now upgraded to 8.0.4 but are still seeing the same behaviour). When I try to index files, or DBX connections, the file indexer correctly reports the number of matching files in the directory, but the index shows 0 events. I have also tried indexing databases using DB Connect, which again shows results during initial testing and configuration, but after setup the index remains with 0 events in it. The $SPLUNK_HOME/var/log/splunk/splunk_app_db_connect_server.log file shows this:

2020-04-24 04:15:27.525 +0000 [QuartzScheduler_Worker-25] INFO org.easybatch.core.job.BatchJob - Job '<JOBNAME>' starting
2020-04-24 04:15:27.525 +0000 [QuartzScheduler_Worker-25] INFO org.easybatch.core.job.BatchJob - Batch size: 1,000
2020-04-24 04:15:27.525 +0000 [QuartzScheduler_Worker-25] INFO org.easybatch.core.job.BatchJob - Error threshold: N/A
2020-04-24 04:15:27.525 +0000 [QuartzScheduler_Worker-25] INFO org.easybatch.core.job.BatchJob - Jmx monitoring: false
2020-04-24 04:15:27.626 +0000 [QuartzScheduler_Worker-25] INFO c.s.d.s.dbinput.recordreader.DbInputRecordReader - action=db_input_record_reader_is_opened task=<JOBNAME> query=SELECT * FROM "<DATABASE>"."dbo"."<TABLE>"
2020-04-24 04:15:27.726 +0000 [QuartzScheduler_Worker-25] INFO org.easybatch.core.job.BatchJob - Job '<JOBNAME>' started
2020-04-24 04:15:27.776 +0000 [QuartzScheduler_Worker-25] INFO c.s.dbx.server.dbinput.recordwriter.HecEventWriter - action=write_records batch_size=50
2020-04-24 04:15:27.776 +0000 [QuartzScheduler_Worker-25] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector
2020-04-24 04:15:27.776 +0000 [QuartzScheduler_Worker-25] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector record_count=50
2020-04-24 04:15:27.778 +0000 [QuartzScheduler_Worker-25] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
******<snip>*******
2020-04-24 04:15:27.778 +0000 [QuartzScheduler_Worker-25] ERROR org.easybatch.core.job.BatchJob - Unable to write records
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
******<snip>*******
2020-04-24 04:15:27.778 +0000 [QuartzScheduler_Worker-25] INFO org.easybatch.core.job.BatchJob - Job '<JOBNAME>' finished with status: FAILED

I've turned off SSL checkboxes, so assume it's a mismatch on port expectations based on some other googling, and can confirm that an SPL of:

| dbxquery query="SELECT TOP 10 * FROM \"<DATABASE>\".\"dbo\".\"<TABLE>\"" connection="<CONNECTION>"

returns results, just like the DB Connect configuration does. I'm really struggling to discover any reason why my indexes aren't being populated, and would really appreciate any help.

P
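That SSLException usually means an HTTPS client is talking to a plaintext port (or the reverse). DB Connect writes rows through the HTTP Event Collector, so its SSL setting has to match the HEC side. A sketch of the HEC global stanza to compare against (path and values are examples, not a prescription):

```
# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
port = 8088
# If enableSSL=1, DB Connect must address HEC via https://;
# if 0, via http://. A mismatch produces exactly
# "Unrecognized SSL message, plaintext connection?"
enableSSL = 1
```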
As you can see, the heatmap shows only 10 rows. How do I increase this? I would like to use the heatmap to show all 100 servers we have.
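If the heatmap is fed by timechart (or chart), the 10-row cap is likely its default series limit, which limit=0 removes. A sketch; the span, aggregation, and field names are invented for illustration:

```
... | timechart span=1h limit=0 useother=false avg(cpu_pct) by host
```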
We have an idea to use the logs from these systems for DDoS detection. Was wondering if anyone has props/transforms that will parse/normalize/model them?
When creating the local/props.conf and local/transforms.conf, do I need to copy the entire default/props.conf and default/transforms.conf files into local/ or do I simply start with blank files and add the sections I need?
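Start with blank files: at runtime Splunk merges default/ and local/ attribute by attribute, with local/ winning, so local/props.conf only needs the stanzas and settings you actually change. For instance (the stanza name and value here are hypothetical):

```
# local/props.conf -- only the overrides, not a copy of default/
[my:sourcetype]
TRUNCATE = 20000

# every other attribute in default/props.conf still applies unchanged
```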
Is it possible to use Splunk Connect for Zoom in a managed Splunk Cloud environment without an on-prem Heavy Forwarder? As far as I'm aware, Zoom only supports webhook-based logging, which isn't compatible with Splunk Cloud (for some reason). Using a Heavy Forwarder isn't an option, but I'm open to other workarounds if any exist. The scenario is:

- Running a managed Splunk Cloud instance on version 7.2.9
- Running an Inputs Data Manager (IDM) instance on version 7.2.9
- No heavy forwarder
I am having trouble forwarding data to Splunk Cloud from a Windows host. Previously, Linux deployments gave no issues. I am seeing the following error, which I have researched without coming to any conclusions:

TcpOutputProc - 'sslCertPath' deprecated; use 'clientCert' instead

The cert path itself looks good until further down splunkd.log, where the Unix convention of forward slashes causes another error:

ERROR SSLCommon - Can't read certificate file C:\Program Files\SplunkUniversalForwarder\etc/apps/app/default/app_server.pem errno=33558530 error:02001d23562:system library:fopen:No such file or directory

Would like to know ways this has been solved before. Thank you
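The deprecation line is only a warning; the fopen error is the actual failure, and since Windows generally accepts mixed path separators, it more likely means the file is missing or unreadable at that path than a slash problem. The modern setting the warning points at looks roughly like this (stanza name and password are placeholders; the path is the one from the log):

```
# %SPLUNK_HOME%\etc\system\local\outputs.conf on the forwarder
[tcpout:splunkcloud]
clientCert = $SPLUNK_HOME\etc\apps\app\default\app_server.pem
sslPassword = <certificate password, if the key is encrypted>
```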
Hi, I have a model trained outside of Splunk using the K-means algorithm and a sampled dataset. I haven't tried the built-in algorithms that come with the MLTK app in Splunk yet, as I am more familiar with using Python scripts on my local computer for ML. Is there a way to import a trained model into Splunk which can then be used to "apply" to new data? We have Splunk Enterprise version 7.3.3. Any help would be appreciated.
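MLTK does let you register custom Python algorithms via its ML-SPL API, but for K-means specifically a lighter-weight trick is to export the trained centroids and score in plain SPL, since assignment is just nearest-centroid distance. A sketch for a two-feature, two-cluster model; the centroid coordinates and feature names are placeholders from a hypothetical model:

```
your_search
| eval d0 = pow(x - 1.2, 2) + pow(y - 3.4, 2)
| eval d1 = pow(x - 5.6, 2) + pow(y - 0.7, 2)
| eval cluster = if(d0 < d1, 0, 1)
```

For many clusters, the same idea scales better via a lookup of centroids joined against each event.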
I’m testing the Splunk App for Nextcloud. I installed a Splunk enterprise server, and a Splunk universal forwarder (my Nextcloud instance and the server are on different hosts). Looks like it’s working, and I do collect data from my Nextcloud instance, however not all categories of data.

Shortlist of what IS collected:
- Successful and failed logins
- Number of files and folders operations
- Files and folders activity
- Most of the “security” data

Shortlist of what is NOT being retrieved (and not displayed in the Splunk web pages), mainly some usage data:
- Users (active and defined)
- Shares and storage (number of files, free disk space)
- Hardware
- A few other types of data

I would welcome ideas about what is left to configure, or what I’m doing wrong with the setup. Thanks in advance! Jean-Claude
I am installing the trial version of Splunk Enterprise on Windows 10 pro 64bit. When I use a domain account the installation fails. But when I use a local account the installation succeeds. Do the trial versions of Splunk Enterprise and Universal Forwarder support domain accounts?
Hi there, really basic question but I can't find a detailed answer. Can someone explain the different uses of () and [] in the Search app, with an example? Thanks in advance.
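Roughly: parentheses group Boolean terms, while square brackets run a subsearch whose results are substituted into the outer search. A sketch with invented index and field names:

```
index=web (status=404 OR status=500)

index=web [ search index=firewall action=blocked | fields src_ip ]
```

The first search matches web events whose status is 404 or 500; in the second, the subsearch returns a set of src_ip values that are ANDed into the outer search as filter conditions.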
I have the Palo Alto Networks app and the Palo Alto Networks Add-on installed, and the logs are being ingested, but nothing appears on the dashboards. Is this likely due to using a custom index?
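Quite possibly: the Palo Alto app's dashboards search a specific index by default (historically pan_logs), so a custom index usually means updating the app's index constraint, typically a macro or eventtype. A sketch; the macro name and index are assumptions to verify against your app version's macros.conf:

```
# local/macros.conf inside the Palo Alto Networks app
[pan_index]
definition = (index=my_custom_pan_index)
```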
I want Splunk to reach out to a few goofy devices on my network and grab JSON responses. Is this possible? Can I get a few examples? To be clear, I would like Splunk to poll (reach out to), say, http://dummy.restapiexample.com/api/v1/employees every 10 seconds, a REST API with a JSON response, and log it in an index so I can do my thing in Splunk with the data.
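Yes: a scripted input is the simplest built-in way (there are also REST-polling add-ons on Splunkbase). A minimal sketch; the URL comes from the question, while the {"data": [...]} response envelope, app paths, and index name are assumptions:

```python
#!/usr/bin/env python3
"""Scripted-input sketch: poll a REST endpoint and print one JSON
event per line; Splunk indexes whatever the script writes to stdout."""
import json
import urllib.request

API_URL = "http://dummy.restapiexample.com/api/v1/employees"

def fetch(url):
    """Blocking GET that parses the JSON body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def to_events(payload):
    """Turn the (assumed) {"data": [...]} envelope into one compact
    JSON string per record, so Splunk treats each line as an event."""
    return [json.dumps(rec, sort_keys=True) for rec in payload.get("data", [])]

def main():
    for line in to_events(fetch(API_URL)):
        print(line)

# inputs.conf on the collecting instance (paths/index hypothetical):
# [script://$SPLUNK_HOME/etc/apps/my_api_app/bin/poll_api.py]
# interval = 10
# index = my_api_index
# sourcetype = _json
```

Splunk runs the script on the configured interval (10 seconds here), so the script itself needs no polling loop.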
Hello, I have a situation where I need to check whether a time field, report_date, in format "%Y-%m-%d %H:%M:%S", happened between 7 AM and 4 PM of that same day. I can't figure out how to do that comparison; I don't know how to get the hour value from my report_date field. I'm trying to do that so I can filter how many reports were made in a specific period of the day, to tell which shift received the report (the receiving time is not the same as the event time in Splunk in that particular scenario), and I need to filter by shift. So far, what I have:

index=raw_maximo INCIDENTE=I* GR_RESP="OPERACAO"
| eval shift1=strptime(report_date,"%Y-%m-%d %H:%M:%S")
| where shift1 >= "07:00:00" AND shift1 < "16:00:00" (something has to be changed here; I'm comparing time with a string at the moment)
| stats count(INCIDENTE) (I don't really remember what goes here, but not relevant, it's just a count...)
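One way to fix the comparison is to pull the hour back out of the parsed epoch with strftime and compare it as a number (the report_hour and reports_in_shift aliases are illustrative):

```
index=raw_maximo INCIDENTE=I* GR_RESP="OPERACAO"
| eval report_hour = tonumber(strftime(strptime(report_date, "%Y-%m-%d %H:%M:%S"), "%H"))
| where report_hour >= 7 AND report_hour < 16
| stats count(INCIDENTE) AS reports_in_shift
```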
I have a token I want to set up when I first initialize the dashboard: [stats count | eval search=strftime(now(), "mysearch%y%m%d%H%M%S.csv")] But this gets interpreted dynamically throughout, changing the name of the file. I just want a timestamp literal I can reuse. I've been at it for a while using fieldformat, print, etc. Thanks!
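In Simple XML (Splunk 7.2+), an init block evaluates its token expressions once when the dashboard loads, so the value stays fixed afterwards instead of being re-evaluated per search. A sketch; the token name is an assumption:

```xml
<form>
  <init>
    <!-- evaluated once at load time, not on every dispatch -->
    <eval token="csv_name">strftime(now(), "mysearch%y%m%d%H%M%S.csv")</eval>
  </init>
  <!-- reference the frozen value anywhere as $csv_name$ -->
</form>
```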
Can someone help me with setting up a search that can pull version information from a Windows Defender event 1151 log? I have the TA for Windows Defender app installed, and I believe the data is being parsed correctly. I am fairly new to Splunk and have been unable to determine what search syntax does what I want.
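Event 1151 is Defender's health report, which carries engine, platform, and signature version details. A starting-point sketch; the index and field names below are guesses, so replace them with whatever your TA actually extracts (check with the field picker or | fieldsummary):

```
index=wineventlog source="*Windows Defender*" EventCode=1151
| stats latest(Engine_Version) AS engine latest(Signature_Version) AS signatures by host
```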