All Topics


I want to count the number of events and the median length of events per sourcetype in Splunk. I'm trying to figure out the average/median size of events for each sourcetype. By size, I mean the character length of the raw events. I then want to multiply the count of events by the median size to get an idea of which sourcetypes contain big events, so that I can use that data for event size reduction, if that is possible.

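A minimal sketch of one way to get there, assuming len(_raw) is an acceptable measure of event size (the index and time range are placeholders):

    index=*
    | eval event_length=len(_raw)
    | stats count as event_count median(event_length) as median_length by sourcetype
    | eval approx_total_chars=event_count*median_length
    | sort - approx_total_chars
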
I have a query like this:

    | dbxquery connection=xxxxx query="select xxx FROM xxx WHERE xxx and to_char(LOG_DATE_TIME,'YYYY-MM-DD')='2022-10-13'"
    | iplocation SRC_IP
    | stats values(LOG_DATE_TIME) as TIME dc(City) as countCity list(City) as city values(SRC_IP) as sourceIp by CIF USER_CD
    | eval time = strptime(TIME,"%Y-%m-%d %H:%M:%S.%3N")
    | eval differenceMinutes=(max(time)-min(time))/60
    | fields - time
    | search countCity>1 AND differenceHours>1

The query displays the result like this: [screenshot of current results]

I want it to have a result like this. Example (LOG_DATE_TIME is string-type data):

    Jakarta - Bogor:
        2022-10-13 09:03:33.539 - 2022-10-13 09:00:55.885
        (already converted to epoch timestamps)
        1665626613.539000 - 1665626455.885000 = 158 (in seconds)
        then 158/60 = 2.633 minutes
    Bogor - Jakarta = 9.22 minutes
    Jakarta - Bogor = 360 minutes
    Bogor - Jakarta = 240 minutes

How should my query be, in order to achieve that result?

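A rough sketch of one possible direction: keep the individual events instead of collapsing them with stats, and use streamstats to pair each event with the previous one per user (field names are taken from the post; untested against real dbxquery output):

    | dbxquery connection=xxxxx query="select xxx FROM xxx WHERE xxx and to_char(LOG_DATE_TIME,'YYYY-MM-DD')='2022-10-13'"
    | iplocation SRC_IP
    | eval event_time=strptime(LOG_DATE_TIME, "%Y-%m-%d %H:%M:%S.%3N")
    | sort 0 CIF USER_CD event_time
    | streamstats current=f window=1 last(event_time) as prev_time last(City) as prev_city by CIF USER_CD
    | where isnotnull(prev_time)
    | eval route=prev_city." - ".City
    | eval differenceMinutes=round((event_time-prev_time)/60, 3)
    | table CIF USER_CD route differenceMinutes
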
Is there a way to configure the account for the add-on TA-jira_issue_input via a configuration file rather than through the UI?

I want to input into Splunk the "events" of my fire alarms from all the branch offices. Is there a way I can manually create an index=firealarm and periodically fill fields I will create, such as:

    date: 26 october
    branch: 01
    alarmid: 125
    reason: smoking
    etc.

I will add new events every time an alarm is triggered. I know I can do this in Excel, but I want to store these data in Splunk and build dashboards too.

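A minimal sketch of one possible approach, writing a hand-entered event into the index with collect (this assumes an index named firealarm has already been created; the field values are just the ones from the post):

    | makeresults
    | eval branch="01", alarmid="125", reason="smoking"
    | collect index=firealarm

Each run of such a search adds one event; makeresults supplies the timestamp, so a separate date field may be unnecessary.
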
Hello, I have the following query:

    index=* "There was an error trying to process"
    | table _raw

logs:

    2022-10-25 22:10:59.937 ERROR 1 --- [rTaskExecutor-1] c.s.s.service.InboundProcessingFlow : There was an error trying to process PPositivePay121140399F102520220942.20221025094304862.ach from Inbox.
    2022-10-25 22:10:57.824 ERROR 1 --- [rTaskExecutor-1] c.s.s.service.InboundProcessingFlow : There was an error trying to process FPositivePay121140399Q102420222215.20221024221617018.ach from Inbox.
    2022-10-25 22:10:57.824 ERROR 1 --- [rTaskExecutor-2] c.s.s.service.InboundProcessingFlow : There was an error trying to process FPositivePay121140399W102520220113.20221025011346442.ach from Inbox.
    2022-10-25 22:11:53.729 ERROR 1 --- [rTaskExecutor-2] c.s.s.service.InboundProcessingFlow : There was an error trying to process PPositivePay121140399Q102420222215.20221024221617018.ach from Inbox.

I would need to alter the search query so that the output becomes:

    Time                 file_name
    2022-10-25 15:10:49  PPositivePay121140399F102520220942.20221025094304862.ach
    2022-10-25 15:10:59  FPositivePay121140399Q102420222215.20221024221617018.ach
    2022-10-25 15:11:09  FPositivePay121140399W102520220113.20221025011346442.ach
    2022-10-25 15:11:14  PPositivePay121140399Q102420222215.20221024221617018.ach

Thanks @gcusello

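A minimal sketch of one possible approach, using rex to pull the file name out of the message (the regex is an assumption based on the sample events above):

    index=* "There was an error trying to process"
    | rex field=_raw "trying to process (?<file_name>\S+) from Inbox"
    | eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | table Time file_name
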
I would like a non-admin Splunk Cloud user to be able to configure the inputs of the add-on: Splunk Add-on for Java Management Extensions. Currently, he gets a 403 error when he goes to /app/Splunk_TA_jmx/configuration. Which capabilities are required to configure this add-on? Thank you.

Title may be a bit confusing, so here's an example of what I'm trying to achieve. I want to convert a table that looks like this:

    _time                user    action
    2022-01-01 10:00:00  user_1  login
    2022-01-01 10:00:10  user_2  login
    2022-01-01 11:30:20  user_1  logout
    2022-01-01 11:40:00  user_1  login
    2022-01-01 12:00:00  user_1  logout
    2022-01-01 12:01:00  user_2  logout

Into this:

    user    login_time           logout_time
    user_1  2022-01-01 10:00:00  2022-01-01 11:30:20
    user_2  2022-01-01 10:00:10  2022-01-01 12:01:00
    user_1  2022-01-01 11:40:00  2022-01-01 12:00:00

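A minimal sketch of one possible approach, pairing each login with the following logout per user via transaction (the base search is a placeholder; field names are from the post):

    index=your_index action=login OR action=logout
    | transaction user startswith="action=login" endswith="action=logout"
    | eval login_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | eval logout_time=strftime(_time+duration, "%Y-%m-%d %H:%M:%S")
    | table user login_time logout_time

transaction sets _time to the first event's timestamp and duration to the span, so _time+duration is the logout time.
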
I'm trying to redact the Description field from the Service WinHostMon input, to get something like this.

Before:

    Type=Service
    Name="LoremIpsum"
    DisplayName="Lipsum service"
    Description="Bla bla bla bla bla."
    Path="C:\path\to\software.exe"
    ServiceType="Unknown"
    StartMode="Manual"
    Started=false
    State="Stopped"
    Status="OK"
    ProcessId=123

After:

    Type=Service
    Name="LoremIpsum"
    DisplayName="Lipsum service"
    Description="redacted"
    Path="C:\path\to\software.exe"
    ServiceType="Unknown"
    StartMode="Manual"
    Started=false
    State="Stopped"
    Status="OK"
    ProcessId=123

I have a Windows host running a Splunk UF, which sends the data to a Splunk HF, which sends it to Splunk Cloud. On the Splunk HF I have already tried two approaches; both failed.

Approach 1: Splunk HF > system/local/props.conf

    [source::service]
    SEDCMD-redact=s/\/Description=.+\n/\/Description="redacted"\n/g

Approach 2: Splunk HF > system/local/props.conf

    [source::service]
    TRANSFORMS-my_transf = remove-desc

Splunk HF > system/local/transforms.conf

    [remove-desc]
    REGEX = (?mi)((?:.|\n)+Description=).+(\n(?:.|\n)+)
    FORMAT = $1"redacted"$2
    DEST_KEY = _raw

So, how can I redact the Description field?

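For what it's worth, a minimal sketch of a simpler SEDCMD (this assumes the events arrive with the default WinHostMon sourcetype and that the HF is the first full Splunk instance to parse them; the stanza name is an assumption, and matching the quoted value avoids both the stray \/ escapes of Approach 1 and the multiline regex gymnastics of Approach 2):

    # props.conf on the first parsing instance (the HF here)
    [WinHostMon]
    SEDCMD-redact_desc = s/Description="[^"]*"/Description="redacted"/
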
I am having a brain fart trying to figure out how to find the total bytes per application and the percentage of each app out of the total bytes. For example:

    app  bytes in GB  percentage
    SSL  300GB        23%
    DNS  100GB        13%
    etc  etc          etc

My current search is this:

    index=foo
    | eventstats sum(bytes) as total_bytes
    | stats sum(bytes) as total first(total_bytes) as total_bytes by app
    | eval CompliancePct=round(total/total_bytes,2)

Any help would be appreciated.

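A minimal sketch of one possible rewrite; the main fixes are taking the grand total with eventstats after the per-app stats and multiplying the ratio by 100 (index name from the post; the GB conversion assumes the bytes field is in bytes):

    index=foo
    | stats sum(bytes) as app_bytes by app
    | eventstats sum(app_bytes) as total_bytes
    | eval gb=round(app_bytes/1024/1024/1024, 1)
    | eval percentage=round(100*app_bytes/total_bytes, 1)
    | table app gb percentage
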
Hello, I am creating some reports to measure the uptime of hardware we have deployed, and I need a way to filter out multiple date/time ranges that match up with maintenance windows. We are utilizing a data model and tstats, as the logs span a year or more.

The (truncated) data I have is formatted like so (time range: Oct. 3rd - Oct. 7th):

    |tstats summariesonly=true allow_old_summaries=true count(device.status) as count
        from datamodel=Devices.device
        where device.status!="" AND device.customer="*" AND device.device_id="*"
        by device.customer, device.device_id, device.name, device.status _time

    device.customer  device.device_id    device.name       device.status  _time       count
    ppt              webOS-205AZXCA8162  Sao Paulo Office  offline        2022-10-04  314
    ppt              webOS-205AZXCA8162  Sao Paulo Office  offline        2022-10-05  782
    ppt              webOS-205AZXCA8162  Sao Paulo Office  offline        2022-10-06  749
    ppt              webOS-205AZXCA8162  Sao Paulo Office  offline        2022-10-07  1080
    ppt              webOS-205AZXCA8162  Sao Paulo Office  online         2022-10-04  510
    ppt              webOS-205AZXCA8162  Sao Paulo Office  online         2022-10-05  658
    ppt              webOS-205AZXCA8162  Sao Paulo Office  online         2022-10-06  691
    ppt              webOS-205AZXCA8162  Sao Paulo Office  online         2022-10-07  360
    ppt              webOS-205AZXCA8162  Sao Paulo Office  warning        2022-10-04  1
    ppt              webOS-205AZXCA8162  Sao Paulo Office  warning        2022-10-06  2
    ppt              webOS-205AZXCA8162  Sao Paulo Office  warning        2022-10-07  1

As the reports will be run by other teams ad hoc, I was attempting to use a 'blacklist' lookup table to allow them to add the devices, time ranges, or device AND time range they wish to exclude from the results. That lookup table is formatted as such:

    type        start                          end                            deviceID            note
    time        2022-10-03T13:10:30.000-04:00  2022-10-04T14:10:30.000-04:00                      test range 10-04-2022 1:30 through 2:10 in EST UTC-4
    device                                                                    12345
    timedevice  2022-10-04T13:10:30.000-04:00  2022-10-05T14:10:30.000-04:00  webOS-205AZXCA8162
    time        2022-10-06T13:10:30.000-04:00  2022-10-06T14:10:30.000-04:00                      test range 10-06-2022 1:30 through 2:10 in EST UTC-4
    device                                                                    webOS-205AZXCA8122

In my head, this works as a report they run on the total timeframe they wish to analyze, and then the devices, timeframes, and timeframe/device events are removed as entered in the lookup table. My biggest hang-up right now is finding a way to exclude the unknown quantity of time or timedevice blacklist entries from the total list of results.

Thank you for any help you can provide!

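For what it's worth, a rough sketch of one possible approach (untested): fan every result row out against every blacklist row with join max=0, flag any row a blacklist entry matches, and keep the rest. The lookup name maintenance_blacklist and the timestamp format string are assumptions (adjust %z if the colon in the offset isn't parsed in your environment); everything else is taken from the post:

    |tstats summariesonly=true allow_old_summaries=true count(device.status) as count
        from datamodel=Devices.device
        where device.status!="" AND device.customer="*" AND device.device_id="*"
        by device.customer, device.device_id, device.name, device.status _time
    | eval join_key=1
    | join type=left max=0 join_key [
        | inputlookup maintenance_blacklist
        | eval join_key=1
        | eval start_epoch=strptime(start, "%Y-%m-%dT%H:%M:%S.%3N%z")
        | eval end_epoch=strptime(end, "%Y-%m-%dT%H:%M:%S.%3N%z")
        | table join_key type start_epoch end_epoch deviceID ]
    | eval matches_entry=case(
        type=="device" AND 'device.device_id'==deviceID, 1,
        type=="time" AND _time>=start_epoch AND _time<=end_epoch, 1,
        type=="timedevice" AND 'device.device_id'==deviceID AND _time>=start_epoch AND _time<=end_epoch, 1,
        true(), 0)
    | eventstats max(matches_entry) as is_blacklisted by device.customer, device.device_id, device.name, device.status, _time
    | where is_blacklisted==0
    | dedup device.customer device.device_id device.name device.status _time
    | fields - join_key type start_epoch end_epoch deviceID matches_entry is_blacklisted
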
I looked around quite a bit and could not find specifically what I am looking for. I have a user who can create dashboards and reports; however, on newly created or existing reports and dashboards he doesn't have the option to change permissions, e.g. private to global, or private / app / all apps. Which capability in Splunk roles enables these options?

We were receiving logs via IMAP before, but it suddenly stopped indexing data. No recent changes were made on our end. Our architecture is IMAP > IMAP mailbox > UF > Splunk indexer. How can we receive email again? We've checked the mailbox and found delivered emails. Also, how can we find the server name/host name of the IMAP server? Which files/permissions do we need to check? Please help.

Scenario/Requirements:

We have one eStreamer instance reporting from Firepower Management Console (FMC#1) to our Heavy Forwarder (HF#1) at HQ in Domain#1. We have another eStreamer instance reporting from FMC#2 to our HF#2 in another location in Domain#2. We want to redirect FMC#2 in Domain#2 to send eStreamer reporting to HF#1 in Domain#1, and to have each eStreamer instance send to a separate index, with each instance running at a different time.

If I understand the documentation correctly, I cannot run two instances of eStreamer at the same time and have to schedule them at separate times. How do I accomplish this? Also, I have been under the impression that I need to clone the TA-estreamer add-on to a different directory and then update its indexes.conf and inputs.conf, but I am not sure what else I would need to change. I would appreciate any help getting this working based on the scenario/requirements above.

Hi Community,

We have a cluster setup for our Splunk install where all the data is indexed at the data layer (data from heavy forwarders, indexers, and even the _internal data from the search head). The size of the indexes on the Splunk search head should currently be about 1 MB, but I notice that one of the indexes and a few internal indexes still receive data. This increases disk usage on the search head in addition to the growth of the indexes on the indexers. When I check the last event received for the index on the search head, it shows 8 months ago, both in the GUI and in the backend files. But when I check the same on any of the indexers, the last event was received recently. My question is: can I delete the DB data files on the search head, or are there steps I need to follow before removing the DB files directly?

Regards,
Pravin

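For reference, a minimal sketch of the usual CLI route rather than deleting bucket directories by hand (the index name is a placeholder; clean irreversibly deletes that index's data on the instance it runs on, so it would be run only on the search head here):

    ./splunk stop
    ./splunk clean eventdata -index <index_name>
    ./splunk start
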
I have a text box in a Splunk dashboard, and I'm trying to find out how I can turn values entered into the text box, separated by commas, into an OR clause. For example, with these values entered into the text box:

    102.99.99, 103.99.93, 203.23.21

this search:

    index=abc sourcetype=abc src_ip="$ip$"

would translate to:

    index=abc sourcetype=abc src_ip="102.99.99 OR 103.99.93 OR 203.23.21"

Any suggestions?

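A minimal sketch of one possible approach, using a subsearch with format to expand the token into a real OR clause (this assumes plain comma-separated values, with or without spaces, in the text box):

    index=abc sourcetype=abc [
        | makeresults
        | eval src_ip=split(replace("$ip$", " ", ""), ",")
        | mvexpand src_ip
        | table src_ip
        | format ]

format renders each subsearch row as src_ip="..." joined with OR, which is the clause the literal quoted string above is missing.
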
I want to test whether my ITSI KPIs are working as expected, so I'm creating fake events with collect that should trigger the KPI. However, the existing onboarding ingests data that prevents the KPI from reaching the expected state. E.g. I onboard my fake data at minute 1, at minute 2 the real data is onboarded, and at minute 5 the KPI base search runs; it takes the last state and sees that everything is correct. How can I disable all data onboarding in an easy way for running tests?

I am getting fewer events when using the rename command in Splunk (compared to the same search without rename). What could be the reason behind this?

Hello everyone! I am working in a test environment where I only have one Splunk instance. I edited the journal.zst file in one of my buckets (I have a backup), just to test data integrity, so now it's corrupted. My question is: is there a way to not lose all the events in this bucket? I tried fsck and rebuild, but the bucket is still corrupted, with the data integrity check being unsuccessful, which is normal. I'm not sure whether it's possible to face such an issue in the real world, but I'm curious to know what the best strategy would be. Any help would be appreciated.

How many duplicated events do we have? What percentage of events are duplicated? And what is the difference between the counts of duplicated and unique events?

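A minimal sketch of one way to measure this, treating events with identical _raw as duplicates (that definition, and the index/time range, are assumptions):

    index=* earliest=-24h
    | stats count by _raw
    | stats sum(count) as total_events count as unique_events sum(eval(count-1)) as duplicate_events
    | eval duplicate_pct=round(100*duplicate_events/total_events, 2)
    | eval unique_vs_duplicate_diff=unique_events-duplicate_events
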
Inner join is not displaying any results. The search works; however, nothing is showing up on the screen:

    index = tenable
    | rename hostnames as host.name
    | table host.name
    | join type=inner host.name
        [search (index=assetpanda) | fields host.name]
    | table host.name

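For what it's worth, a minimal sketch of a stats-based alternative that avoids joining on a dotted field name (index names are from the post; the host field names on each side are assumptions):

    (index=tenable) OR (index=assetpanda)
    | eval host_key=case(index=="tenable", hostnames, index=="assetpanda", 'host.name')
    | stats dc(index) as index_count by host_key
    | where index_count==2
    | table host_key
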