Hi all. I wish to display each value's count in a table. For example: Computer A has 100 sessions and Computer B has 50 sessions, and I want to display the 100 and the 50 alongside "Computer A" and "Computer B". Thanks!
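A minimal sketch of one way to build such a table, assuming the computer name lives in a field called Computer and using a placeholder index name (adjust both to your data):

```
index=your_index
| stats count AS sessions BY Computer
| table Computer sessions
```

stats count BY Computer counts events per computer; if a session is identified by a dedicated field such as session_id, dc(session_id) would count distinct sessions instead.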
I found this, but I am unable to replicate it, and I am not understanding where I am messing up. Problem: I feed btool output into Splunk and chop it up by stanza.

/opt/splunk/etc/apps/Splunk_TA_windows/default/transforms.conf [xmlsecurity_eventcode_errorcode_action_lookup]
/opt/splunk/etc/system/default/transforms.conf CAN_OPTIMIZE = True
/opt/splunk/etc/system/default/transforms.conf CLEAN_KEYS = True
/opt/splunk/etc/system/default/transforms.conf DEFAULT_VALUE =
/opt/splunk/etc/system/default/transforms.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/default/transforms.conf DEST_KEY =
/opt/splunk/etc/system/default/transforms.conf FORMAT =
/opt/splunk/etc/system/default/transforms.conf KEEP_EMPTY_VALS = False
/opt/splunk/etc/system/default/transforms.conf LOOKAHEAD = 4096
/opt/splunk/etc/system/default/transforms.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/transforms.conf MV_ADD = False
/opt/splunk/etc/system/default/transforms.conf REGEX =
/opt/splunk/etc/system/default/transforms.conf SOURCE_KEY = _raw
/opt/splunk/etc/system/default/transforms.conf WRITE_META = False
/opt/splunk/etc/apps/Splunk_TA_windows/default/transforms.conf case_sensitive_match = false
/opt/splunk/etc/apps/Splunk_TA_windows/default/transforms.conf filename = xmlsecurity_eventcode_errorcode_action.csv

I then wanted to extract the fields; for example, "SOURCE_KEY = _raw" should be my key/value pair! I hoped to accomplish this with:

(transforms.conf)

[dotheparsething]
REGEX = \s([\S-]+)\s=\s([^\/\n]+)
LOOKAHEAD = 100000
FORMAT = $1::$2
REPEAT_MATCH = true

(props.conf)

[(?::){0}splunk:config:btool:*]
TRUNCATE=10000
MAX_EVENTS=10000
KV_MODE = none
BREAK_ONLY_BEFORE = conf[\s]+\[
#SEDCMD-removespaces = s/\ +/\ /g
REPORT-dotheparsething = dotheparsething

But I am getting nothing! Regex101 seems happy with my regex.
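One way to isolate whether the problem is the regex or the props/transforms wiring is to try the same pattern at search time with rex (max_match=0 plays the role of REPEAT_MATCH). A sketch, where the sourcetype name is only a guess at what the btool events are indexed as:

```
index=your_index sourcetype=splunk:config:btool:transforms
| rex max_match=0 "\s(?<key>[\S-]+)\s=\s(?<value>[^\/\n]+)"
| table key value
```

If this extracts correctly but REPORT-dotheparsething does not, the suspect is the props stanza matching (the `[(?::){0}...]` sourcetype pattern) rather than the regex itself.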
Is it possible to change the colors: High to red, Medium to yellow, Low to green? If it is possible, please recommend how. Best Regards, CR
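Assuming this is a table in a Simple XML dashboard and the column is named severity (both assumptions), a color map along these lines is the usual approach; the hex values are just common red/yellow/green choices:

```
<format type="color" field="severity">
  <colorPalette type="map">{"High":#DC4E41,"Medium":#F8BE34,"Low":#53A051}</colorPalette>
</format>
```

This goes inside the `<table>` element of the panel, and the field attribute must match the column name exactly.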
Hi everyone, I just installed the App for Postgres on my search head and the required Add-on for Postgres on the db server. Is there any configuration instruction so I can configure the connection? I have been searching the Internet and YouTube and did not find any docs. Thanks for the help, pawelF
Hello Team, I previously asked the same question here: https://community.splunk.com/t5/Splunk-Search/How-to-write-a-search-to-compare-two-weeks-errors-and-highlight/m-p/617827#M214708 but I am not getting the correct results with the suggested workaround, so I have modified my SPL as below. A release lasts 14 days, and I need to compare the events between "Current_release_error" and "Last_release_error"; if a new error is only present in the current release, I want to call out those results. Please suggest some workarounds.

index="ABC" source="/abc.log" ("ERROR" OR "EXCEPTION") earliest=-14d latest=now()
| rex "Error\s(?<Message>.+)MulesoftAdyenNotification"
| rex "fetchSeoContent\(\)\s(?<Exception>.+)"
| rex "Error:(?<Error2>.+)"
| rex "(?<ErrorM>Error in template script)+"
| rex "(?ms)^(?:[^\\|\\n]*\\|){3}(?P<Component>[^\\|]+)"
| rex "service=(?<Service>[A-Za-z._]+)"
| rex "Sites-(?<Country>[A-Z]{2})"
| eval Error_Exception= coalesce(Message,Error2,Exception,ErrorM)
| eval Week=case(_time<relative_time(now(),"-14d@d"),"Current_release_error",_time>relative_time(now(),"-28d@d-14d@d"),"Last_release_error")
| stats dc(Week) AS Week_count values(Week) AS Week BY Error_Exception
| eval Week=if(Week_count=2,"Present in Previous Release",Week)
| where Week_count=1
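Two things worth checking in the search above: with earliest=-14d the search never sees last release's events at all, and the first case() condition labels events *older* than 14 days as the current release. A sketch of the intended window logic, assuming a 28-day search range and keeping only one of the rex extractions for brevity:

```
index="ABC" source="/abc.log" ("ERROR" OR "EXCEPTION") earliest=-28d@d latest=now()
| rex "Error\s(?<Message>.+)MulesoftAdyenNotification"
| eval Error_Exception=coalesce(Message)
| eval Week=case(_time>=relative_time(now(),"-14d@d"),"Current_release_error",
                 _time>=relative_time(now(),"-28d@d"),"Last_release_error")
| stats dc(Week) AS Week_count values(Week) AS Week BY Error_Exception
| where Week_count=1 AND Week="Current_release_error"
```

The final where keeps only errors seen exclusively in the current release, which sounds like the call-out you want.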
Hello everyone, I have the below search:

index=flexcube
    [| inputlookup AUTHs.csv | fields + role_id ]
    [| inputlookup function_ids.csv | rename C_FUNCTION_ID as role_function | fields + role_function]
| rename role_function as function_id
| chart latest(control_1) as NEW, latest(control_8) as AUTH over function_id by role_id limit=0

and it returns the following table:

function_id              AUTH: A      AUTH: B       AUTH: C      AUTH: D
ACDCBIRD                                         0                     1
CADAMBLK               1                                                                     0
CLDACAUT                1                      0                      0                    0
CLDACCNT                0                        1                  1                        1
...etc.

I want to create an alert that catches only when a value changes from blank to 0 or 1, or vice versa. Thanks in advance.
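One common pattern for "alert only on change" is to snapshot the table to a lookup on each run and diff against the previous snapshot. A rough sketch only; the lookup name is made up, and the search assumes a prior run already wrote auth_snapshot.csv (e.g. with `| outputlookup auth_snapshot.csv` appended to the chart search):

```
index=flexcube ...
| chart latest(control_1) as NEW, latest(control_8) as AUTH over function_id by role_id limit=0
| fillnull value="blank"
| join type=outer function_id
    [| inputlookup auth_snapshot.csv | rename "AUTH: *" AS "prev_AUTH: *"]
| foreach "AUTH: *" [eval changed=coalesce(changed,0) + if('<<FIELD>>' != 'prev_<<FIELD>>', 1, 0)]
| where changed > 0
```

fillnull turns blanks into a comparable value, and the foreach compares each AUTH column against its prev_ counterpart, so a blank→0/1 flip (or the reverse) survives the where.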
Hi, after I ticked "Enable Indexer acknowledgement" in "HTTP Event Collection" -> "Auto Generated ITSI Event Management Token", I no longer have notable events generated, and I see "Data channel is missing" errors in the _internal index. After some research, I understood from https://docs.splunk.com/Documentation/Splunk/8.2.7/Data/AboutHECIDXAck that the HEC sender must include a channel identifier. But how do I configure ITSI so that it includes a channel identifier when generating notable events? Thank you very much.
Hi, the log format is JSON and I have a field named Organization. When Organization = "Systèmes", this has the following consequences:

When doing a search with Organization = "Systèmes" (and doing e.g. a table output), I get no results.
When doing a search with Organization = Syst* (and doing e.g. a table output), I get results.

I am wondering why Splunk would not recognize this è in the search. I read different topics where CHARSET in the props.conf file was suggested, but should Splunk not recognize this è by default? And what would be the solution to get this recognized by Splunk by default? Thanks in advance! Edwin
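If the data is arriving in an encoding Splunk is not expecting (say Latin-1 while UTF-8 is assumed), the usual fix is indeed CHARSET in props.conf on the instance that parses the data; a sketch, where the sourcetype name is a placeholder:

```
[your_json_sourcetype]
CHARSET = UTF-8
```

CHARSET = AUTO is another option that lets Splunk guess per source. Note that this only affects data indexed after the change; events already indexed with a mangled è will stay that way.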
Hi everyone, I have the below query, by which I am extracting the manager name, email, etc. by applying a join on managerno against all employee records, since a manager is also an employee. This query takes 120 seconds to execute, but I need to keep it under 20 seconds. Any help will be appreciated.

index="myid_transac_idx" sourcetype="myID_Identity" earliest=-1d@d latest=now()
| fields employeeno display_name loginid email status managerno Termination_process_date
| where status="Terminated" and Termination_process_date > "2022-10-01 00:00:00.00"
| join type=LEFT managerno
    [ search index="myid_transac_idx" sourcetype="myID_Identity" earliest=-1d@d latest=now()
    | fields employeeno display_name loginid email status
    | rename employeeno as managerno
    | rename display_name as manager_name
    | rename loginid as managerloginid
    | rename email as manageremail
    | rename status as managerstatus ]
| fields employeeno display_name loginid email status managerno manager_name managerloginid manageremail managerstatus
| table employeeno display_name loginid email status managerno manager_name managerloginid manageremail managerstatus
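One join-free pattern that often cuts this kind of runtime: materialize the manager directory once (e.g. in a scheduled search) and enrich with lookup instead of a subsearch join. A sketch; the lookup name is made up:

```
index="myid_transac_idx" sourcetype="myID_Identity" earliest=-1d@d latest=now()
| fields employeeno display_name loginid email status
| rename employeeno AS managerno, display_name AS manager_name, loginid AS managerloginid, email AS manageremail, status AS managerstatus
| outputlookup manager_directory.csv
```

and then in the main search:

```
| lookup manager_directory.csv managerno OUTPUT manager_name managerloginid manageremail managerstatus
```

This avoids re-running the second index scan on every execution and sidesteps join's subsearch result limits.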
I am trying to create a search which looks for an EventCode 4624 followed by another EventCode 4625 from the same user; if someone could assist, that would be fantastic. I have been reading into multisearch, join, etc., and attempted transaction, but it appears to be slow.

index=dirsvcs_seceventlogs source="wineventlog:security" EventCode=4625
    [ search source="wineventlog:security" EventCode=4624 | table cs_username EventCode ]
| stats count, distinct_count(cs_username), values(cs_username) by EventCode
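A subsearch keyed on EventCode alone cannot enforce "same user, 4624 then 4625" ordering. A streamstats-based sketch that checks the immediately preceding event per user:

```
index=dirsvcs_seceventlogs source="wineventlog:security" (EventCode=4624 OR EventCode=4625)
| sort 0 cs_username _time
| streamstats current=f window=1 last(EventCode) AS prev_code BY cs_username
| where EventCode=4625 AND prev_code=4624
| stats count BY cs_username
```

streamstats is generally much cheaper than transaction here; if you also need a maximum gap between the two events, adding last(_time) AS prev_time to the streamstats and filtering on _time - prev_time would do it.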
Team, we actually want to install Amazon Kinesis Firehose for Enterprise Security, but after reading the Splunkbase page we upgraded the AWS add-on to 6.2.0. Now, what inputs need to be configured in the AWS add-on to fetch logs? Thanks in advance!
I want to count the number of events and the median length of events per sourcetype in Splunk. I'm trying to figure out the average/median size of events for each sourcetype; by size, I mean the character length of the raw events. I then want to multiply the count of events by the median size to get an idea of which sourcetypes contain big events, so that I can use the data for event size reduction if that is possible.
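A sketch of one way to do this with len(_raw), over a placeholder time range:

```
index=* earliest=-24h
| eval event_len=len(_raw)
| stats count AS event_count median(event_len) AS median_len BY sourcetype
| eval est_total_chars=event_count*median_len
| sort - est_total_chars
```

Character length is a reasonable proxy for size; if you want actual indexed volume instead, the license usage data in _internal (per sourcetype) is the more authoritative source.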
I have a query like this:

| dbxquery connection=xxxxx query="select xxx FROM xxx WHERE xxx and to_char(LOG_DATE_TIME,'YYYY-MM-DD')='2022-10-13'"
| iplocation SRC_IP
| stats values(LOG_DATE_TIME) as TIME dc(City) as countCity list(City) as city values(SRC_IP) as sourceIp by CIF USER_CD
| eval time = strptime(TIME,"%Y-%m-%d %H:%M:%S.%3N")
| eval differenceMinutes=(max(time)-min(time))/60
| fields - time
| search countCity>1 AND differenceHours>1

The query displays the result like this: (screenshot)

I want it to have a result like this. Example: Jakarta - Bogor (LOG_DATE_TIME is string-type data):

2022-10-13 09:03:33.539 - 2022-10-13 09:00:55.885, which converted to timestamps is
1665626613.539000 - 1665626455.885000 = 158 (in seconds), then 158/60 = 2.633 minutes

Bogor - Jakarta = 9.22 minutes
Jakarta - Bogor = 360 minutes
Bogor - Jakarta = 240 minutes

How should my query be written in order to achieve that result?
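One likely snag in the query above: max(time) and min(time) inside eval operate on a single multivalue field in one row, and the final search filters on differenceHours, which is never computed. Computing the span in stats avoids both issues; a sketch (per-consecutive-pair gaps as in the example would additionally need streamstats over the sorted events):

```
| dbxquery connection=xxxxx query="..."
| iplocation SRC_IP
| eval t=strptime(LOG_DATE_TIME,"%Y-%m-%d %H:%M:%S.%3N")
| stats dc(City) AS countCity list(City) AS city values(SRC_IP) AS sourceIp min(t) AS first_t max(t) AS last_t BY CIF USER_CD
| eval differenceMinutes=round((last_t-first_t)/60,3)
| where countCity>1 AND differenceMinutes>1
```

For the worked example, last_t - first_t = 1665626613.539 - 1665626455.885 ≈ 158 seconds, i.e. 2.633 minutes, matching the expected output.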
Is there a way to configure the account for the add-on TA-jira_issue_input via a configuration file rather than through the UI?
I want to input into Splunk the "events" of my fire alarms from all the branch offices. Is there a way I can manually create an index=firealarm and periodically fill fields I will create, such as:

date: 26 october
branch: 01
alarmid: 125
reason: smoking
etc...

I will add new events every time an alarm is triggered. I know I can do this in Excel, but I want to store these data in Splunk and build dashboards too.
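Besides indexing (e.g. via HTTP Event Collector), a CSV lookup is a simple Splunk-native place for hand-entered records. A sketch that appends one alarm; the lookup name is made up:

```
| makeresults
| eval date="26 october", branch="01", alarmid=125, reason="smoking"
| table date branch alarmid reason
| outputlookup append=true firealarm_events.csv
```

Dashboards can then read the data back with `| inputlookup firealarm_events.csv`. If you want real timestamps and retention like normal events, a dedicated index fed by HEC is the more scalable route.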
Hello, I have the following query:

index=* "There was an error trying to process"
| table _raw

Logs:

2022-10-25 22:10:59.937 ERROR 1 --- [rTaskExecutor-1] c.s.s.service.InboundProcessingFlow : There was an error trying to process PPositivePay121140399F102520220942.20221025094304862.ach from Inbox.
2022-10-25 22:10:57.824 ERROR 1 --- [rTaskExecutor-1] c.s.s.service.InboundProcessingFlow : There was an error trying to process FPositivePay121140399Q102420222215.20221024221617018.ach from Inbox.
2022-10-25 22:10:57.824 ERROR 1 --- [rTaskExecutor-2] c.s.s.service.InboundProcessingFlow : There was an error trying to process FPositivePay121140399W102520220113.20221025011346442.ach from Inbox.
2022-10-25 22:11:53.729 ERROR 1 --- [rTaskExecutor-2] c.s.s.service.InboundProcessingFlow : There was an error trying to process PPositivePay121140399Q102420222215.20221024221617018.ach from Inbox.

I would need to alter the search query so that the output becomes:

Time                 file_name
2022-10-25 15:10:49  PPositivePay121140399F102520220942.20221025094304862.ach
2022-10-25 15:10:59  FPositivePay121140399Q102420222215.20221024221617018.ach
2022-10-25 15:11:09  FPositivePay121140399W102520220113.20221025011346442.ach
2022-10-25 15:11:14  PPositivePay121140399Q102420222215.20221024221617018.ach

Thanks @gcusello
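A rex-based sketch that should pull the file name out of messages in this shape:

```
index=* "There was an error trying to process"
| rex "trying to process\s+(?<file_name>\S+)\s+from Inbox"
| eval Time=strftime(_time,"%Y-%m-%d %H:%M:%S")
| table Time file_name
```

This assumes the file name never contains whitespace, which holds for the .ach names shown.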
I would like a non-admin Splunk Cloud user to be able to configure the inputs of the add-on Splunk Add-on for Java Management Extensions. Currently, he gets a 403 error when he goes to /app/Splunk_TA_jmx/configuration. What capability is required to configure this add-on? Thank you.
Title may be a bit confusing, so here's an example of what I'm trying to achieve. I want to convert a table that looks like this:

_time                user    action
2022-01-01 10:00:00  user_1  login
2022-01-01 10:00:10  user_2  login
2022-01-01 11:30:20  user_1  logout
2022-01-01 11:40:00  user_1  login
2022-01-01 12:00:00  user_1  logout
2022-01-01 12:01:00  user_2  logout

Into this:

user    login_time           logout_time
user_1  2022-01-01 10:00:00  2022-01-01 11:30:20
user_2  2022-01-01 10:00:10  2022-01-01 12:01:00
user_1  2022-01-01 11:40:00  2022-01-01 12:00:00
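A transaction-based sketch that pairs each login with the following logout per user (simple to read, though transaction can get slow on very large datasets):

```
index=your_index
| transaction user startswith=eval(action="login") endswith=eval(action="logout")
| eval login_time=strftime(_time,"%Y-%m-%d %H:%M:%S"), logout_time=strftime(_time+duration,"%Y-%m-%d %H:%M:%S")
| table user login_time logout_time
```

The index name is a placeholder. In transaction output, _time is the earliest event in the pair and duration is the span, so _time+duration recovers the logout time.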
I'm trying to redact the Description field from the Service WinHostMon input, so that events like this:

Before:

Type=Service Name="LoremIpsum" DisplayName="Lipsum service" Description="Bla bla bla bla bla." Path="C:\path\to\software.exe" ServiceType="Unknown" StartMode="Manual" Started=false State="Stopped" Status="OK" ProcessId=123

become this:

After:

Type=Service Name="LoremIpsum" DisplayName="Lipsum service" Description="redacted" Path="C:\path\to\software.exe" ServiceType="Unknown" StartMode="Manual" Started=false State="Stopped" Status="OK" ProcessId=123

I have a Windows host running a Splunk UF, which sends the data to a Splunk HF, which then sends it to Splunk Cloud. On the Splunk HF I already tried two approaches; both failed.

Approach 1: Splunk HF > system/local/props.conf

[source::service]
SEDCMD-redact=s/\/Description=.+\n/\/Description="redacted"\n/g

Approach 2: Splunk HF > system/local/props.conf

[source::service]
TRANSFORMS-my_transf = remove-desc

Splunk HF > system/local/transforms.conf

[remove-desc]
REGEX = (?mi)((?:.|\n)+Description=).+(\n(?:.|\n)+)
FORMAT = $1"redacted"$2
DEST_KEY = _raw

So, how can I redact the Description field?
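For what it's worth, one suspicion: the stray \/ before Description in the SEDCMD means the pattern literally requires a slash and so never matches. A sketch of a simpler substitution that keys off the quotes around the value, assuming the [source::service] stanza actually matches these events' source (worth verifying with something like `| stats count by source`):

```
[source::service]
SEDCMD-redact = s/Description="[^"]*"/Description="redacted"/
```

Also worth checking: SEDCMD and index-time TRANSFORMS only apply where parsing happens, so if this sourcetype were ever parsed before reaching the HF, the stanza there would be a no-op.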
I am having a brain fart trying to figure out how to find the total bytes per application and the percentage of each app out of the total bytes. For example:

app   bytes in GB   percentage
SSL   300GB         23%
DNS   100GB         13%
etc   etc           etc

Current search is this:

index=foo
| eventstats sum(bytes) as total_bytes
| stats sum(bytes) as total first(total_bytes) as total_bytes by app
| eval CompliancePct=round(total/total_bytes,2)

Any help would be appreciated.
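A sketch that computes both columns; note the original divides per-app bytes by the grand total without multiplying by 100, so CompliancePct comes out as a fraction rather than a percent:

```
index=foo
| stats sum(bytes) AS bytes BY app
| eventstats sum(bytes) AS total_bytes
| eval GB=round(bytes/1024/1024/1024,2)
| eval percentage=round(100*bytes/total_bytes,0)."%"
| sort - bytes
| table app GB percentage
```

Doing stats before eventstats also means the grand total is summed over a handful of per-app rows instead of every raw event, which is cheaper.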