
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi Team, I am looking for help with an Event Logs report that fires when a threshold is matched. I tried both ways, creating a report and an alert, but it either sends me the raw logs using the | table _time, _raw method or sends a count using | stats count | where count > 0. I need to schedule a report over the last 24 hours of data that runs at 00:00 AM and sends results only if there is an event. Please guide me. Thank you
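A hedged sketch of one common approach, assuming the goal is a daily 00:00 alert that attaches the matching events only when at least one exists (the index and sourcetype names here are placeholders, not from the post):

index=wineventlog sourcetype=XmlWinEventLog earliest=-24h@h latest=@h
| table _time, host, EventCode, _raw

Save this as an alert with a cron schedule of 0 0 * * *, set the trigger condition to "Number of Results is greater than 0", and enable the "Send email" action with "Include results" checked; the email then goes out only on days where the search returned events.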
Current Splunk log: user=a,ip=b,info={'gender':1,'Country':2},p=1
Target Splunk table: user=a,ip=b,gender=1,Country=2,p=1
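A hedged sketch of one way to flatten the info field, assuming user, ip, info, and p are already extracted as fields: replace the single quotes so the value parses as JSON, then let spath pull out the nested keys.

... | eval info=replace(info, "'", "\"")
| spath input=info
| table user, ip, gender, Country, p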
Hello everyone, and thanks in advance. I'm trying to build a search for file deletion, but it isn't working. Do you have any example of a use case? I tested using Sysmon, but when I delete a file I can't see Event ID 23.
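One thing worth checking: Sysmon only writes Event ID 23 (FileDelete) if the Sysmon configuration explicitly enables it with a FileDelete rule; with the default configuration, deleting a file produces no event at all. Once such a rule is in place, a minimal search sketch (the index and sourcetype names are assumptions, adjust to your environment):

index=main sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=23
| table _time, host, User, Image, TargetFilename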
Hi, I need to create an index called "assets" from a JSON data file that I have. However, when I try to create such an index and navigate to the given data file, I receive an error (screenshot attached in the original post). The index in question does not currently exist on my Splunk instance, and I am trying to create a new index and populate it with this data. Can you please help? Thanks.
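Without the error text it is hard to be specific, but one common cause is that the index must exist before data can be sent to it. A minimal sketch of creating it by hand in $SPLUNK_HOME/etc/system/local/indexes.conf (paths shown are the defaults; a restart is required afterwards), after which the JSON file can be uploaded via Settings > Add Data with "assets" chosen as the destination index:

[assets]
homePath = $SPLUNK_DB/assets/db
coldPath = $SPLUNK_DB/assets/colddb
thawedPath = $SPLUNK_DB/assets/thaweddb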
Hi all, We have an application which produces logfiles into which other logfiles are inserted (they are pulled from stdout when the other program is executed). We are only interested in the stdout that is generated by SQL statements of another program, which are multiline entries themselves in a specific format. So basically an SQL event starts with a date and ends at the date of the next SQL event.

We have a regex which captures all the SQL lines we are interested in, but we cannot see a way to ignore the rest of the logfile, since all routing to nullQueue or SEDCMD takes place after timestamp recognition and event breaking, and those other entries either mess up the event breaking or get attached to the SQL events if we specify a time configuration which only matches the SQL statements.

Basically, all lines not matching ^(\d+|\t+|\s\s+|CREATE|SELECT|DROP|UPDATE|INSERT|FROM|TBLPROPERTIES|\)).* need to be excluded before any timestamp recognition or event breaking is applied.

To make it clear again: the problem is that all events, including those we want to get rid of, are multiline events with different starts and ends, and the dates for the event types are in different locations and formats, hence the exclusion must occur before merging takes place. Is this possible? Regards
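As far as I know there is no supported hook in the ingestion pipeline that filters raw lines before line breaking, so what follows is a trade-off sketch rather than a clean solution: break on every newline so each line becomes its own event, then route the noise to nullQueue with an inverted match of the regex above. The cost is that the multiline SQL statements also end up as one event per line. The sourcetype name is a placeholder.

props.conf:
[app:sql:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-dropnoise = drop_non_sql

transforms.conf:
[drop_non_sql]
REGEX = ^(?!(?:\d+|\t+|\s\s+|CREATE|SELECT|DROP|UPDATE|INSERT|FROM|TBLPROPERTIES|\)))
DEST_KEY = queue
FORMAT = nullQueue

If the SQL statements must stay merged as multiline events, preprocessing the file outside Splunk (filtering it before the forwarder reads it) is probably the more realistic option.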
OK, I think I know what Splunk Search Runtime is, but I have never thought about what value or insights this feature can give. Today I decided to check my Splunk Cloud health and search usage statistics (just out of curiosity) and noticed that for some searches the "search runtime" is very long, 15 minutes or more, but if I run those searches myself they usually take several seconds. So why do the statistics show them running for 15 minutes or more? Can someone explain? Thanks.
Have these functions been deprecated? If yes, any alternatives?  
Hi all, We are creating episodes, and incidents are getting created in SNOW. The incident number is available in the Activity tab of Episode Review but not in the Impact tab. Could you please help us resolve this issue? Thanks, Nivetha S
I need to extract one field which is not present as a field-value pair, and I have to distinguish the logs based on that particular field. Here is the example log:

{"log":"[10:30:04.075] [INFO ] [] [c.c.n.b.i.DefaultBusinessEventService] [akka://MmsAuCluster/system/sharding/notificationAuthBpmn/4/nmT9K3rySjyoHHzxO9jHnQ_4/nmT9K3rySjyoHHzxO9jHnQ] - method=prepare; triggerName=approvalStart, entity={'id'='0f86c9007ff511ed82ffd13c4d1f79a9a07ff511ed82ffd13c4d173b0a','eventCode'='approval','paymentSystemId'='MMS','servicingAgentBIC'='null','messageIdentification'='0f86ff511ed82ffd13c4d173b0a','businessDomainName'='Mandate','catalogCode'='AN','functionCode'='APAL_INTERACTION'}

The log above is the example. I have already extracted the other fields in the log which appear as field-value pairs, like triggerName, eventCode, and so on. But I need to filter the logs for "c.c.n.b.i.DefaultBusinessEventService" and INFO logs. Can anyone help me out with how to filter logs based on the above information? Thanks in advance
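A hedged sketch: pull the level and the logger class out of the bracketed prefix with rex, then filter on both (index and sourcetype are placeholders):

index=main sourcetype=app_logs
| rex field=_raw "\[(?<log_level>[A-Z]+)\s*\]\s+\[[^\]]*\]\s+\[(?<logger>[^\]]+)\]"
| search log_level=INFO logger="c.c.n.b.i.DefaultBusinessEventService"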
I took over an established Splunk ecosystem when the main support admin retired. I noticed that not all of our standalone Search Heads, and both Deployment Servers, are set up to forward to all 12 of our Indexers in a single multisite Indexer Cluster (see table below). Some Search Heads list in their outputs.conf only the six Indexers assigned to Site 1, while the other Search Heads list in their outputs.conf only the six Indexers assigned to Site 2. However, all Search Heads have this stanza in their server.conf file:

[clustering]
multisite = true

So the question is: should all of our Splunk instances, i.e. Search Heads, Cluster Master, and Deployment Servers, have all 12 Indexers defined in their outputs.conf?

Site 1      Site 2
Indexer01   Indexer07
Indexer02   Indexer08
Indexer03   Indexer09
Indexer04   Indexer10
Indexer05   Indexer11
Indexer06   Indexer12
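For reference, a sketch of the two usual patterns (port 9997, group names, and the cluster master URI are assumptions). Option 1, list every peer explicitly:

[tcpout]
defaultGroup = all_indexers

[tcpout:all_indexers]
server = Indexer01:9997, Indexer02:9997, Indexer03:9997, Indexer04:9997, Indexer05:9997, Indexer06:9997, Indexer07:9997, Indexer08:9997, Indexer09:9997, Indexer10:9997, Indexer11:9997, Indexer12:9997

Option 2, let each instance pull the current peer list from the Cluster Master via indexer discovery, so the list never goes stale when peers change:

[indexer_discovery:cm]
master_uri = https://<cluster-master>:8089
pass4SymmKey = <key>

[tcpout:discovered_indexers]
indexerDiscovery = cm

[tcpout]
defaultGroup = discovered_indexers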
Hi team, I want to compare two results from one index every week and display the differences, and I want to create a Jira ticket if the results are different. Thanks
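A hedged sketch of one way to surface week-over-week differences in a single scheduled search (some_field is a hypothetical field to compare on):

index=my_index earliest=-2w@w latest=@w
| eval week=if(_time >= relative_time(now(), "-1w@w"), "recent_week", "prior_week")
| stats values(week) as weeks_seen dc(week) as week_count by some_field
| where week_count=1

Rows that appear in only one of the two weeks come back as differences; scheduling this weekly with a "number of results > 0" trigger and a Jira alert-action add-on (or a generic webhook to Jira's REST API) would cover the ticket-creation part.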
Recently, our on-prem deployment has been crashing, as our server's memory limit is being reached. After looking into this, I noticed that python2 and python3 are each consuming 10+ GB of RAM. Once Splunk DB Connect is disabled, the memory is released from those two python/python3 processes. Has anyone experienced something like this before and, if so, is there anything I can do to troubleshoot which DB inputs might be causing the problem?
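A hedged starting point for narrowing it down: look at DB Connect's own internal logs for inputs that error or run unusually often around the times memory climbs, then disable everything and re-enable inputs one at a time with DB Connect running.

index=_internal source="*splunk_app_db_connect*" (ERROR OR WARN)
| timechart span=1h count by source

Large fetch sizes or "max rows" settings on a single input are a common culprit, so it may also be worth comparing each input's batch settings once a suspect emerges.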
Hi! My Dashboard Studio datetime looks strange. [Dashboard Studio view:]

name    datetime                        count
tom     2022-12-01T09:00:00:00+09:00    10
jenny   2022-12-01T09:00:00:00+09:00    15

The time comes out weird like the table above; if you look at the detailed view, it looks like the table below. [Dashboard detail view:]

name    datetime      count
tom     2022-12-01    10
jenny   2022-12-01    15

How do I get the datetime to look good in Dashboard Studio? Help me, please.
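A hedged sketch, assuming the datetime column comes from _time in SPL: formatting it as a string before it reaches the table usually makes Dashboard Studio render it as-is instead of applying its own ISO display.

... | eval datetime=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table name, datetime, count

If datetime is already an ISO-formatted string rather than derived from _time, parse it first with strptime and then strftime it back out in the desired layout.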
We are using the Splunk Clockify add-on 2.0.1 on a standalone instance and enabled the "user" input with an API key. After enabling the input we can see only 51 users in the search results; the actual user count is 70.
- I verified the ta_clockify_add_on_for_splunk_ClockifyUsers.log file and don't see any ERROR/WARN/FAILED messages.
- Is there any restriction on fetching all the user data from the Clockify add-on?
- Do we need to add or update any settings in the input stanza?
- The log collection interval I have set is 60 seconds.
Can someone please help me with this?
Sample logs:

quotation-events~~IM~. ABC~CA~Wed Jan 02 23:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events
D0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1
quotation-events~~IS~;S. ABC~CA~Tue Jan 02 23:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events
V0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1
quotation-events~~IM~. ADC~BA~Sat Jan 01 13:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events
B0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1
quotation-events~~IM~. CCC~HA~Sun Jan 01 20:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events
G0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1

Output in Splunk: all events are coming in as a single event and are not coming through completely.

D0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1 IS~;S. ABC~CA~Tue Jan 02 23:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events V0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1|
quotation-events~~IM~. ADC~BA~Sat Jan 01 13:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events B0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1 quotation-events~~IM~. CCC~HA~Sun Jan 01 20:24:56 EST   2023~A~0.12~0...~2345.78~SM~quotation-events quotation-events~~ quotation-events~~ G0C5A044~~AB~DFR~Mon Jan 01 12:52:14 EST   2022~B~107.45~106.90~123.09~T~2345A1

props.conf:

[app:logs:sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n)]+w{8}~~|quotation-events~~
NO_BINARY_CHECK=true
CHARSET=UTF-8
MAX_TIMESTAMP_LOOKAHEAD=75
disabled=false
TIME_FORMAT=%a %b %d %H:%M:%S %Z
TIME_PREFIX=(?:[^~]+~)~(?:[^~]+~){3}
TRUNCATE=99999
ANNOTATE_PUNCT=false
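A hedged read of the config: the LINE_BREAKER regex has a stray ) inside the [\r\n)] character class, w{8} is missing its backslash (\w{8}), and the second alternation branch sits outside the capture group, so when quotation-events~~ matches there is no captured newline to discard and event text gets swallowed instead. A sketch of a corrected stanza that breaks before either an 8-character ID or quotation-events while discarding only the newline; the TIME_PREFIX anchors on the ~ right before the weekday name, since the two record layouts put the date in different positions (TIME_FORMAT kept from the post):

[app:logs:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\w{8}~~|quotation-events~~)
NO_BINARY_CHECK = true
CHARSET = UTF-8
TIME_PREFIX = ~(?=(?:Mon|Tue|Wed|Thu|Fri|Sat|Sun)\s)
TIME_FORMAT = %a %b %d %H:%M:%S %Z
MAX_TIMESTAMP_LOOKAHEAD = 75
TRUNCATE = 99999
ANNOTATE_PUNCT = false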
Need help with a regex field extraction:

field        value
servername   xtestf100s
log_level    INFO, ERROR, or WARNING
message      anything from gofer to the end of the line

Jan 3 03:50:38 xtestf100s goferd: [INFO][worker-0] gofer.messaging.adapter.connect:28 - connecting: proton+amqps://xtest123s.pharma.aventis.com:5647
Jan 3 03:50:38 xtestf100s goferd: [INFO][worker-0] gofer.messaging.adapter.proton.connection:87 - open: URL: amqps://xtest123s.pharma.aventis.com:5647|SSL: ca: /etc/rhsm/ca/katello-default-ca.pem|key: None|certificate: /etc/pki/consumer/bundle.pem|host-validation: None
Jan 3 03:50:38 xtestf100s goferd: [ERROR][worker-0] gofer.messaging.adapter.connect:33 - connect: proton+amqps://xtest123s.pharma.aventis.com:5647, failed: Connection amqps://xtest123s.pharma.aventis.com:5647 disconnected: Condition('proton.pythonio', 'Connection refused to all addresses')
Jan 3 03:50:38 xtestf100s goferd: [INFO][worker-0] gofer.messaging.adapter.connect:35 - retry in 106 seconds
Jan 3 03:50:54 xtestf100s kernel: type=1400 audit(1672714254.276:566412): avc: denied { read } for pid=75981 comm="ip" name="libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
Jan 3 03:50:54 xtestf100s kernel: type=1400 audit(1672714254.276:566413): avc: denied { open } for pid=75981 comm="ip" path="/opt/commvault/Base64/libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
Jan 3 03:50:54 xtestf100s kernel: type=1400 audit(1672714254.276:566414): avc: denied { getattr } for pid=75981 comm="ip" path="/opt/commvault/Base64/libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
Jan 3 03:50:54 xtestf100s kernel: type=1400 audit(1672714254.276:566415): avc: denied { execute } for pid=75981 comm="ip" path="/opt/commvault/Base64/libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
Jan 3 03:51:43 xtestf100s kernel: type=1400 audit(1672714303.392:566416): avc: denied { read } for pid=77988 comm="ip" name="Base" dev="dm-13" ino=116 scontext=system_u:system_r:ifconfig_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=lnk_file permissive=1
Jan 3 03:51:43 xtestf100s kernel: type=1400 audit(1672714303.392:566417): avc: denied { read } for pid=77988 comm="ip" name="libCvDllFilter.so" dev="dm-13" ino=393745 scontext=system_u:system_r:ifconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
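A hedged sketch against the goferd lines above (the kernel audit lines won't match the pattern and simply drop out of the extraction):

... | rex "^\w+\s+\d+\s+[\d:]+\s+(?<servername>\S+)\s+goferd:\s+\[(?<log_level>\w+)\]\[[^\]]*\]\s+(?<message>.+)$"
| table servername, log_level, message

Against the first sample line this yields servername=xtestf100s, log_level=INFO, and message starting at "gofer.messaging.adapter.connect:28".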
Hi, I have a dashboard with a column chart to display time ranges. I have set the drilldown to update the $timePicker.earliest$ and $timePicker.latest$ tokens within the dashboard. This works perfectly and there are no issues here. The issue is that the display of the time picker does not change; it always just says "All time" even though the tokens are set to a one-day range. As I mentioned, the whole dashboard recognizes the one-day range; all I need is for the Time Picker display to also show this range. I have tried setting the default time range using the $timePicker.earliest$ and $timePicker.latest$ values, as you can do with a text input, but I have had no luck. (My attempted code, the column chart drilldown code that does correctly change the timePicker tokens, and before/after views of the dashboard were attached as screenshots.) When the dashboard loads, the time picker shows "All Time"; after a column is clicked, every panel changes because the time picker tokens change, but the time picker display is not updated and still says "All Time". Additionally, when I open the events in a search after selecting a time column, the time picker in the search is displayed correctly; that is how I would expect the time picker in my app/dashboard to display. Please help! Thanks, John
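A hedged sketch, assuming this is a Simple XML dashboard: tokens prefixed with form. drive the input widgets themselves, so setting form.timePicker.earliest/latest in the drilldown should update the picker's display as well as the backing search tokens ($earliest$/$latest$ here stand in for whatever values the click produces):

<drilldown>
  <set token="form.timePicker.earliest">$earliest$</set>
  <set token="form.timePicker.latest">$latest$</set>
</drilldown>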
I'm trying to use the following search to capture information regarding an identification code:

index=calabrio MSG_VOICERECORDING_NOTIFY:SRC_NOTIFY_NO_PACKETS
| rex field=_raw "Filename(?<phoneid>)(?=[A-Z][A-Z][A-Z]).*(?=-)"
| stats count by phoneid

Here is an example of the log entry:

2023-01-04 15:08:09.001175 DEBUG [0xce4] VoiceRecorderUpdateTask.cpp[28] VoiceRecorderUpdateTask::runTask: MSG_VOICERECORDING_NOTIFY:SRC_NOTIFY_NO_PACKETS : Filename(4281-1672873674000-4125-SEP12345678-98962688)

I want to capture the information from the 4th stanza. I'm trying to use lookahead to target the three alpha characters. This works as expected in regex101.com, but Splunk is not producing any results. I've read in several articles that lookahead doesn't work as you would expect it to, but I haven't been able to piece together a search that will work. Maybe I'm going about this the wrong way. Any help is appreciated. Thanks, Mike
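One hedged observation: the capture group (?<phoneid>) is empty, and a lookahead only asserts what follows without consuming it, so nothing ever lands in phoneid. A sketch that grabs the 4th hyphen-separated token inside Filename(...) instead:

index=calabrio MSG_VOICERECORDING_NOTIFY:SRC_NOTIFY_NO_PACKETS
| rex field=_raw "Filename\((?:[^-]+-){3}(?<phoneid>[^-)]+)"
| stats count by phoneid

Against the sample line this captures SEP12345678; if the three leading alpha characters are the real anchor, (?<phoneid>[A-Z]{3}[^-)]*) in the same position works too.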
Search:

| tstats count where index=att_acc_app source=applicationissues.log by PREFIX(client_application_name=) _time span=1d
| rename client_application_name= as client-application-name
| timechart count by client-application-name span=1d

When I am using this query, I am not getting accurate results.
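A hedged guess at the inaccuracy: the tstats output is already aggregated to one row per application per day, so the second timechart count just counts rows (at most 1 per app per day) instead of summing the event counts; hyphens in field names also need quoting in SPL. Summing the existing count field and using an unhyphenated name may help:

| tstats count where index=att_acc_app source=applicationissues.log by PREFIX(client_application_name=) _time span=1d
| rename "client_application_name=" as app
| timechart span=1d sum(count) as count by app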
We have a problem with custom metrics: when we delete a custom metric, it still appears when we go to create a graphic, and in the same way some custom metrics don't appear after we create them. When we send the problem to Cisco, Cisco refreshes the platform to fix it, but is there not a faster way to do this? Waiting until Cisco takes action takes weeks. Thank you