All Posts

Generally, you should avoid using SHOULD_LINEMERGE=true whenever you can. In your case it seems like something like this (along with SHOULD_LINEMERGE=false) should work:

LINE_BREAKER = ^REMARK[^\r\n]+([\r\n]+)@ID
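For reference, a minimal props.conf sketch of how those two settings could sit together (the sourcetype name my_protocol_log is a placeholder, not from the original post):

[my_protocol_log]
# Break the stream after each REMARK... line so that the next event starts at @ID
LINE_BREAKER = ^REMARK[^\r\n]+([\r\n]+)@ID
SHOULD_LINEMERGE = false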
Regardless of how you actually render it in your dashboard, if you have a dynamically created set of fields, you can use the foreach command. Like this (a run-anywhere example):

| makeresults
| eval Agent1=0, Agent2=1
| foreach "Agent*" [ eval <<FIELD>>=if(<<FIELD>>==1,"✓","x") ]

The downside of the foreach command is that it's tricky with spaces within field names.
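If the columns do end up with spaces in their names (like "Agent 1" in the question), one workaround worth trying is to wrap the field token in single quotes inside the eval - a sketch only, the exact quoting is worth testing against your real field names:

| makeresults
| eval 'Agent 1'=0, 'Agent 2'=1
| foreach "Agent *" [ eval '<<FIELD>>'=if('<<FIELD>>'==1,"✓","x") ]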
While the blacklist format might not be compatible with the XML event format, that should not cause a decrease in the number of events - quite the contrary. I'd first check whether your overall number of events (not just bursts) did indeed decrease. In other words - are you actually losing events, or are they by any chance getting "choked" but finally getting through in shorter, higher-throughput bursts?
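One quick sanity check is to compare hourly event counts before and after the change - a minimal sketch, assuming the events land in an index called wineventlog (substitute your own index):

| tstats count where index=wineventlog by _time span=1h

If the hourly totals stay roughly the same and only the shape of the traffic changes, the events are being delayed rather than dropped.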
It was circa 7.3 the last time I wrote one, but at the time, I don't think Splunk would run them from an app dir. I resorted to strace to confirm. I had a run-at-startup scripted input that would sync the alert action scripts to $SPLUNK_HOME/bin/scripts. Would I have this much fun without the workarounds?
No, wait. The timechart command works with time automatically. You don't add "by DateOnly", because then it will treat your DateOnly field as a categorizing field.

| timechart span=1h count by DateOnly

This will count how many events there are _for each value of the DateOnly field_ per span (in your case - per hour). See this run-anywhere example:

| makeresults count=2
| streamstats count
| eval _time=_time-count*7200
| fields - count

This will give you two timestamps - one two hours ago and one four hours ago. If you simply do

| timechart span=1h count

you'll get a decent result showing you that for each of those hours you got one event. Which is OK. But if you do something akin to what you did before, your whole example would look like this:

| makeresults count=2
| streamstats count
| eval _time=_time-count*7200
| fields - count
| eval DateHour=strftime(_time,"%H")
| timechart span=1h count by DateHour

and your results turn into this (I ran this at 13:58):

_time               09  11
2023-12-05 09:00    1   0
2023-12-05 11:00    0   1

Because within the 9-10 hour you only have the DateHour value "09" and no occurrences of the value "11" (hence the corresponding counts), and within the 11-12 hour you have 0 and 1. So if you want to have your timechart with the time formatted properly, you don't add the "by" part. You simply do

| timechart span=1h count

and only _then_ format your time the way you want to display it. For example:

| fieldformat _time=strftime(_time,"%H")
@richgalloway @inventsekar, could you please help me with extracting the fields from the events below?

2023-12-05 07:57:02,995 [CID:] [C:] [TID:PriorityScheduler Elastic Thread @ Normal] VERBOSE Thycotic.Discovery.Sources.Scanners.PowershellDiscoveryScanner - Value: xxx.com - (null)
2023-12-05 07:57:02,991 [CID:] [C:] [TID:PriorityScheduler Elastic Thread @ Normal] VERBOSE Thycotic.Discovery.Sources.Scanners.PowershellDiscoveryScanner - Name: xxx - (null)
2023-12-05 07:57:02,986 [CID:] [C:] [TID:PriorityScheduler Elastic Thread @ Normal] VERBOSE
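A search-time extraction along these lines might be a starting point - a sketch only; the field names (log_time, cid, component, tid, level, class, message) are chosen here for illustration, and the regex assumes the full line shape shown in the first two events:

| rex "^(?<log_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) \[CID:(?<cid>[^\]]*)\] \[C:(?<component>[^\]]*)\] \[TID:(?<tid>[^\]]*)\] (?<level>\w+)\s+(?<class>\S+) - (?<message>.+)"
| table log_time cid component tid level class message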
OK. I think I see where this is going. You have your data as a JSON structure and want to search it by calling the fields by name in the base search, and it doesn't work. But it will parse your fields if you search for your events another way (for example, just by searching for the content, regardless of where in the event it is) and then push them through the spath command. Am I right? In other words - your events are not automatically interpreted as JSON structures.

There are three separate levels on which Splunk can handle JSON data.

1. On ingest - it can treat the JSON with INDEXED_EXTRACTIONS and parse your data into indexed fields. You generally don't want that, as indexed fields are not really what Splunk is typically about.

2. Manual invocation of the spath command - that can be useful if your JSON data is only a part of your whole event (for example - a JSON structure forwarded as a syslog message and prepended with a syslog header; in such a case you'd want to extract the part after the syslog header and manually call the spath command to extract fields from that part). See the run-anywhere sketch below.

3. Automatic search-time extraction - it's triggered by proper configuration of your sourcetype. By default, unless explicitly disabled by setting AUTO_KV_JSON to false, Splunk will extract your JSON fields when (and only when) the whole _raw event is a well-formed JSON structure. JSON extraction can also be triggered explicitly (still, only when the whole event is well-formed JSON) by properly configuring KV_MODE in your sourcetype.

Mind you that neither the 1st nor the 3rd option will extract data if you have - for example - a JSON structure as a string field within another JSON structure; in such a case you have to manually use spath to extract the JSON data from that string.

So - as you can see - JSON is a bit tricky to work with.

PS: There is an open idea about extracting only part of the event as a JSON structure - feel free to support it: https://ideas.splunk.com/ideas/EID-I-208
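To illustrate the second option, a minimal run-anywhere sketch (the syslog-style prefix and the field names are made up for the example):

| makeresults
| eval _raw="Dec  5 13:04:34 myhost myapp: {\"user\":\"alice\",\"action\":\"login\"}"
| rex "(?<json_part>\{.*\})"
| spath input=json_part
| table user action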
Hello all, I am going to upgrade Splunk to version 9.1.x. Inside my app I use a lot of JS scripts. When I'm performing the jQuery scan, I get the error messages below:

This /opt/splunk/etc/apps/biz_svc_insights/appserver/static/jQueryAssets/ExtHCJS.js is importing the following dependencies which are not supported or externally documented by Splunk. highcharts

This /opt/splunk/etc/apps/biz_svc_insights/appserver/static/node_modules/requirejs/bin/r.js is importing the following dependencies which are not supported or externally documented by Splunk. requirejs logger

Can anyone please help me with this error? Any hints are appreciated.

Kind regards, Rajkumar Reddi
Hello, I am creating a dashboard (Simple XML) with a table panel as shown below. This is actually a dashboard for a telephony system, and the number of columns (and their names, of course) will change based on which agents are logged in at a time. For example:

at 9 AM: Queue, Agent 1, Agent 4, Agent 9
at 3 PM: Queue, Agent 1, Agent 4, Agent 5, Agent 11
at 1 AM: Queue, Agent 5, Agent 9, Agent 11

Now, in this table panel, I want to replace 1 with a green tick and 0 with a red cross in all the columns. Can you please suggest how this can be achieved? I have tried this using eval and replace, but as the columns are dynamic, I am unable to handle this. Thank you.

Edit: Sample JSON event:

{ AAAA_PMC_DT: 05-Dec-2023 13:04:34 Agent: Agent 1 Block: RTAgentsLoggedIn Bound: in Queue(s):: Queue 1, Queue 3, Queue 4, Queue 5, Queue 7, Queue 10 }

SPL:

index="telephony_test" Bound=in Block=RTAgentsLoggedIn _index_earliest=-5m@m _index_latest=@s
| spath "Agent"
| spath "Queue(s):"
| spath "On pause"
| spath AAAA_PMC_DT
| fields "Agent" "Queue(s):" "On pause" AAAA_PMC_DT
| rename "Queue(s):" as Queue, "On pause" as OnPause, AAAA_PMC_DT as LastDataFetch
| eval _time=strptime(LastDataFetch,"%d-%b-%Y %H:%M:%S")
| where _time>=relative_time(now(),"-300s@s")
| where NOT LIKE(Queue,"%Outbound%")
| sort 0 -_time Agent
| dedup Agent
| eval Queue=split(Queue,", ")
| table Agent Queue
| mvexpand Queue
| chart limit=0 count by Queue Agent
LIST F.PROTOCOL @ID PROTOCOL.ID PROCESS.DATE TIME.MSECS K.USER APPLICATION LEVEL.FUNCTION ID REMARK PAGE 1 11:34:02 23 NOV 2023 @ID............ 202309260081340532.21 @ID............ 202309260081340532.21 PROTOCOL.ID.... 202309260081340532.21 PROCESS.DATE... 20230926 TIME.MSECS..... 11:15:32:934 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ 202309260081340523.16 @ID............ 202309260081340523.16 PROTOCOL.ID.... 202309260081340523.16 PROCESS.DATE... 20230926 TIME.MSECS..... 11:15:23:649 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ 202309260081340465.12 @ID............ 202309260081340465.12 PROTOCOL.ID.... 202309260081340465.12 PROCESS.DATE... 20230926 TIME.MSECS..... 11:14:25:781 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ AUTHORISER-8232 @ID............ AUTHORISER-8232 PROTOCOL.ID.... AUTHORISER-8232 PROCESS.DATE... 20230926 TIME.MSECS..... 09:08:19:962 K.USER......... AUTHORISER APPLICATION.... PGM.BREAK LEVEL.FUNCTION. 1 ID............. LIST F.PROTOCOL @ID PROTOCOL.ID PROCESS.DATE TIME.MSECS K.USER APPLICATION LEVEL.FUNCTION ID REMARK PAGE 2 11:34:02 23 NOV 2023 REMARK......... @ID............ 202309260081340530.06 @ID............ 202309260081340530.06 PROTOCOL.ID.... 202309260081340530.06 PROCESS.DATE... 20230926 TIME.MSECS..... 11:15:30:223 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ 202309269535047401.01 @ID............ 202309269535047401.01 PROTOCOL.ID.... 202309269535047401.01 PROCESS.DATE... 20230926 TIME.MSECS..... 13:10:01:201 K.USER......... INPUTTER APPLICATION.... DRAWINGS LEVEL.FUNCTION. 1 I ID............. REMARK......... @ID............ 202309260081340469.10 @ID............ 202309260081340469.10 PROTOCOL.ID.... 202309260081340469.10 PROCESS.DATE... 20230926 TIME.MSECS..... 11:14:29:654 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ 202309260081340490.06 @ID............ 202309260081340490.06 PROTOCOL.ID.... 202309260081340490.06 PROCESS.DATE... 20230926 TIME.MSECS..... 11:14:50:299 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LIST F.PROTOCOL @ID PROTOCOL.ID PROCESS.DATE TIME.MSECS K.USER APPLICATION LEVEL.FUNCTION ID REMARK PAGE 3 11:34:02 23 NOV 2023 LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ 202309260081340509.05 @ID............ 202309260081340509.05 PROTOCOL.ID.... 202309260081340509.05 PROCESS.DATE... 20230926 TIME.MSECS..... 11:15:09:201 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ 202309260081340529.00 @ID............ 202309260081340529.00 PROTOCOL.ID.... 202309260081340529.00 PROCESS.DATE... 20230926 TIME.MSECS..... 11:15:29:015 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ 202310033834745376.01 @ID............ 202310033834745376.01 PROTOCOL.ID.... 202310033834745376.01 PROCESS.DATE... 20230926 TIME.MSECS..... 12:36:16:380 K.USER......... 
ASHWIN.KUMAR APPLICATION.... CATEGORY LEVEL.FUNCTION. 1 S ID............. REMARK......... @ID............ 202309260081340496.06 @ID............ 202309260081340496.06 PROTOCOL.ID.... 202309260081340496.06 PROCESS.DATE... 20230926 TIME.MSECS..... 11:14:56:370 LIST F.PROTOCOL @ID PROTOCOL.ID PROCESS.DATE TIME.MSECS K.USER APPLICATION LEVEL.FUNCTION ID REMARK PAGE 4 11:34:02 23 NOV 2023 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ 202310031395145227.00 @ID............ 202310031395145227.00 PROTOCOL.ID.... 202310031395145227.00 PROCESS.DATE... 20230926 TIME.MSECS..... 12:33:47:173 K.USER......... ASHWIN.KUMAR APPLICATION.... SIGN.ON LEVEL.FUNCTION. ID............. REMARK......... @ID............ TEST1-70226 @ID............ TEST1-70226 PROTOCOL.ID.... TEST1-70226 PROCESS.DATE... 20230926 TIME.MSECS..... 12:52:55:808 K.USER......... TEST1 APPLICATION.... PGM.BREAK LEVEL.FUNCTION. 1 ID............. REMARK......... @ID............ 202309264115451975.00 @ID............ 202309264115451975.00 PROTOCOL.ID.... 202309264115451975.00 PROCESS.DATE... 20230926 TIME.MSECS..... 14:26:15:315 K.USER......... INPUTTER APPLICATION.... ENQUIRY.SELECT LEVEL.FUNCTION. 1 ID............. TRADE.POS.VALUATION_BH0010001_INPUTTER REMARK......... 1
You're trying to cut corners here. Depending on your definition of "year", the issue is much more complicated than you think. If you take a year defined as 31557600 seconds (365.25 days), it will give you a "weird" result - a year after midnight Jan 1st 2023 will be 6:00 AM Jan 1st 2024. If you mean a year as 365 days, you'll get Jan 1st 2024 + 1 year being Dec 31st 2024. Again - not what some people would expect. If you mean a year as in "the year number in the date changed", it gets even more complicated. Date manipulation is always a huge pain in the lower part of your back. That's probably why the "duration" formatting only goes up to days, because that's pretty straightforward and unambiguous. Of course going from "366+00:30:00.00000" to "366 days and 30 minutes" is relatively easy to do - just throw in some regexes and replace one part of the text with another. But extracting that year part... that's also gonna be easy as soon as you know what you want. Which - as I wrote earlier - might not be that easy. And you have to account for edge cases (leap years; let's assume for now that we don't have leap seconds :D)
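If "days plus the remainder" is good enough, one way to get there without regexes is to split the seconds yourself - a minimal sketch, with the timestamps and field names made up for the example:

| makeresults
| eval start=strptime("2023-01-01 00:00:00","%F %T"), end=strptime("2024-01-02 00:30:00","%F %T")
| eval diff=end-start
| eval days=floor(diff/86400), remainder=diff-days*86400
| eval duration_text=days." days and ".tostring(remainder,"duration")
| table duration_text

This yields "366 days and 00:30:00"; turning the 366 into "1 year and 1 day" is exactly the ambiguous part discussed above.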
Hello, I'm ingesting a .txt file into Splunk; however, while ingesting the file, my events are breaking into single lines - not all events, but many of them. Attaching the log file in the comments. Below is how my data appears in Splunk when I add this txt file. Is there any way I can set the starting and ending point of my event? I want my data to start from @ID and end at REMARK.

And if I use the regex "(@ID[\s\S]*?REMARK[\s\S]*?)(?=@ID|$)" while adding the data, many of my logs go missing (attaching a snapshot of that as well). Not sure how to resolve this issue - does anyone know how I can ingest this .txt file so that my events run from @ID to REMARK?
Ahh, right. Alert actions, not a scripted input. But still, the docs say it's OK to place them in an app (and that makes sense - you push alert actions, for example for ES, with the deployer onto your SHC):

alert.execute.cmd = <string>
* For custom alert actions, explicitly specifies the command to run when the alert action is triggered. This refers to a binary or script in the 'bin' folder of the app that the alert action is defined in, or to a path pointer file, also located in the 'bin' folder.
* If a path pointer file (*.path) is specified, the contents of the file is read and the result is used as the command to run. Environment variables in the path pointer file are substituted.
* If a python (*.py) script is specified, it is prefixed with the bundled python interpreter.
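So in practice something like this in the app's alert_actions.conf should be enough - a sketch, with the action and script names (my_action, my_action.py) made up for illustration; the script itself would sit in the app's bin directory:

[my_action]
is_custom = 1
label = My custom action
alert.execute.cmd = my_action.py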
Create a new token in the change handler for the timepicker, based on an hour offset from the timepicker's earliest value. If you start changing the timepicker itself, that will be seen as a change, which will then add another hour, which will be seen as another change, and so on.
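A Simple XML sketch of that idea - the token names (picker, earliest_plus_1h) are placeholders, and the eval assumes the picker returns a relative modifier such as -24h@h (a custom date range returns epoch seconds instead and would need an extra isnum/tonumber branch):

<input type="time" token="picker">
  <label>Time range</label>
  <change>
    <!-- derived token: the picker's earliest value shifted forward by one hour -->
    <eval token="earliest_plus_1h">relative_time(relative_time(now(), "$picker.earliest$"), "+1h")</eval>
  </change>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
</input>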
| where _time > relative_time(now(),"-4w@w+1d")
(sourcetype=bmw-crm-wh-sl-sfdc-subscribe-pe-int-api ("Received platform event for CUSTOMER")) OR (sourcetype=bmw-pl-customer-int-api ("recipient : *.ESOCRM"))
| stats values(*) by properties.correlationId
What else can I do to get the correlationId values in one table? This query is comparing them and only giving the common results.

(sourcetype=bmw-crm-wh-sl-sfdc-subscribe-pe-int-api ("Received platform event for CUSTOMER"))
| table properties.correlationId
| join left=L right=R type=inner where L.properties.correlationId=R.properties.correlationId
    [search sourcetype=bmw-pl-customer-int-api ("recipient : *.ESOCRM")
    | table properties.correlationId]

And can I use join again in this query?
Try something like this

| makeresults format=csv data="StartTime,EndTime
2023-12-05 05:30:00.0000000,2023-12-05 08:00:00.0000000
2023-12-05 08:00:00.0000000,2023-12-05 09:30:00.0000000
2023-12-05 10:28:00.0000000,2023-12-05 13:30:00.0000000"
| eval row=mvrange(0,4)
| mvexpand row
| eval _time=case(row=0,strptime(StartTime,"%F %T.%6N"),row=1,strptime(StartTime,"%F %T.%6N"),row=2,strptime(EndTime,"%F %T.%6N"),row=3,strptime(EndTime,"%F %T.%6N"))
| eval value=case(row=0,0,row=1,1,row=2,1,row=3,0)
| table _time value

Then use an area chart viz
It works! Thanks a lot.
Your join has already created a single table. However, you might want to consider including both sourcetypes and filters in the same initial search, then collate the events with a stats command.
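For example, something along these lines (a sketch built from the sourcetypes and filters already shown above; the values/dc field names are just illustrative):

(sourcetype=bmw-crm-wh-sl-sfdc-subscribe-pe-int-api "Received platform event for CUSTOMER") OR (sourcetype=bmw-pl-customer-int-api "recipient : *.ESOCRM")
| stats values(sourcetype) as sourcetypes dc(sourcetype) as sourcetype_count by properties.correlationId

This keeps every correlationId, and sourcetype_count tells you whether it was seen in one source or both - the part an inner join filters away.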