All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Given that this doesn't appear to be wholly correct JSON, you could start with something like this

| rex "DETAILS: (?<details>\[.*\])"
| spath input=details
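Outside of SPL, the same two-step idea (regex-capture the bracketed array, then parse it as JSON) can be sketched in Python. The sample event below is a made-up stand-in for the one in the question:

```python
import json
import re

# Hypothetical event: a log prefix followed by "DETAILS:" and a JSON array.
event = ('INFO [processor: abc/5] : DETAILS: '
         '[ { "ERROR_MESSAGE": "Error: x", "NUMBER": "123r57", '
         '"DB_TIMESTAMP": "2023-11-30" } ]')

# Same idea as the rex above: capture from the first "[" to the last "]".
m = re.search(r"DETAILS: (\[.*\])", event)
details = json.loads(m.group(1))
print(details[0]["NUMBER"])  # prints 123r57
```

This only works if the captured substring really is valid JSON; if it is not, the json.loads step fails in the same way spath silently would.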
Look at my explanation above - your stdout field is not a JSON structure - it's a string containing a JSON structure, so it cannot be automatically parsed as a JSON structure. You have to take the stdout field and manually use spath on this field to parse out the fields from it.
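A minimal SPL sketch of that manual step (assuming the field really is named stdout):

```
| spath input=stdout
```

If the string needs cleanup first (stray quotes, escaping), a rex or replace() pass before spath may be required.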
Again - don't use the "by DateTime" clause. Do a normal timechart and then - if you want to wrap it by day, use the timewrap command.
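A sketch of the suggested combination, assuming an hourly span wrapped by day:

```
| timechart span=1h count
| timewrap 1d
```

timewrap then turns each day into its own series, with the time axis reduced to the hours of a single day.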
I see... Well, it seems like spath (and SPL functionality in general) is working fine with the events, except for the contents of stdout... I spoke with an acquaintance and it looks like it's most likely due to the way the data is parsed before arriving in Splunk. I can't thank you enough for your time and effort helping me! It looks like this has to be checked outside of Splunk, though. I'll close the ticket and come back with updates if I'm able to find a solution.
Hi, The BeyondTrust log fields are not getting extracted. I tried both index time field extraction and search time field extraction to extract the fields. Below are the sample logs.

Dec 5 13:12:06 192.168.251.35 Agent Desc: "",Agent ID: "AppAudit",Agent Ver: "",Category: "PMM API SignAppIn",Source Host: "",Event Desc: "",Event Name: "Login",OS: "",Event Severity: "0",Source IP: "192.168.251.35",Event Subject: "106",Event Type: "0",User: "VPN-OTSA-EDMS-HANU",Workgroup Desc: "",Workgroup ID: "",Workgroup Location: "",AuditID: "2200680",ActionType: "Login",SystemName: "PMM API SignAppIn",AppUserID: "106",CreateDate: "12/5/2023 1:11:34 PM",UserName: "VPN-OTSA-EDMS-HANU",IPAddress: "192.168.251.35",Mapped Credential: "Primary",Mapped Credential Id: "2",Mapped Credential Description: "OFID-PS-Usersync",Mapped Credential Platform: "ActiveDirectory",Mapped Credential Domain/Server: "opecfund.org",Authenticate Credential Id: "2",Authenticate Credential UserName: "opecfund.org\OFID-PS-Usersync@opecfund.org",Authenticate Credential Description: "OFID-PS-Usersync",Authenticate Credential Platform: "ActiveDirectory",Domain Name: "opecfund.org",SAM Account Name: "VPN-OTSA-EDMS-HANU",Group: "opecfund.org\OFID-BTPRAPS-Vendor",Authentication Type: "Active Directory via API"

Dec 5 13:12:06 192.168.251.35 Agent Desc: "",Agent ID: "AppAudit",Agent Ver: "",Category: "PMM API SignAppIn",Source Host: "",Event Desc: "",Event Name: "Login",OS: "",Event Severity: "0",Source IP: "192.168.251.35",Event Subject: "38",Event Type: "0",User: "SA-BTPSGlobalRequestor4SRAAPI",Workgroup Desc: "",Workgroup ID: "",Workgroup Location: "",AuditID: "2200678",ActionType: "Login",SystemName: "PMM API SignAppIn",AppUserID: "38",CreateDate: "12/5/2023 1:11:23 PM",UserName: "SA-BTPSGlobalRequestor4SRAAPI",IPAddress: "192.168.251.35",Authentication Type: "API"

Dec 5 13:12:06 192.168.251.35 Agent Desc: "",Agent ID: "AppAudit",Agent Ver: "",Category: "PMM API Authentication Rule Failure",Source Host: "",Event Desc: "",Event Name: "Login",OS: "",Event Severity: "0",Source IP: "192.168.251.35",Event Subject: "-1",Event Type: "0",User: "opecfund.org\SA-BTPSGlobalRequestor4SRAAPI",Workgroup Desc: "",Workgroup ID: "",Workgroup Location: "",AuditID: "2200677",ActionType: "Login",SystemName: "PMM API Authentication Rule Failure",AppUserID: "-1",CreateDate: "12/5/2023 1:11:23 PM",UserName: "opecfund.org\SA-BTPSGlobalRequestor4SRAAPI",IPAddress: "192.168.251.35",Host Name: "SVR-BTPS01",User Name: "opecfund.org\SA-BTPSGlobalRequestor4SRAAPI",API Key: "****************************************************************************************************************************4416",IP Address: "192.168.251.35",Authentication Rule: "API Key",Message: "Invalid RunAs - UserNameOrPasswordAreIncorrect"

Dec 5 13:11:06 192.168.251.35 Agent Desc: "",Agent ID: "AppAudit",Agent Ver: "",Category: "PMM API SignAppIn",Source Host: "",Event Desc: "",Event Name: "Login",OS: "",Event Severity: "0",Source IP: "192.168.251.35",Event Subject: "38",Event Type: "0",User: "SA-BTPSGlobalRequestor4SRAAPI",Workgroup Desc: "",Workgroup ID: "",Workgroup Location: "",AuditID: "2200675",ActionType: "Login",SystemName: "PMM API SignAppIn",AppUserID: "38",CreateDate: "12/5/2023 1:10:28 PM",UserName: "SA-BTPSGlobalRequestor4SRAAPI",IPAddress: "192.168.251.35",Authentication Type: "API"
Can someone help here? Below are the props and transforms which I tried.

Index time field extraction:

[beyondtrust]
KV_MODE = none
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(\w{3} \d{1,2} \d{2}:\d{2}:\d{2} \d+\.\d+\.\d+\.\d+)
NO_BINARY_CHECK = true
REPORT-keyvaluepairs = keyvalue

[keyvalue]
REGEX = (\w+\s?\w+): "[^"]*"
FORMAT = $1::$2
MV_ADD = true

Search time field extraction:

[beyondtrust]
EXTRACT-AgentDesc = Agent Desc: "(?P<Agent_Desc>[^"]+)"
EXTRACT-AgentID = Agent ID: "(?P<Agent_ID>[^"]+)"
EXTRACT-AgentVer = Agent Ver: "(?P<Agent_Ver>[^"]+)"
EXTRACT-Category = Category: "(?P<Category>[^"]+)"
EXTRACT-SourceHost = Source Host: "(?P<Source_Host>[^"]+)"
EXTRACT-EventDesc = Event Desc: "(?P<Event_Desc>[^"]+)"
EXTRACT-EventName = Event Name: "(?P<Event_Name>[^"]+)"
EXTRACT-OS = OS: "(?P<OS>[^"]+)"
EXTRACT-EventSeverity = Event Severity: "(?P<Event_Severity>\d+)"
EXTRACT-SourceIP = Source IP: "(?P<Source_IP>[^"]+)"
EXTRACT-EventSubject = Event Subject: "(?P<Event_Subject>[^"]+)"
EXTRACT-EventType = Event Type: "(?P<Event_Type>\d+)"
EXTRACT-User = User: "(?P<User>[^"]+)"
EXTRACT-WorkgroupDesc = Workgroup Desc: "(?P<Workgroup_Desc>[^"]+)"
EXTRACT-WorkgroupID = Workgroup ID: "(?P<Workgroup_ID>[^"]+)"
EXTRACT-WorkgroupLocation = Workgroup Location: "(?P<Workgroup_Location>[^"]+)"
EXTRACT-AuditID = AuditID: "(?P<Audit_ID>\d+)"
EXTRACT-ActionType = ActionType: "(?P<Action_Type>[^"]+)"
EXTRACT-SystemName = SystemName: "(?P<System_Name>[^"]+)"
EXTRACT-AppUserID = AppUserID: "(?P<App_User_ID>[^"]+)"
EXTRACT-CreateDate = CreateDate: "(?P<Create_Date>[^"]+)"
EXTRACT-UserName = UserName: "(?P<UserName>[^"]+)"
EXTRACT-IPAddress = IPAddress: "(?P<IPAddress>[^"]+)"
EXTRACT-AuthenticationType = Authentication Type: "(?P<Authentication_Type>[^"]+)"
EXTRACT-HostName = Host Name: "(?P<Host_Name>[^"]+)"
EXTRACT-APIKey = API Key: "(?P<API_Key>[^"]+)"
EXTRACT-IPAddress2 = IP Address: "(?P<IP_Address2>[^"]+)"
EXTRACT-AuthenticationRule = Authentication Rule: "(?P<Authentication_Rule>[^"]+)"
EXTRACT-Message = Message: "(?P<Message>[^"]+)"
Try something like this

| table Etat, "Control-M", "Dynatrace", "ServicePilot", "Centreon"
| fillnull value=0
Hi, I tried your solution but it didn't work. How do I modify the query to get the desired output for the below query?

index="app_cleo_db" origname="GEAC_Payroll*"
| rex "\sorigname=\"GEAC_Payroll\((?<digits>\d+)\)\d{8}_\d{6}\.xml\""
| search origname="*.xml"
| eval Date = strftime(_time, "%Y-%m-%d %H:00:00")
| eval DateOnly = strftime(_time, "%Y-%m-%d")
| transaction DateOnly, origname
| timechart span=1h count by DateOnly

I am getting the below output:

time              2023-12-02  2023-12-03
2023-12-02 00:00  0           0
2023-12-02 01:00  0           0
2023-12-02 02:00  0           0
2023-12-02 03:00  0           0
2023-12-02 04:00  0           0
2023-12-02 05:00  0           0
2023-12-02 06:00  0           0
2023-12-02 07:00  1           0
2023-12-02 08:00  0           0
2023-12-02 09:00  0           0
2023-12-02 10:00  2           0
2023-12-02 11:00  1           0
2023-12-02 12:00  1           0
2023-12-02 13:00  1           0
2023-12-02 14:00  3           0
2023-12-02 15:00  4           0
2023-12-02 16:00  0           0
2023-12-02 17:00  0           0
2023-12-02 18:00  0           0
2023-12-02 19:00  0           0
2023-12-02 20:00  0           0
2023-12-02 21:00  0           0
2023-12-02 22:00  0           0
2023-12-02 23:00  0           0
2023-12-03 00:00  0           0
2023-12-03 01:00  0           0
2023-12-03 02:00  0           0
2023-12-03 03:00  0           0
2023-12-03 04:00  0           1
2023-12-03 05:00  0           3
2023-12-03 06:00  0           202
2023-12-03 07:00  0           52
2023-12-03 08:00  0           141
2023-12-03 09:00  0           188
2023-12-03 10:00  0           256
2023-12-03 11:00  0           185
2023-12-03 12:00  0           121
2023-12-03 13:00  0           52
2023-12-03 14:00  0           32
2023-12-03 15:00  0           9
2023-12-03 16:00  0           0
2023-12-03 17:00  0           0
2023-12-03 18:00  0           0
2023-12-03 19:00  0           0
2023-12-03 20:00  0           0
2023-12-03 21:00  0           0
2023-12-03 22:00  0           0
2023-12-03 23:00  0           0

but I want output like below, where the time column is just the stable hours 00:00 to 23:00:

time   2023-12-02  2023-12-03
00:00  0           0
01:00  0           0
02:00  0           0
03:00  0           0
04:00  0           1
05:00  0           3
06:00  0           202
07:00  1           52
08:00  0           141
09:00  0           188
10:00  2           256
11:00  1           185
12:00  1           121
13:00  1           52
14:00  3           32
15:00  4           9
16:00  0           0
17:00  0           0
18:00  0           0
19:00  0           0
20:00  0           0
21:00  0           0
22:00  0           0
23:00  0           0
Hello, I'm using Splunk 9.1.1. As the title says, I am looking for a way to check the CherryPy version used by Splunk. How can I check it? I look forward to hearing from you.
Hello community, I'm having a problem that's probably easy to solve, but I can't figure it out. I have a query against an index that contains alerts from Splunk OnCall, and I count each alert source (via the associated routingKey from OnCall) and its status (acknowledged or not).

`victorops_incidents`
| sort lastAlertTime desc
| dedup incidentNumber
| fields *
| search org="*" routingKey=** pagedPolicies{}.policy.name!=0_Reroute_alertes currentPhase!=RESOLVED
| eval currentPhase=case(like(currentPhase, "%UNACKED%"), "Non acquitté", like(currentPhase, "%ACKED%"), "En cours")
| eval routingKey=case(like(routingKey, "%routingcontrol-m%"), "Control-M", like(routingKey, "%dyn%"), "Dynatrace", like(routingKey, "%centreon%"), "Centreon", like(routingKey, "%servicepilot%"), "ServicePilot", like(routingKey, "%p_1%"), "P1")
| rename currentPhase as Etat, routingKey as Source
| chart count by Etat, Source
| sort - Etat

I have an almost perfect table which summarizes everything, but I am missing some information: I sometimes have a source which has not generated any alert, so it is absent from the table (in the screenshot below, I have the sources "Control-M", "Dynatrace" and "ServicePilot" but I am missing "Centreon" because the latter did not have any incidents in the time period). My question is the following: how do I make all the sources appear but display 0 when they have not had any alerts? Best regards, Rajaion
There is one tricky thing about this. A HF will work on its own perfectly well with the forwarder license. But if you have more than one HF and you want to add them to be monitored in the MC (some people do want to see their HFs in the MC as a "half-indexer" in order to see the queues and such), you might get alerts about duplicate use of the same license. In such a case, you need to add them to the LM.
Also - you can "stack up" time specifiers. Look at this example:  
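As a hedged illustration only (these particular modifiers are made up, not the original example), "stacking" means chaining an offset, a snap-to unit, and a further offset inside one time specifier:

```
earliest=-7d@d+9h latest=-1d@d+17h
```

Reading earliest left to right: go back seven days, snap to midnight of that day, then move forward nine hours.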
We've been through this exercise several times now.  Show us what you've learned.  What have you tried as a means of extracting fields from those events?  What were the results?  Have you at least identified what fields must be extracted?
That's interesting (and I'm not being sarcastic here) because the docs as far back as 7.0 say the same - "This refers to a binary or script in the bin folder of the app the alert action is defined in, or to a path pointer file, also located in the bin folder."
Hi all, I want to extract fields from an event which is in JSON format:

INFO [processor: anchsdgeiskgcbc/5; event: 1-57d28402-9058-11ee-83b7-021a6f9d1f1c] : DETAILS: [ { "ERROR_MESSAGE": "\nError: abchdvshsuaajs.\n", "NUMBER": "123r57", "DB_TIMESTAMP": "2023-11-30" }, { "ERROR_MESSAGE": "\nError: ehwegagsuabajehss.\n", "NUMBER": "63638w82u", "DB_TIMESTAMP": "2023-11-30" },

and similarly we have more of these error records in the one event. Fields to be extracted: ERROR_MESSAGE, NUMBER, DB_TIMESTAMP.
Hi, I built an app. When I run the app's action in a playbook, I don't have an option to get the data results. I used action_result.add_data(), but it didn't seem to make a difference. How can I solve it?
Generally, you should avoid using SHOULD_LINEMERGE=true whenever you can. In your case it seems like something like this (along with SHOULD_LINEMERGE=false) should work:

LINE_BREAKER = ^REMARK[^\r\n]+([\r\n]+)@ID
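To illustrate the mechanics: Splunk ends the previous event just before the first capture group of LINE_BREAKER, starts the next event right after it, and discards the captured text itself. The following Python sketch simulates that behavior (the sample data is made up to resemble the REMARK/@ID layout implied by the question; it is not real Splunk code):

```python
import re

# Made-up raw stream: REMARK lines sit between @ID records.
raw = ("REMARK batch header\n"
       "@ID 001 first record\n"
       "REMARK batch header\n"
       "@ID 002 second record")

# Lookahead keeps "@ID" unconsumed, mimicking how Splunk starts the next
# event at the text that follows the first capture group.
pattern = re.compile(r"REMARK[^\r\n]+([\r\n]+)(?=@ID)")

events, last = [], 0
for m in pattern.finditer(raw):
    events.append(raw[last:m.start(1)])  # previous event ends before the newline run
    last = m.end(1)                      # next event starts at "@ID"
events.append(raw[last:])
```

Note that with this breaker each REMARK line stays attached to the end of the preceding event; only the newline run captured by the group is consumed as the boundary.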
Regardless of actually rendering it in your dashboard, if you have a dynamically created set of fields, you can use the foreach command. Like this (a run-anywhere example):

| makeresults
| eval Agent1=0,Agent2=1
| foreach "Agent*" [ eval <<FIELD>>=if(<<FIELD>>==1,"✓","x")]

The downside of the foreach command is that it's tricky with spaces within field names.
While the blacklist format might not be compatible with the XML event format, that should not cause a decrease in the number of events - quite the contrary. I'd first check whether your overall number of events (not just bursts) did indeed decrease. In other words - are you indeed losing events, or are they by any chance getting "choked" but finally getting through in shorter but higher-throughput bursts?
It was circa 7.3 the last time I wrote one, but at the time, I don't think Splunk would run them from an app dir. I resorted to strace to confirm. I had a run-at-startup scripted input that would sync the alert action scripts to $SPLUNK_HOME/bin/scripts. Would I have this much fun without the workarounds?
No, wait. The timechart command works with time automatically. You don't add "by DateOnly", because then it will treat your DateOnly field as a categorizing field.

| timechart span=1h count by DateOnly

This will count how many events there are _for each value of the DateOnly field_ per span (in your case - per hour). See this run-anywhere example:

| makeresults count=2
| streamstats count
| eval _time=_time-count*7200
| fields - count

This will give you two timestamps - one two hours ago and one four hours ago. If you simply do

| timechart span=1h count

you'll get a decent result showing you that for each of those hours you got one event. Which is OK. But if you do something akin to what you did before, which means your whole example would look like this:

| makeresults count=2
| streamstats count
| eval _time=_time-count*7200
| fields - count
| eval DateHour=strftime(_time,"%H")
| timechart span=1h count by DateHour

your results will turn into this (I ran this at 13:58):

_time             09  11
2023-12-05 09:00  1   0
2023-12-05 11:00  0   1

Because within the 9-10 hour you have your DateHour value "09" and no occurrences of the value "11" there (hence the corresponding counts), and within the hour 11-12 you have 0 and 1 counts. So if you want your timechart with the time formatted properly, you don't add the "by" clause. You simply do

| timechart span=1h count

And only _then_ do you format your time the way you want to display it. For example

| fieldformat _time=strftime(_time,"%H")