All Posts



Hello Community, I am encountering an issue where logs are not being received in two regions but are successfully received in another region. Upon further investigation, we didn't observe any errors in splunkd.log, the inputs and outputs configs are in place, and there are no disk space issues either. Can anyone help with what the possible reason for this could be? All our indexers and SHs are hosted in Splunk Cloud.
Is anyone still facing this issue? Is there a fix for it?
I'm trying to build an alert that looks at the number of logs from the past three days and compares it to the number of logs from the three days before that. I want to fire an alert if the log count drops by 30% or more between the two periods. I've seen this done with search and with _index, but I'm unsure which way is best. I don't want to build almost 100 searches for 100 different source types; I'd much rather do it by the twenty-something indexes. I'm not sure if ML is the right way to do this, but I've seen times when logs stop flowing and it isn't noticed for days, and I want to prevent that from happening. Any help is appreciated.
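In case it helps, here is a rough, untested sketch of one way to do this with tstats, comparing the last three days against the three days before, per index (the window boundaries and the 30% threshold are assumptions to adjust for your environment):

```spl
| tstats count where index=* earliest=-6d@d latest=@d by index _time span=1d
``` label each day as belonging to the recent or the previous 3-day window ```
| eval window=if(_time >= relative_time(now(), "-3d@d"), "recent", "previous")
| stats sum(count) as events by index window
| xyseries index window events
| eval drop_pct=round((previous - recent) / previous * 100, 1)
| where drop_pct >= 30
```

Scheduled daily, this would return (and so alert on) any index whose recent three-day volume is at least 30% below the prior three days.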
I am using Splunk Enterprise version 9.1.0.1. My search query is:

index="webmethods_prd" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" InterfaceName=USCUSTOMERPO Status=Success OR Status=Failure
| eval timestamp=strftime(_time, "%F")
| chart limit=30 dc(TxID) over Sender_ID by timestamp

In the result I am getting an incomplete Sender_ID - Splunk dropped everything after the space in Sender_ID, but it should actually be the full name. How can I preserve the full Sender_ID here? Avik
You can do it, but it's a little fiddly - it involves making the response value a multivalue field, with the response value as the first element and an indicator of whether it's over threshold saved as the second value. Using CSS you stop the second value from being displayed, then use the standard colorPalette settings to set the colour. Here's an example panel that demonstrates how:

<panel>
  <html depends="$hidden$">
    <style>
      #coloured_cell table tbody td div.multivalue-subcell[data-mv-index="1"]{
        display: none;
      }
    </style>
  </html>
  <table id="coloured_cell">
    <title>Colouring a table cell based on its relative comparison to another cell</title>
    <search>
      <query>
| makeresults count=10
| fields - _time
| eval Threshold=random() % 100, Value=random() % 100
| eval Comparison=case(Value &lt; Threshold, -1, Value &gt; Threshold, 1, true(), 0)
| eval Value=mvappend(Value, Comparison)
| table Threshold Value
      </query>
      <earliest>-15m</earliest>
      <latest>now</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="count">100</option>
    <format type="color" field="Value">
      <colorPalette type="expression">case(mvindex(value, 1) == "1", "#ff0000", mvindex(value, 1) == "-1", "#000000", true(), "#00ff00")</colorPalette>
    </format>
  </table>
</panel>

The search makes a field called Comparison, which is -1 if Value is < Threshold, 1 if Value is > Threshold, and otherwise 0. This is appended to the real Value field. The <colorPalette> expression then tests mvindex(value, 1), which gives the Comparison number, and the expression defines the colour.
What's your search that's returning the data - is this in a dashboard? If the data is coming from a KV store then you will have to do the time bounding yourself: Splunk will not limit your result set based on time, as there is no comparable time concept in the KV store.
That looks OK, so it means your field called SERVERNAME is not exactly matching those strings. The in() eval function is an exact match. If you just do

index=some_index
| table SERVERNAME

do you see exactly those strings? If it's an upper/lower case thing, you can do ... in(lower(SERVERNAME),"servername1"...
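To make the case-insensitive variant concrete, here's a small sketch (the server and pod names are placeholders, not from your data). If your version of in() insists on a plain field as its first argument, lower-case the field in a separate eval first:

```spl
index=some_index
``` normalise case once, then compare against all-lowercase literals ```
| eval sn=lower(SERVERNAME)
| eval PODNAME=case(in(sn, "servername1", "servername2"), "ONTARIO",
                    true(), "DANG")
| timechart span=10min count by PODNAME
```

Note that every value in the in() list must then also be all lower case, or nothing will match.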
Thanks for your reply, but the statement is not returning only the specific event's user name. Here are two events.

group.user_membership.add Event {"actor": {"id": "spr1g8od2gOPLTfra4h7", "type": "SystemPrincipal", "alternateId": "system@okta.com", "displayName": "Okta System", "detailEntry": null}, "client": {"userAgent": null, "zone": null, "device": null, "id": null, "ipAddress": null, "geographicalContext": null}, "device": null, "authenticationContext": {"authenticationProvider": null, "credentialProvider": null, "credentialType": null, "issuer": null, "interface": null, "authenticationStep": 0, "externalSessionId": "trs-tF3wuwOTRiKM_BZirBk9A"}, "displayMessage": "Add user to group membership", "eventType": "group.user_membership.add", "outcome": {"result": "SUCCESS", "reason": null}, "published": "2024-02-20T15:40:04.384Z", "securityContext": {"asNumber": null, "asOrg": null, "isp": null, "domain": null, "isProxy": null}, "severity": "INFO", "debugContext": {"debugData": {"triggeredByGroupRuleId": "0pr7fprux4jw2hORP4h7"}}, "legacyEventType": "core.user_group_member.user_add", "transaction": {"type": "JOB", "id": "cpb7g4ndq8ZaAR5S14h7", "detail": {}}, "uuid": "5115faa0-d006-11ee-84e8-0b1ac5c0434f", "version": "0", "request": {"ipChain": []}, "target": [{"id": "00u7g4ndmhZ2j2J1i4h7", "type": "User", "alternateId": "USER-EMAIL", "displayName": "USER-NAME", "detailEntry": null}, {"id": "00g7fpoiohiAF2JrY4h7", "type": "UserGroup", "alternateId": "unknown", "displayName": "GROUP-NAME", "detailEntry": null}]}

user.authentication.sso Event {"actor": {"id": "00u1p2k8w5CVuKgeq4h7", "type": "User", "alternateId": "USER-EMAIL", "displayName": "USER-NAME", "detailEntry": null}, "device": null, "authenticationContext": {"authenticationProvider": null, "credentialProvider": null, "credentialType": null, "issuer": null, "interface": null, "authenticationStep": 0}, "displayMessage": "User single sign on to app", "eventType": "user.authentication.sso", "outcome": {"result": "SUCCESS", "reason": null}, "published": "2024-02-20T22:25:18.552Z", "signOnMode": "OpenID Connect", "target": [{"id": "0oa2n26twxcr3lNWO4h7", "type": "AppInstance", "alternateId": "APPLICATION-NAME", "displayName": "OpenID Connect Client", "detailEntry": {"signOnModeType": "OPENID_CONNECT"}}, {"id": "0ua2n4im21IccI2Eh4h7", "type": "AppUser", "alternateId": "USER-EMAIL", "displayName": "USER-NAME", "detailEntry": null}]}

And my query:

index="IndexName" (eventType="group.user_membership.add" OR eventType="user.authentication.sso")
| rename "target{}.alternateId" AS "targetId"
| rename "target{}.type" AS "targetType"
| eval User=if(eventType="group.user_membership.add", mvindex(targetId, mvfind(targetType, "User")), "SSO User")
| spath "target{}.displayName"
| rename "target{}.displayName" as grpID
| eval groupName=mvindex(grpID, 1)
| table User groupName
| where eventType="user.authentication.sso"

What I'm looking to do is grab the user name and group name for eventType="group.user_membership.add" only (this event type tells me when a user is added to a particular group), then search for that user name in eventType="user.authentication.sso" and display the result as group name and user name. Basically I want to get the list of users, by group name, who have started using the authentication service. Thanks again for your time.
Yep. +1 on that. HEC does skip some parts of the pipeline (line breaking, often timestamp recognition) but the index-time extractions and evals are applied normally.
Yes, that type of table can be done with chart, e.g.

... | chart count over PF by servername

What that won't do is distinguish which source it came from, which may or may not be relevant to your use case. Do you care if the count is combined between source 1 and source 2?
You can generally do this type of logic directly in the stats through eval statements, with the eval categorisation done prior to the stats for clarity. So, to expand on the 'type' logic I used in the earlier example, something like this:

| eval type=case(match(message,"created"),1, match(message,"disconnected"),2, match(message,"other_message"),3)
| stats count(eval(if(type<3,_time,null()))) as connection_count
        count(eval(if(type=3,_time,null()))) as message_count
        values(type) as types
        min(eval(if(type<3,_time,null()))) as first_event_time
        range(eval(if(type<3,_time,null()))) as duration
        by userId, traceId
| addinfo
``` Handle created but no disconnect ```
| eval duration=if(duration=0 AND connection_count=1 AND types=1, info_max_time - first_event_time, duration)
``` Handle disconnect but no created ```
| eval duration=if(duration=0 AND connection_count=1 AND types=2, first_event_time - info_min_time, duration)
| stats values(message_count) as message_count sum(duration) as duration by userId

So type is 1-3 depending on the text you want to match; the count eval statements in the stats then count the event types, and the time calculations exclude type=3. This is untested, but hopefully you get the picture. There are probably some optimisations there, but it should do what you need.
Hi, as transforms are handled in the typing processor (based on this picture: https://www.aplura.com/assets/pdf/hec_pipelines.pdf), it's doable. r. Ismo
On Linux you could use the command "ss -napt" to see which processes are listening and on which ports.
Hello, in Splunk Enterprise, I would like to know if it is possible to apply INGEST_EVAL processing at the indexer layer for data that is coming to the indexer from HEC (HTTP Event Collector). Thanks
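For what it's worth, here is a minimal sketch of what that could look like in props.conf/transforms.conf on the indexer (the sourcetype, stanza name, and field are assumptions, not from your environment):

```ini
# props.conf -- keyed off the sourcetype the HEC input assigns
[my_hec_sourcetype]
TRANSFORMS-hec_ingest = add_region_field

# transforms.conf -- runs at index time in the typing pipeline
[add_region_field]
INGEST_EVAL = region=lower(coalesce(region, "unknown"))
```

As the replies note, HEC event data still passes through the typing pipeline on the indexer, so sourcetype-keyed INGEST_EVAL transforms should apply; test against a non-production index first.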
Thanks, but I haven't quite got it. The query is accepted, but PODNAME is not being set (everything falls under DANG).

index=some_index
| eval PODNAME=case(in(SERVERNAME, "servername1", "servername2", "servername3"), "ONTARIO",
                    in(SERVERNAME, "servername4", "servername5", "servername6"), "GEORGIA",
                    1==1, "DANG")
| timechart span=10min count by PODNAME
Thanks a lot @bowesmana, this helped me. There are also some message-sent events in between these connect and disconnect events (from which I need to calculate how many messages were sent per userId). Is there any way to exclude these message-sent events when calculating min(_time), in cases where no connect event was available in the logs?
1. Neat idea. You could adjust that to "scale" from x.00 to x.59. 2. With a locale that uses a comma for the decimal point it will look worse (but I don't remember if Splunk can render the comma as the decimal separator).
To be precise, I didn't suggest using evals. This eval is already defined within the TA and it's the reason why the field is empty.
The question is whether you really want LB or whether you're mistaking it for HA. And receiving syslog directly on a Splunk box, regardless of whether it's a UF or HF, is not a great idea.
Did you ever find an answer to this? I have the same question.