All Posts

I'm not very good with SPL. I currently have Linux application logs that show the IP address, user name, and whether the user had a failed or successful login. I'm interested in finding a successful login after one or more failed login attempts. I currently have the following search. The transaction command is necessary where it is; otherwise, all the events are split up into separate events of varying line counts.

index=honeypot sourcetype=honeypotLogs
| transaction sessionID
| search "SSH2_MSG_USERAUTH_FAILURE" OR "SSH2_MSG_USERAUTH_SUCCESS"

Below is an example event. For clarity, I replaced or omitted details from the logs below.

[02] Tue 27Aug24 15:20:57 - (143323) Connected to 1.2.3.4
...
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_FAILURE
...
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[02] Tue 27Aug24 15:20:57 - (143323) User "bob" logged in
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_SUCCESS: successful login

Any tips on getting my search to find events like this? Currently I only have field extractions for the IP (1.2.3.4), user (bob), and sessionID (143323). I can possibly create a field extraction for the SSH2 messages, but I don't know whether that will help or not. Thanks!
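Since transaction merges each session's lines into one event, one possible approach is to count FAILURE messages inside each merged event and keep only sessions that also contain a SUCCESS. This is only a sketch, not tested against this data; the user and src_ip fields assume the poster's existing extractions and may need renaming:

```spl
index=honeypot sourcetype=honeypotLogs
| transaction sessionID
| eval fail_count = mvcount(split(_raw, "SSH2_MSG_USERAUTH_FAILURE")) - 1
| where fail_count > 0 AND like(_raw, "%SSH2_MSG_USERAUTH_SUCCESS%")
| table _time sessionID user src_ip fail_count
```

Each remaining row would then be a session with at least one failed attempt followed (within the same session) by a successful login.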
In the Azure Splunk Enterprise Application, under Users and Groups, I add the Azure security groups that contain the members who want access. Then go into the Single sign-on section and review the Attributes and Claims. You should have an entry with "groups" as the claim name. Its value should be set to "Groups assigned to the application" and its Source Attribute to "Group ID". This sends only the groups assigned to the application that the signing-in user is a member of, instead of every single group, which might exceed the limit.

On the Splunk side, you enter your AD group name (or, with some versions, it has to be the object ID of the Azure group) and then map it to the correct internal Splunk role you have created for that team.
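On the Splunk side, that mapping can be managed in the SAML Groups UI or directly in authentication.conf. A minimal sketch, where the role name and group object ID below are placeholders, not values from this thread:

```ini
# authentication.conf -- sketch only; substitute your own Splunk role
# and your Azure group's object ID (or group name, depending on version)
[roleMap_SAML]
team_analyst = 11111111-2222-3333-4444-555555555555
```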
1) What is the difference between using "| search ip=" and "ip="? They give the same outcome.

The idea that adding a filter to the index search improves performance is a general recommendation based on the assumption that the bottleneck is the number of raw events. This may not be the case. There is also the case where an index search filter works better: when the field-based filter applies to index-time extracted fields. I have observed that in some of my searches an index-time filter slows the search down rather than speeding it up. I have yet to conduct systematic research, but if it doesn't speed things up for you, use what works better.

2) Sorry about not mentioning dedup. Because dedup will remove any rows that have empty/null fields, I put the dedup after the join and added the fillnull command. If I moved it to each subsearch, I would need to add a fillnull command to each subsearch, which would probably add a delay. What do you think?

dedup has an option, keepempty, that you can try:

| dedup keepempty=true ip, risk, score, contact

In some of my use cases, keeping all events that have any empty field is a bit too much. In that case, you can do fillnull before dedup, provided that you don't care to print those rows with empty risk, score, or contact. Something like:

| inputlookup host.csv
| rename ip_address as ip
| join max=0 type=left ip
    [ search index=risk company IN (compA, compB)
      | fields ip risk score contact
      | fillnull value=UNSPEC risk score contact
      | dedup ip risk score contact ]
| foreach risk score contact
    [eval <<FIELD>> = if(<<FIELD>> == "UNSPEC", null(), <<FIELD>>)]
| table ip host risk score contact

You can also apply the same technique with split subsearches. Again, I do not know your data characteristics, so whether dedup does any good is for you to find out.
If I can't figure it out, I'll try the simple dashboard.
I tried this but it would not work:

eventtype=builder (user_id IN ($id$) OR user_mail IN ($email$)) | eval .....

I also tried:

eventtype=builder ((user_id IN ($id$)) OR (user_mail IN ($email$))) | eval ...

but that only works if both tokens are populated.
Thanks for the advice!

We're experiencing the same issues on the same RHEL (8.10). We will also check on our test env whether this helps.

Also interested in updates, if someone finds out something.

Regards,
Tobias
Thanks! This was exactly the fix I found. 
Modified Query
============

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1"
    where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m
    by entity.application.name
| eval hour = tonumber(strftime(_time,"%H"))
| eval dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response1
Original Query
============

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1"
    where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxxx" earliest=-31d@d latest=@d-1m
    by entity.application.name
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response
Never mind. I posted too soon. I replaced "| addcoltotals label=Total " with "| addcoltotals labelfield="Vision ID" label="Total"" and it worked. Thanks.
Use the labelfield option. It tells Splunk into which field (column) to put the total.

...
| table bank_fiid, "Transactions", "Good", "%Good", "Fair", "%Fair", "Unacceptable", "%Unacceptable", "Average", "Report Date"
| rename bank_fiid as "Vision ID"
| addcoltotals label=Total labelfield=bank_fiid
...
I am trying to exclude non-business hours and weekends in my mstats query.

Original Query:

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1"
    where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxxx" earliest=-31d@d latest=@d-1m
    by entity.application.name
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response

I modified the query as below and am not getting any results:

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1"
    where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m
    by entity.application.name
| eval hour = tonumber(strftime(_time,"%H"))
| eval dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response1

Can anyone please help me achieve this? Thanks in advance.
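One thing worth checking (an assumption about the cause, not a confirmed diagnosis): without a span argument, mstats returns a single aggregated row per group, so _time is not a per-hour value and the strftime-based hour/day filters can match nothing. A sketch that buckets by hour first, filters, then re-aggregates (shown for the load metrics only; the xhr metrics would follow the same pattern):

```spl
| mstats sum(builtin:apps.web.actionCount.load.browser:parents) AS Load_Count1
    avg(builtin:apps.web.visuallyComplete.load.browser:parents) AS Avg_Load_Response1
    where index=itsi_im_metrics AND source.name="DT_Prod_SaaS"
        AND entity.browser.name IN ("Desktop Browser","Mobile Browser")
        earliest=-31d@d latest=@d-1m
    by entity.application.name span=1h
| eval hour=tonumber(strftime(_time,"%H")), dow=tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| stats sum(Load_Count1) AS Load_Count1 avg(Avg_Load_Response1) AS Avg_Load_Response1
    by entity.application.name
```

Note that averaging hourly averages weights every hour equally rather than by event count; if that matters, a weighted average over the hourly buckets would be needed.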
The issue is resolved. I just had to tell Splunk that the time it receives is in UTC, and it worked. In props.conf, I just added this: TZ=UTC

@richgalloway and @PickleRick, appreciate the effort.
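For reference, a sketch of what that props.conf change might look like; the sourcetype name below is a placeholder, not one from this thread:

```ini
# props.conf -- sourcetype name is hypothetical; use the one your data carries
[my_sourcetype]
TZ = UTC
```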
Thank you, I will investigate this as well to see what works best.
Thank you, I will investigate this.
Here is my current query. I either get the Totals label in the last column or not at all. I need it to show in the first column, at the beginning of the Totals row. Any help is greatly appreciated. Thanks.

index=etims_na sourcetype=etims_prod platformId=5 bank_fiid=CHUA
| eval response_time=round(if(strftime(_time,"%Z") == "EDT", ((j_timestamp-entry_timestamp)-14400000000)/1000000, ((j_timestamp-entry_timestamp)-14400000000)/1000000-3600),3)
| stats count AS Transactions count(eval(response_time <= 1)) AS "Good" count(eval(response_time <= 2)) AS "Fair" count(eval(response_time > 2)) AS "Unacceptable" avg(response_time) AS "Average" BY bank_fiid
| eval "%Good"=(Good/Transactions)*100
| eval "%Fair"=(Fair/Transactions)*100
| eval "%Unacceptable"=(Unacceptable/Transactions)*100
| addinfo
| eval "Report Date"=strftime(info_min_time, "%m/%Y")
| table bank_fiid, "Transactions", "Good", "%Good", "Fair", "%Fair", "Unacceptable", "%Unacceptable", "Average", "Report Date"
| rename bank_fiid as "Vision ID"
| addcoltotals label=Total
| append [| makeresults | eval "Vision ID"="Threshold" | eval "Good"="response <= 1s" | eval "Fair"="1s < response <= 3s" | eval "Unacceptable"="3s < response"]
| fields - _time
@ASierra - We have run into the security group count issue. Currently, we map security groups via the 'SAML Groups' page, where each security group maps to a specific role. @jpondrom_splunk offers a solution that would require adding the app roles from Azure into the 'SAML Groups', then modifying the 'SAML Configuration' 'Role Alias' to a different value. This would retool all of the SAML authentication from 'groups' to 'roles'.

Clarifying question(s) on step 1: Did you mean 'Add the Azure App Roles to the SAML Groups'? Or is it the case that, having mapped Azure Security Groups to Splunk Roles via the 'SAML Groups' interface, I don't need to do anything else? It would be wonderful if a simple Azure change could provide the fix. I'm not averse to adding the Azure App Roles to the SAML Groups mapping, but the method you offer is much easier.
Of course. With a single-disk installation, a single disk also takes down the whole machine.

But seriously - it's just a matter of risk and cost management. Some users can accept the risk of the whole machine going down, knowing that the machine is cheaper (and possibly a bit faster). But I agree, storage is relatively cheap nowadays.

One important caveat: I'm of course talking about components which are replicated (indexers, search heads). You probably don't want a RAID0-based machine as the CM.
Hi @Anamika.David, Did you find a solution or any new information you can share about your question?
Hi @Alex_Rus,

if the disk is always mounted with the same name, you can put it in your inputs.conf:

[monitor://E:\my_folder\my_files.log]

Ciao.
Giuseppe
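For completeness, a fuller monitor stanza might look like the sketch below; the path, index, and sourcetype are all placeholders to be replaced with real values:

```ini
# inputs.conf -- sketch only; path, index, and sourcetype are hypothetical
[monitor://E:\my_folder\my_files.log]
index = my_index
sourcetype = my_sourcetype
disabled = 0
```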