All Posts

Hi, I'm trying to set up the AP4S app in our non-prod environment, and it looks like it will be beneficial to our Splunk admins. I wanted to ask about an error we're seeing in SH-14, the dashboard about KO changes, specifically in Panel 2 - List: "The 'ia4s_ko_changes_csv_lookup' lookup file doesn't exist." I've checked the corresponding job and it seems fine to me, so maybe I'm missing something. I also noticed that its corresponding search "IA4S-013" works fine, although its query is different from what's in SH-14, so I'm not really sure. Please advise.
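One quick check, as a minimal sketch (the lookup name is taken from the error message above): run this from the app's context to see whether the lookup file actually exists and is shared to that app.

| inputlookup ia4s_ko_changes_csv_lookup

If this errors out as well, the saved search that should populate the lookup has likely never written it, or its permissions don't extend to the dashboard's app context.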
This was indeed the issue. Thank you so much for the help.
Hi, I'm not sure if this is what you want, but isn't this already an option on the Incident Review Settings page? When I enable it, I am required to set a disposition other than the default of "undetermined". ** This is in Splunk ES 7.3.0, and it should have been added in ES 7.2.
Of course, the chart is correct; the explanation was bad - my mistake. The first series is, no doubt, timechart count by kmethod; the second one is, of course, summed/averaged numbers: timechart avg(duration). All data comes from access.log, whose format is something like: TIMESTAMP;IP;HTTP_METHOD;METHOD;RETURN_CODE;DURATION;BYTES;UUID
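For reference, a minimal sketch of how both series could be built from that log format (the field names are assumptions derived from the column list above, with kmethod mapped to the METHOD column):

| rex field=_raw "^(?<ts>[^;]+);(?<ip>[^;]+);(?<http_method>[^;]+);(?<kmethod>[^;]+);(?<return_code>[^;]+);(?<duration>[^;]+);(?<bytes>[^;]+);(?<uuid>[^;]+)" | timechart count by kmethod

and, for the second series:

| timechart avg(duration) as avg_duration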
Hi all. I am ingesting data into Splunk Enterprise from a file. This file contains a lot of information, and I would like Splunk to make the events start on the ##start_string line and end on the ##end_string line. Within these blocks there are different fields of the form ##key = value. Here is an example of the file:

…..
##start_string
##Field = 1
##Field2 = 12
##Field3 = 1
##Field4 =
##end_string
.......
##start_string
##Field = 22
##Field2 = 12
##Field3 = field_value
##Field4 =
##Field8 = 1
##Field7 = 12
##Field6 = 1
##Field5 =
##end_string
……

I have tried to create this sourcetype (with different regular expressions), but it creates only one event with all the lines:

DATETIME_CONFIG =
LINE_BREAKER = ([\n\r]+)##start_string
##LINE_BREAKER = ([\n\r]+)##start_string\s+(?<block>.*?)\s+##end_string
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
description = Format custom logs
pulldown_type = 1
disabled = false

How should I approach this case? Any ideas or help would be welcome. Thanks in advance.
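A minimal sketch of a props.conf stanza that breaks the stream at each block (the stanza name is hypothetical; note that LINE_BREAKER's capture group should cover only the line break itself, with the marker matched by a lookahead so it stays at the start of the next event):

[custom_block_logs]
# break before every ##start_string; the lookahead keeps the marker in the event
LINE_BREAKER = ([\r\n]+)(?=##start_string)
# each event is already a complete block, so disable line merging
SHOULD_LINEMERGE = false
# raise if blocks can exceed the default event size
TRUNCATE = 10000

With SHOULD_LINEMERGE left at true, Splunk re-merges the broken lines back together, which matches the single-event symptom described above.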
Hi, can someone help me with the following questions?
1) My current setup is on-premises and I plan to migrate to Splunk Cloud. What should I know? I don't want historical data to be transferred to the cloud.
2) Suppose I have 1000 UFs and 5 syslog servers; how should I send this data?
3) Should I install the Splunk credentials package on all of these 1000 + 5 machines, or should I deploy a HF first and then send the data to Splunk Cloud?
4) Is there any encryption and compression of data that I have to do before sending to the cloud, or is that taken care of by Splunk?
Still not clear - from your chart, it appears that kmethod is a string (jura_... etc). How do you then either sum these strings or take an average?
Hi @gcusello, thanks for the response. I'm talking about a Splunk ES correlation search. I set the Splunk notable priority to high, but when it generates a ticket in ServiceNow it comes through as P3, and I'm not sure why. Can you please help me fix this issue?
Hi, if I recall correctly, at HEC token creation do not select any index; use local/context/splunk_metadata.csv for that. I think that fixed it. Daniel
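Assuming this refers to SC4S (which is where local/context/splunk_metadata.csv lives), index assignment is one row per source key in that CSV, in the form key,metadata,value. A sketch with a hypothetical key and index name:

my_vendor_product,index,custom_index

The middle column is the metadata field being set (index, source, sourcetype, or host), and SC4S applies it instead of any index chosen on the HEC token.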
Can you please show me how to do it on my query? index=* | eval bytes=10+20 | table id.orig_h, id.resp_h, bytes Thanks!
| rest splunk_server=local count=0 /services/saved/searches | search disabled=0 | table title,search,*

Using the Splunk Web search query above with the JSON file, I confirmed 12 rules for Splunk Cloud and 8 rules for Splunk Enterprise. If there are no internal data values, we will make sure the rule configurations exist in Sentinel. I would also like to ask whether there is sample data that can be checked using the query above.
Thanks @gcusello @PickleRick for all the replies, tips, and hints. They have been very helpful. In the end, I went with one data model, with segregation done using source filtering. I'm still fiddling with adding fields to the data model, but I'm sure it will be a nice addition to have extra info, like indexes, in the data model fields or during indexing. I wish I could mark both as solutions, but since I can only accept one, I will select Rick's, as that reply gave me the eureka moment that a single model doesn't impact the security roles (index selection), which made me switch to the single data model. All in all, the replies have taught me a lot. A big thank you.
Hi @pubuduhashan, if you're speaking of permitting different access to a dashboard or an index for different roles, that's possible by managing the grants on the individual knowledge objects (dashboards, eventtypes, etc.). If you want to give limited access to some fields extracted from the raw data in an index, you can give different grants on the field extractions, but the users with that role will still have access to the raw data in the index; it isn't possible to restrict access to only part of an index, because grants are managed for the full index. In this case, you should copy only the information for the restricted roles into a different summary index (you don't pay additional license for this) and use that data for the restricted roles, but this takes longer to implement. Ciao. Giuseppe
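A minimal sketch of that summary-index approach (the index names, filter, and field list are hypothetical; run it as a scheduled search):

index=main sourcetype=myapp:logs region=emea | fields _time host status user | collect index=summary_restricted

The restricted role is then granted access to summary_restricted only, not to main, so it never sees the full raw data.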
Hi @anandhalagaras1, please try this:

[your_sourcetype]
SEDCMD-mask_passwords = s/password: ([^;]+);cpassword: ([^;]+);/password: ####;cpassword: ####;/g

which you can test at https://regex101.com/r/ppaFZc/1
Ciao. Giuseppe
Hi @debjit_k, are you speaking of Splunk Support priority or Priority in ES Correlation Searches? If Splunk Support, you define the Priority of your Cases when you open them. If you're speaking of Priority in ES Correlation Searches, it's assigned to the associated assets or identities, and together with the Severity of the Correlation Search it determines the Urgency of a Notable. Ciao. Giuseppe
Hi Team, I want to mask two of the fields, "password" and "cpassword", in events which are being written with plain-text information, so they need to be changed to #####. Sample event information:

[2024-01-31_07:58:28] INFO : REQUEST: User:abc CreateUser POST: name: AB_Test_Max;email: xyz@gmail.com;password: abc12345679;cpassword: abc12345679;role: User;
[2024-01-30_14:05:42] INFO : REQUEST: User:xyz CreateUser POST: name: Math_Lab;email: abc@yahoo.com;password: xyzab54;cpassword: xyzab54;role: Admin;

So kindly help with the props.conf so that I can apply it with SEDCMD-mask.
Hi All, just wanted to ask: we have Splunk ES and we use ServiceNow to trigger alerts. My question is, if there are a few alerts that I want to give a priority of P2, how can I do that in Splunk, since the default priority in Splunk is P3?
When you say "in another application" what do you mean The predict command can be used to predict future trends https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/Predict  ... See more...
When you say "in another application" what do you mean The predict command can be used to predict future trends https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/Predict    
Try this - use your index, and I assume that the event _time stamp is the login time.

index=bla userID=text123 earliest=-5m@m latest=@m | stats dc(ip_addr) as ips by userID | where ips>1

If your events contain other info than just login details, then you may need to add login_time=* to the search.
Hello everyone, I need a solution for this. My data:

userID=text123, login_time="2024-03-21 08:04:42.201000", ip_addr=12.3.3.21
userID=text123, login_time="2024-03-21 08:00:00.001000", ip_addr=12.3.3.45
userID=text123, login_time="2024-03-21 08:02:12.201000", ip_addr=12.3.3.21
userID=text123, login_time="2024-03-21 07:02:42.201000", ip_addr=12.3.3.34

I want to get the data where userID="text123" AND the logins are within the last 5 minutes AND there are multiple IPs. I have tried join, map, and append but haven't solved it. Please help with the SPL for this.