All Posts

Thanks, but I worked out what was causing the issue. Another app that was supposed to be deployed to the Search Head Cluster had mistakenly been deployed to the Indexer Cluster. After I removed this app from the master-apps folder, I redeployed the new one and it validated and pushed down to the indexer nodes successfully.
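For anyone hitting the same problem, the cleanup happens on the cluster manager. A minimal sketch, assuming the default master-apps location (the app name here is hypothetical):

    # remove the misplaced app from the manager's staging folder (name is made up)
    rm -r $SPLUNK_HOME/etc/master-apps/misplaced_shc_app
    # check the bundle before pushing anything
    $SPLUNK_HOME/bin/splunk validate cluster-bundle
    # roll the corrected bundle out to the peer nodes
    $SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes

validate cluster-bundle reports configuration problems before anything is distributed, and apply cluster-bundle pushes the corrected bundle to the peers.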
Dear Splunkers,

I need to ingest some Apache log files. The log files are first sent to a syslog server by rsyslog, and rsyslog adds its own information to each line. A UF is installed on this syslog server and can monitor the log files and send them to the indexers. Each line of the log file looks like this:

2024-02-16T00:00:00.129824+01:00 website-webserver /var/log/apache2/website/access.log 10.0.0.1 - - [16/Feb/2024:00:00:00 +0100] "GET /" 200 10701 "-" "-" 228

As you can see, the first part of the line, up to "/access.log ", was added by rsyslog, so this is something I want Splunk to filter out / delete. So far, I'm able to monitor the file and strip the rsyslog layer from the events with a SEDCMD parameter, and with a TIME_PREFIX parameter Splunk automatically detects the timestamp:

SEDCMD-1 = s/^.*\.log //g
TIME_PREFIX = - - \[

I created a custom sourcetype accordingly. The issue is that field extraction is not working properly: almost no field besides the _time-related fields is being extracted. I guess it's because I'm using a custom sourcetype, so Splunk is not extracting the fields automatically as it would for a known sourcetype, but I'm not really sure. I'm a bit lost. Thanks a lot for your kind help.
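A minimal sketch of what I think is missing on the search-time side, assuming the access-extractions report that Splunk ships for the built-in access_combined sourcetype also matches my cleaned events (the sourcetype name is mine):

    [custom_apache_access]
    # index-time: strip the rsyslog prefix, anchor the timestamp
    SEDCMD-strip_rsyslog = s/^.*\.log //g
    TIME_PREFIX = - - \[
    # search-time: attach the stock Apache extractions to the custom sourcetype
    REPORT-access = access-extractions

The idea is that a custom sourcetype gets no automatic extractions, so the stock Apache report has to be attached to it explicitly; please correct me if that's the wrong approach.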
Try removing it from your initial filter:

index="mulesoft" applicationName="ext" environment=DEV
    (message="API: START: /v1/revpro-to-oracle/onDemand")
    OR (message="API: START: /v1/fin_Zuora_GL_Revpro_JournalImport")
    OR (message="API: START: /v1/revproGLImport/onDemand*")
Assuming you want the average duration from all events, you could do something like this:

| bin _time span=30m
| eventstats count by _time method
| appendpipe
    [| eventstats sum(duration) as count by _time
     | eval method="duration"]
| xyseries _time method count
| addtotals fieldname=total
| eval total=total-duration
| eval average=duration/total
| fields - duration total

Using dummy data, this gives something like this:
Hi, thanks so much for the comment. I'm working on ES 7.2 and this feature still seems to be missing. I will update the ES app soon so I get this functionality back.
Hi guys, I am trying to exclude a field value. I need to exclude message="API: START: /v1/Journals_outbound".

index="mulesoft" applicationName="ext" environment=DEV
    (message="API: START: /v1/Journals_outbound")
    OR (message="API: START: /v1/revpro-to-oracle/onDemand")
    OR (message="API: START: /v1/fin_Zuora_GL_Revpro_JournalImport")
    OR (message="API: START: /v1/revproGLImport/onDemand*")
| search NOT message IN ("API: START: /v1/Journals_outbound")
Hi, I'm trying to set up this AP4S app in our nonprod environment, and it seems it will be beneficial to our Splunk admins. I just wanted to check on the error we're seeing in SH-14, the dashboard about KO changes, particularly in Panel 2 - List: "The 'ia4s_ko_changes_csv_lookup' lookup file doesn't exist." I've checked the corresponding job and it seems fine to me; maybe I'm missing something. But I noticed that its corresponding search "IA4S-013" works fine. The query is different from what's in SH-14, though, so I'm not really sure. Please advise.
This was indeed the issue. Thank you so much for the help.
Hi, not sure if this is what you want, but isn't this already an option in the Incident Review Settings page? When I enable this, I am required to set a disposition other than the default of "undetermined". ** This is in Splunk ES 7.3.0, and it should have been added in ES 7.2.
Of course, the chart is correct; the explanation was bad, my mistake. So the first series is, no doubt, timechart count by kmethod. The second one is of course the summed/averaged numbers, timechart avg(duration). All data comes from access.log, whose format is something like:

TIMESTAMP;IP;HTTP_METHOD;METHOD;RETURN_CODE;DURATION;BYTES;UUID
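Roughly, both series come out of the same events; a sketch assuming the fields are extracted from the semicolon-delimited format above (index and sourcetype names are placeholders):

index=web sourcetype=access_custom
| rex "^(?<ts>[^;]+);(?<ip>[^;]+);(?<http_method>[^;]+);(?<kmethod>[^;]+);(?<return_code>[^;]+);(?<duration>[^;]+);(?<bytes>[^;]+);(?<uuid>[^;]+)$"
| timechart span=30m count by kmethod
| appendcols
    [ search index=web sourcetype=access_custom
      | rex "^(?:[^;]+;){5}(?<duration>[^;]+);"
      | timechart span=30m avg(duration) as avg_duration ]

The appendcols subsearch adds the avg_duration column alongside the per-kmethod counts on the same 30-minute buckets.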
Hi all. I am ingesting data into Splunk Enterprise from a file. The file contains a lot of information, and I would like Splunk to make the events start on ##start_string and end at the next occurrence of ##end_string. Within these blocks there are different fields of the form ##key = value. Here is an example of the file:

…..
##start_string
##Field = 1
##Field2 = 12
##Field3 = 1
##Field4 =
##end_string
.......
##start_string
##Field = 22
##Field2 = 12
##Field3 = field_value
##Field4 =
##Field8 = 1
##Field7 = 12
##Field6 = 1
##Field5 =
##end_string
……

I have tried to create this sourcetype (with different regular expressions), but it creates only one event with all the lines:

DATETIME_CONFIG =
LINE_BREAKER = ([\n\r]+)##start_string
##LINE_BREAKER = ([\n\r]+##start_string\s+(?<block>.*?)\s+##end_string
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
description = Format custom logs
pulldown_type = 1
disabled = false

How should I approach this case? Any ideas or help would be welcome. Thanks in advance.
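One approach that might work, sketched under the assumption that each block should become one event, that the blocks carry no usable timestamp, and that search-time extraction of the repeated ##key = value pairs is acceptable (stanza and transform names are made up):

props.conf:

    [custom_block_logs]
    # let LINE_BREAKER alone decide event boundaries
    SHOULD_LINEMERGE = false
    # break before each ##start_string; the lookahead keeps it in the event
    LINE_BREAKER = ([\r\n]+)(?=##start_string)
    # no timestamp in the blocks, so use index time (assumption)
    DATETIME_CONFIG = CURRENT
    REPORT-block_kv = block_kv

transforms.conf:

    [block_kv]
    # capture every ##key = value pair within the event
    REGEX = ##(\w+)\s*=\s*([^\r\n]*)
    FORMAT = $1::$2
    MV_ADD = true

MV_ADD = true makes the transform keep every key = value pair in the event instead of only the first match.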
Hi, can someone help me with the following questions?

1) My current setup is on-premises and I plan to migrate to Splunk Cloud. What should I know? I don't want historical data to be transferred to the cloud.
2) Suppose I have 1000 UFs and 5 syslog servers; how should I be sending this data?
3) Should I install the Splunk Cloud credentials package on all of these 1000 + 5 machines, or should I deploy a HF in front and then send the data to Splunk Cloud?
4) Is there any encryption or compression of the data that I have to do before sending it to the cloud, or is that taken care of by Splunk?
Still not clear - from your chart, it appears that kmethod is a string (jura_... etc). How do you then either sum these strings or take an average?
Hi @gcusello, thanks for the response. I'm talking about a Splunk ES correlation search. I set the Splunk notable priority to high, but when it generates a ticket in ServiceNow it comes through as P3, and I'm not sure why. Can you please help me fix this issue?
Hi, if I recall correctly: at HEC token creation, do not select any index; use local/context/splunk_metadata.csv for that instead. I think that fixed it. Daniel
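If this is SC4S (which is where local/context/splunk_metadata.csv lives), the index override is a one-line CSV entry; a sketch with an example key and an invented index name:

    cisco_asa,index,netfw

The first column is the vendor_product key SC4S assigns to the source, the second is the metadata field being set, and the third is its value.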
Can you please show me how to do it on my query?

index=*
| eval bytes=10+20
| table id.orig_h, id.resp_h, bytes

Thanks!
| rest splunk_server=local count=0 /services/saved/searches
| search disabled=0
| table title,search,*

Using the Splunk Web search query above against the JSON file, I confirmed 12 rules for Splunk Cloud and 8 rules for Splunk Enterprise. If there are no internal data values, we will make sure the rule configuration exists in Sentinel. I would also like to ask whether there is sample data that can be checked with the query above.
Thanks @gcusello @PickleRick for all the replies, tips, and hints. It has been very helpful. In the end, I went with one data model, with segregation done using source filtering. I'm still fiddling with adding fields to the data model, but I'm sure it will be a nice addition to have extra info, like indexes, in the data model fields or during indexing. I wish I could mark both as solutions, but since I can only accept one, I will select Rick's, as that reply gave the Eureka moment that a single model doesn't impact the security roles (index selection), which made me switch to the single data model. All in all, the replies have taught me a lot. A big thank you.
Hi @pubuduhashan, if you're speaking of permitting different access to a dashboard or to an index for different roles, it's possible by managing the grants on the individual knowledge objects (dashboards, eventtypes, etc.). If you want to give limited access to some fields extracted from the raw data in an index, you can give different grants to the field extractions, but users with that role will still have access to the raw data in the index; it isn't possible to restrict access to only part of an index, because grants are managed for the full index. In this case, you should copy only the information for the restricted roles into a different summary index (you don't pay additional license for this) and use that data for the restricted roles, but this takes longer to implement. Ciao. Giuseppe
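To make the summary-index idea concrete, a sketch of a scheduled search that copies only the permitted fields (index, field, and sourcetype names are invented):

index=sensitive_index sourcetype=app_logs
| fields _time host status action
| collect index=summary_restricted

The restricted role is then given access only to summary_restricted, so its users search the copied subset and never touch sensitive_index.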
Hi @anandhalagaras1, please try this:

[your_sourcetype]
SEDCMD-mask_pwd = s/password: ([^;]+);cpassword: ([^;]+);/password: (####);cpassword: (####);/g

(Note that SEDCMD needs a class suffix, e.g. SEDCMD-mask_pwd, and supports the g flag.) You can test the regex at https://regex101.com/r/ppaFZc/1 Ciao. Giuseppe