All Topics

We are using the latest version of the Splunk App for Jenkins and have configured it to use our own index. The drop-down filters are all populating correctly; however, the search panels are all using the default Jenkins indexes. Has anyone encountered this before? I've looked at the configuration files and I can see that the indexes we set have been added to the macros in local/macros.conf, but I don't see anywhere else that they have been set, so I would assume the panels should be using the same macros. If this app used standard dashboards and panels we could just override the searches they run, but it uses JavaScript to build and execute these searches, so I'm at a loss for how to resolve this on our own.
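One check worth trying (a guess, since I don't have the app in front of me): btool can show which macro definitions actually win at runtime and which file they come from. The app directory name below is an assumption; adjust it to whatever the Jenkins app folder is called on your system.

    $SPLUNK_HOME/bin/splunk btool macros list --debug --app=splunk_app_jenkins

The --debug flag prints the source file for each setting, which should reveal whether the JavaScript-driven panels reference different macro names than the ones you edited in local/macros.conf.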
Hi everyone, I have some questions about skipped searches. With the following search I have found that my SH has had a fair number of skipped searches (2,800 in the last 7 days):

    index=_internal skipped sourcetype=scheduler status=skipped
    | stats count by app search_type reason savedsearch_name
    | sort -count

I have written other searches that show me all saved searches and their cron schedules, and found that I have more than 70 searches running every 5 minutes and a few running every minute. Could that be the cause of the skipped searches, even though each one runs for only a few seconds (5 seconds at most)? On all 70 scheduled searches the schedule_window parameter is set to 0.
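One mitigation worth trying, sketched under the assumption that the reason column from the search above points at concurrency limits: give searches that are not time-critical a schedule window, so the scheduler can defer them instead of skipping them. In savedsearches.conf, with a hypothetical stanza name:

    [my every-5-minutes search]
    # let the scheduler delay this run within its schedule period instead of skipping it
    schedule_window = auto

Staggering the cron offsets of the 70 five-minute searches (e.g. 1-59/5, 2-59/5, and so on) can also flatten the concurrency peaks that cause skips.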
Hi all, I have a Correlation Search that generates notable events while ignoring the throttling configuration. The search is "Excessive Logins Failed" and is set with the following parameters:

Cron schedule: */20 * * * *
Time range: from '-65m' to 'now'
Scheduling: continuous
Schedule window: No Window
Scheduling priority: Default
Trigger condition: number of results > 0
Throttling window: 86400 seconds
Throttling fields to group by: src

The search is the following:

    | tstats summariesonly=true allow_old_summaries=true dc(Authentication.user) as "user_count", dc(Authentication.dest) as "dest_count", count from datamodel="Authentication"."Authentication" where Authentication.user!=*$ nodename="Authentication.Failed_Authentication" by "Authentication.app","Authentication.src"
    | `drop_dm_object_name(Authentication)`
    | replace "::ffff:*" with "*" in src
    | where count>=500

The search runtime is very short (a few seconds), so I'm sure there are no overlapping runs. Nevertheless, I often find notable events generated for the same 'src' within the last 24 hours. I also have another Correlation Search (brute-force attack detection) with a similar configuration and schedule, and in that case the throttling works fine. Can anyone help me with this? Is anybody else having the same issue? Thanks in advance.
Hello, I would like to exclude just one user from log forwarding and I am wondering if my solution will work. In inputs.conf I would define:

    [monitor:///home/nessus/.bash_history]
    disabled = true

    [monitor:///home/*/.bash_history]
    disabled = false

The goal is to exclude data from the user nessus but to log everybody else. I am not sure if it's a good solution; maybe someone has a better idea?
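An alternative sketch, in case the explicit disabled stanza doesn't take precedence the way you expect: monitor inputs support a blacklist regex that is matched against the full file path, so a single stanza can cover everyone except nessus.

    [monitor:///home/*/.bash_history]
    # skip any file whose path matches this regex (the nessus home directory)
    blacklist = ^/home/nessus/
    disabled = false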
Hi Team, I have the following requirement: I have a report that needs to be scheduled to run every 10 minutes. The catch is that I want the first search of the day to run at 00:10, and after that it should run every 10 minutes. I am implementing the report in the 'Search and Reporting' app. Thanks in advance.
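Standard cron syntax can't quite express "every 10 minutes except midnight" in a single expression, so one sketch (untested) is two scheduled copies of the report:

    10-50/10 0 * * *     first run at 00:10, then 00:20 through 00:50
    */10 1-23 * * *      every 10 minutes for the remaining hours

If an extra run at 00:00 is acceptable, a single */10 * * * * is of course much simpler.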
Hi team, how can I get the value of 'status' from the payload below in a Splunk search?

    {"log":" \"status\" : \"END\",","payload":"stdout","time":"2021-08-13T11:54:17.255787345Z"}

Thanks in advance.
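A minimal sketch, assuming the event is indexed as the raw JSON shown (index and sourcetype names below are placeholders): parse the outer JSON with spath, then pull the inner key out of the log field with rex.

    index=your_index sourcetype=your_sourcetype
    | spath input=_raw path=log output=log
    | rex field=log "\"status\"\s*:\s*\"(?<status>[^\"]+)\""
    | table _time status

If Splunk has already auto-extracted the log field (it often does for JSON events), the spath line can be dropped.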
Hi Experts, I have created a search query to fetch details from a Linux log, extracted a timestamp field, and converted it with the strftime function. The timestamp from the Linux log is 1628674387976621.

    | eval CT_time=strftime(Start_Time/pow(10,6),"%d/%m/%Y %H:%M:%S")

Now I would like to filter the events based on the converted time, i.e. from one CT_time value to another. Please help with a query to filter on the converted timestamp. Regards, Karthikeyan.SV
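A sketch of one way to do it, keeping the comparison in epoch seconds rather than the formatted string (string comparison on a %d/%m/%Y value sorts incorrectly); the index name and the date range are placeholders:

    index=your_index
    | eval CT_epoch=Start_Time/pow(10,6)
    | where CT_epoch>=strptime("11/08/2021 00:00:00","%d/%m/%Y %H:%M:%S") AND CT_epoch<strptime("12/08/2021 00:00:00","%d/%m/%Y %H:%M:%S")
    | eval CT_time=strftime(CT_epoch,"%d/%m/%Y %H:%M:%S")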
I'm reading the docs about sharing summaries between search heads and I'm a bit puzzled. https://docs.splunk.com/Documentation/Splunk/8.2.1/Knowledge/Sharedatamodelsummaries The article states: "You can find the GUID for a search head cluster in the [shclustering] stanza of server.conf. If you are running a single instance you can find the GUID in etc/instance.cfg." But in my case the only GUIDs I can find are those of the individual shcluster members in etc/instance.cfg, and of course each one is different. I cannot seem to find a "search head cluster GUID" anywhere. What am I doing wrong?
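One thing to check (an assumption on my part, not verified on every version): after the cluster has been bootstrapped, members should share an id setting inside the [shclustering] stanza of etc/system/local/server.conf, distinct from the per-instance GUID in etc/instance.cfg:

    [shclustering]
    id = <shared cluster GUID, identical on every member>

If it isn't there, the output of splunk show shcluster-status on a member may be another place to look.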
Hi, I am using the query below to calculate the percentage value from the available and total columns.

    index=nextgen mango_trace="SyntheticTitan*"
    | where status = "200" OR status = "204"
    | stats count as available by service
    | appendcols [search index=nextgen mango_trace="SyntheticTitan*" | stats count as total by service]
    | eval percentage = round((available/total)*100,2)
    | table service, percentage, available, total

I want to trigger an alert when a percentage value is less than 100.00. My search results for the above query show one row per service with the percentage, available, and total columns [screenshot omitted]. Can you please help me with the trigger conditions to set an alert if any of the service percentages is less than 100.00? Thanks, SG
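One sketch that avoids a separate trigger expression: compute both counts in a single stats call (which also sidesteps the row-alignment risk of appendcols) and filter in the search itself, then set the alert to trigger when the number of results is greater than 0.

    index=nextgen mango_trace="SyntheticTitan*"
    | stats count as total, count(eval(status="200" OR status="204")) as available by service
    | eval percentage = round((available/total)*100,2)
    | where percentage < 100
    | table service, percentage, available, total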
My Splunk alerts are configured to send an e-mail when triggered. How do I make sure that Splunk only sends one e-mail per violation? It seems to send multiple e-mails for the same violation every time. The settings are as follows:

Run on cron schedule
Time range: -24h
Cron: 42 * * * *
Trigger when number of results is > 0
Trigger: Once
Throttle: 60s
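A sketch of what may be going on: with a -24h time range on an hourly cron, the same violating events keep matching for up to 24 runs, while the 60-second throttle expires long before the next run. Raising the suppression period to cover the look-back, and suppressing per result field, should collapse this to one e-mail per violation. In savedsearches.conf (the field name host is a placeholder for whatever identifies a violation in your results):

    alert.suppress = 1
    alert.suppress.period = 24h
    alert.suppress.fields = host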
Hi everyone, I just wanted to do a quick search on requested URLs in Splunk but cannot get the directory traversal string (../../../../ or similar) to stick; it gets stripped from the query. I've tried using quotes, and it seems escaping shouldn't be necessary. Any suggestions? Thanks
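A sketch of a workaround: the . and / characters are segmenters, so the literal ../../ sequence doesn't survive as a searchable term on its own; filtering with the regex command after a broader search usually works (index and field names here are placeholders):

    index=web sourcetype=access_combined
    | regex uri="(\.\./){2,}"

TERM() usually doesn't help either, because the traversal sequence is embedded inside a longer URL token rather than standing alone between major breakers.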
Hi Team. I have an alert with a throttle value defined, for example 4 hours. If the alert fires at 4 am, subsequent alerts are suppressed until 8 am. However, I need the alert to fire at 6 am if the alert condition is met, regardless of whether we are inside the throttle period. The reason is that working hours start at 6 am; we have a hotline active and need to make sure that everything is up and running, and the hotline does not know about the alert generated at 4 am.
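One sketch (there may be cleaner ways): keep the throttled alert as-is and clone it as a second saved search that runs only at 06:00 with no suppression, so the hotline always gets a fresh status at the start of business. In savedsearches.conf, with a hypothetical stanza name:

    [My Alert - 6am business check]
    cron_schedule = 0 6 * * *
    alert.suppress = 0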
Hi All, I have the sample events below in my log data, in UTC format, and I want Splunk to show the event time in AEST. I assumed Splunk would convert to AEST automatically, since the cloud we use is for an Australian project/region.

My sample data looks like this, in UTC format:

2021-08-11T01:16:25.373937Z I-6083-EP S< : icexsTrace-icexs5-20210811-1116-037.trc64:0000298 | X 8 NRRS202108111116250196534269 N ack_nak_response=ack
2021-08-11T01:16:25.381943Z I-6016-EP R> : icexsTrace-icexs5-20210811-1116-037.trc64:0000314 | 8 MH18000000000000000731127354 P AMQ LUXP112 , ` * MHS18P1 020420210811111624901010P1-001SW10.15.35.81 516fc0b3f6cd49abac2247601381e9c8 EPAG CTBA00 CANONICAL CODE 736062787787
2021-08-11T01:16:25.381991Z E-6016-EP S> : icexsTrace-icexs5-20210811-1116-037.trc64:0000323 | _ *SAMPL1* SW051001 MHS18P1 SWLP1 ZP11SIV HXU4P73A MHS18P1 020420210811111624901010P1-001SW10.15.35.81 516fc0b3f6cd49abac2247601381e9c8 EPAG CTBA00 CANONICAL CODE 736062787787
2021-08-11T01:16:25.422824Z E-6016-EP R< : icexsTrace-icexs5-20210811-1116-037.trc64:0000392 | ' MHS18P1 020420210811111624901010P1-001SW10.15.35.81 516fc0b3f6cd49abac2247601381e9c8 EPAG CTBA00 00CANONICAL CODE 736062787787 001000000000879575CR000000000879575CRAUD00000000000000000000000000000013d46777ec304eadb673f30ed0487f99 *CSMOKY*
2021-08-11T01:16:25.423000Z I-6016-EP S< : icexsTrace-icexs5-20210811-1116-037.trc64:0000399 | 8 MH18000000000000000731127354 MHS18P1 020420210811111624901010P1-001SW10.15.35.81 516fc0b3f6cd49abac2247601381e9c8 EPAG CTBA00 00CANONICAL CODE 736062787787 001000000000879575CR000000000879575CRAUD00000000000000000000000000000013d46777ec304eadb673f30ed0487f99
2021-08-11T01:16:25.428780Z E-6053-EP R< : icexsTrace-icexs5-20210811-1116-037.trc64:0000419 | <BusMsg> <AppHdr xmlns="urn:iso:std:iso:20022:tech:xsd:head.001.001.01"> <Fr> <FIId> <FinInstnId> <BICFI>RSBKAUFSXXX</BICFI> </FinInstnId> </FIId> </Fr> <To> <FIId> <FinInstnId> <BICFI>WPACAU2SXXX</BICFI> </FinInstnId> </FIId> </To> <BizMsgIdr>RSBKAUFSXXX20210811000116253109041</BizMsgIdr> <MsgDefIdr>pacs.002.001.06</MsgDefIdr> <BizSvc>npp.stlmnt.01-sct.04</BizSvc> <CreDt>2021-08-11T01:16:25.310Z</CreDt> <Prty>NORM</Prty> </AppHdr> <Document xmlns="urn:iso:std:iso:20022:tech:xsd:pacs.002.001.06"> <FIToFIPmtStsRpt> <GrpHdr> <MsgId>RSBKAUFSXXX20210811000116253109041</MsgId> <CreDtTm>2021-08-11T01:16:25.310Z</CreDtTm> <InstgAgt> <FinInstnId> <BICFI>RSBKAUFSXXX</

Each line represents an event in my log, so I have defined the following sourcetype settings:

    [ <SOURCETYPE NAME> ]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    NO_BINARY_CHECK = true
    CHARSET = AUTO
    disabled = false

But I still see event timestamps in UTC format in Splunk. How would I change them to the AEST timezone? Could you please help with the settings?
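A sketch of the relevant points, assuming the leading timestamps above start each event: the Z suffix means the events parse as UTC, and Splunk stores _time as an epoch value that has no timezone at all. Making the parsing explicit in props.conf would look roughly like this:

    [ <SOURCETYPE NAME> ]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
    TZ = UTC
    MAX_TIMESTAMP_LOOKAHEAD = 30

The AEST rendering is then a display concern: Splunk Web formats _time in each user's configured timezone, so setting the user preference (or the default user timezone) to Australia/Sydney should show the events in AEST/AEDT without any re-indexing.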
Hi, I am doing Okta SAML integration with Phantom and getting the error below:

    SAML2 Authentication Error: 'NoneType' object has no attribute 'require_signature'

Is there an option to change AuthnRequestsSigned="true" to false? What is its location? Any other suggestions? I have tried disabling the signing on both Okta and Phantom.
Hi, I am trying to return the values that DO NOT MATCH between an index and a .csv file. For example, this returns the values that are good, but I don't want to see those:

    index=myindex TAGGING="*Agent*"
    | dedup DNS
    | join type=inner DNS [ | inputlookup linuxhostnames.csv | rename hostname as DNS ]

Whereas I tried the following, which takes slightly longer to return results but also returns only the matching values instead of the NOT MATCHING ones:

    | inputlookup linuxhostnames.csv
    | rename hostname as DNS
    | search NOT [ search index=myindex | fields DNS | format ]

I will appreciate some guidance here. Thank you.
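A sketch covering both directions, assuming linuxhostnames.csv is available as a lookup (via a lookup definition of the same name). Hosts in the index that are missing from the CSV:

    index=myindex TAGGING="*Agent*"
    | dedup DNS
    | lookup linuxhostnames.csv hostname AS DNS OUTPUT hostname AS in_csv
    | where isnull(in_csv)

CSV hosts that never appear in the index:

    | inputlookup linuxhostnames.csv
    | rename hostname as DNS
    | search NOT [ search index=myindex TAGGING="*Agent*" | dedup DNS | fields DNS ]

The dedup and fields in the subsearch keep its result set small, which should also help the runtime you noticed.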
I have this SPL:

    index="_internal" fwdType=uf | dedup hostname | table hostname

I want to create a macro called uf. I have the macro created like this: [screenshot omitted] I want to be able to just execute this macro in a search, but the results don't look the same as when I execute the full command. What am I doing wrong?
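A sketch of what the definition should look like in macros.conf (equivalently, Settings > Advanced search > Search macros), plus the invocation; the most common gotcha is that macros must be called wrapped in backticks:

    [uf]
    definition = index="_internal" fwdType=uf | dedup hostname | table hostname

and then in the search bar:

    `uf`

If the macro is saved in a different app or with private sharing, permissions can also make it resolve differently than the inline search.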
I am trying to make a timeline showing the different response-code ranges I have defined. This is the eval I am using, and I want to add the four categories to a timeline dashboard panel:

    index="stuff" sourcetype="things" src_ip="1.1.1.1" dest_ip="2.2.2.2"
    | search TERM(attack_vector)
    | eval Status = case(response_code>="400" OR response_code="0", "Blocked", response_code>="202" AND response_code<="226", "Partial", response_code>="300" AND response_code<="399", "Redirect", response_code="200" OR response_code="201", "Success")

I cannot for the life of me figure out what I need to put in the stats and table portion to make it show a line for each of the created categories!
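A sketch that replaces the final stats/table with timechart, so each Status value becomes its own series (the span is an assumption, adjust to taste); rendered as a line chart, this draws one line per category:

    index="stuff" sourcetype="things" src_ip="1.1.1.1" dest_ip="2.2.2.2"
    | search TERM(attack_vector)
    | eval Status = case(response_code>="400" OR response_code="0", "Blocked", response_code>="202" AND response_code<="226", "Partial", response_code>="300" AND response_code<="399", "Redirect", response_code="200" OR response_code="201", "Success")
    | timechart span=15m count by Status

One caveat: the quoted comparisons in the case() are string comparisons; if response_code is numeric, wrapping it in tonumber() and comparing against unquoted numbers is safer.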
We have a Splunk instance that keeps copies of Jira tickets as they change over time. Any time there is a change to a ticket, we journal most of the JSON object into Splunk as an event. Our index is getting large, and I think it is affecting performance (nearly 1,000,000 events and 1 GB). For each ticket id, I want to delete all but one of the events older than 6 months (keep the youngest event that is more than 6 months old).

    index=jira latest=-6mon | dedup key

(Gets the list of keys with events that can be deleted.)

For each key, delete all but one of the events older than 6 months (e.g. KEY-75):

    index=jira latest=-6mon key="KEY-75" | streamstats count as result | where result > 1 | delete

Error in 'delete' command: This command cannot be invoked after the command 'simpleresultcombiner', which is not distributable streaming. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

    index=jira latest=-6mon key="KEY-75" | sort - _time | streamstats count as result | where result > 1 | delete

Error in 'delete' command: This command cannot be invoked after the command 'sort', which is not distributable streaming. The search job has failed due to an error. You may be able to view the job in the Job Inspector.
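A possible workaround, sketched and very much untested (please dry-run it without | delete on a test key first): since delete only accepts streaming predecessors, push the "keep the youngest" logic into a subsearch that returns the _time of the event to keep, and exclude that event in the base search.

    index=jira latest=-6mon key="KEY-75" NOT [ search index=jira latest=-6mon key="KEY-75" | head 1 | fields _time ]
    | delete

head 1 relies on the default reverse-time order to pick the youngest qualifying event; if several events share that exact _time they would all be kept. Also note that delete only masks events from search results, it does not reclaim disk space, so it won't shrink the index on its own.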
I have an index which contains data from many log files. I want to search for specific data in log1 and display it with a field count from log2. Log1 has the URL data (sites, page loads, etc.) and log2 has the username, with X_Forwarded_For as a common field. My search is below. I want to show a count of all URL clicks in log1 broken down by the username in log2. So far I have this:

    index=iis-prod host=hostname source="logfile1" cs_uri_stem="*.aspx" NOT(/_layouts/*.aspx) NOT(/_forms/*.aspx) NOT(/_login/*.aspx) NOT X_Forwarded_For=10.*
    | stats count by X_Forwarded_For
    | sort - count

Instead of displaying X_Forwarded_For from logfile1, I want to display the count with the username from logfile2. I'm sure I'm making this more complicated than it needs to be; I just can't get it cleared up.
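A sketch of the usual join-free pattern: search both sources at once and let stats stitch them together on the shared X_Forwarded_For field (logfile2's source name and its username field are assumptions):

    index=iis-prod host=hostname ((source="logfile1" cs_uri_stem="*.aspx" NOT(/_layouts/*.aspx) NOT(/_forms/*.aspx) NOT(/_login/*.aspx) NOT X_Forwarded_For=10.*) OR source="logfile2")
    | stats count(eval(source="logfile1")) as clicks, values(username) as username by X_Forwarded_For
    | stats sum(clicks) as clicks by username
    | sort - clicks

The first stats counts only logfile1 clicks per X_Forwarded_For while carrying the username from logfile2; the second rolls the counts up by username.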
Hello, how can I write the TIME_PREFIX in the props.conf file for the following sample events? Any help will be highly appreciated. Thank you.

    INFORMATION:Metadata Deployment process started at Tue Jun 16 11:51:47 EDT 2020.
    INFORMATION:Metadata Deployment process ended at Tue Jun 16 11:51:48 EDT 2020.
    INFORMATION:Metadata Deployment process ended at Tue Jun 16 11:51:49 EDT 2020
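A sketch that should match the events above (the stanza name is a placeholder): the timestamp follows the literal text "started at " or "ended at ", and the format is day-name month day time zone year.

    [your_sourcetype]
    TIME_PREFIX = process\s+(?:started|ended)\s+at\s+
    TIME_FORMAT = %a %b %d %H:%M:%S %Z %Y
    MAX_TIMESTAMP_LOOKAHEAD = 40

Note that %Z parses the EDT zone name; if your Splunk version handles named zones inconsistently, an explicit TZ setting on the sourcetype is the fallback.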