All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


My Splunk alerts are configured to send an e-mail when triggered. How do I make sure that Splunk sends only one e-mail per violation? It seems to send multiple e-mails every time for the same violation.

Settings are as follows:
Run on cron schedule, time range: -24h
cron: 42 * * * *
Trigger when number of results is > 0
Trigger: Once
Throttle: 60s
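A likely cause, as a sketch (the stanza name and suppression field are placeholders): with the search running hourly over a -24h window, each run re-matches the same violation, and a 60s throttle expires long before the next run. Extending alert.suppress.period beyond the schedule interval prevents the repeats:

```
# savedsearches.conf -- sketch; stanza name and field are placeholders
[My Alert]
alert.track = 1
alert.suppress = 1
# suppress longer than the schedule interval (24h to match the -24h window)
alert.suppress.period = 24h
# optionally key suppression on a field so distinct violations still alert
alert.suppress.fields = host
```

With alert.suppress.fields set, a new host triggering the condition still alerts immediately while repeats for the same host stay suppressed.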
Hi everyone, I wanted to do a quick search on requested URLs in Splunk but cannot get the directory traversal string (../../../../ or similar) to stick; it gets stripped from the query. I've tried using quotes, and escaping doesn't seem to be necessary. Any suggestions? Thanks
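Splunk's segmenters treat dots and slashes as breakers, so `../` never survives as a searchable term on its own. One common workaround (index and sourcetype names here are placeholders) is to filter on the raw event with regex after the narrowest possible base search:

```
index=web sourcetype=access_combined
| regex _raw="(\.\./){2,}"
```

Since regex runs on every event returned by the base search, add whatever other constraints you can (host, time range, a literal term) before the pipe.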
Hi Team. I have an alert with a throttle value defined, for example 4 hours. If the alert is generated at 4 am, subsequent alerts are suppressed until 8 am. However, I need to generate the alert at 6 am if the alert condition is met, regardless of whether we are in the throttle period. The reason is that working hours start at 6 am; we have a hotline active and need to make sure that everything is up and running. The hotline does not know about the alert generated at 4 am.
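Splunk has no built-in "break the throttle at a set time". One workaround, assuming a second scheduled copy of the search is acceptable: keep the throttled alert as-is and add an unthrottled clone that runs once at 06:00, so the hotline always gets a morning notification if the condition still holds. A savedsearches.conf sketch (stanza names and the 15-minute schedule are placeholders):

```
# savedsearches.conf -- sketch
[Service Alert]
cron_schedule = */15 * * * *
alert.suppress = 1
alert.suppress.period = 4h

# unthrottled clone fired once at the start of working hours
[Service Alert - 0600 check]
cron_schedule = 0 6 * * *
alert.suppress = 0
```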
Hi All, I have the sample events below in my log data, in UTC format, and I want Splunk to change the event time to AEST. I assumed Splunk would convert to AEST automatically, since the cloud we use is for an Australian project/region.

My sample data looks like this, in UTC format (each line represents one event in my log):

2021-08-11T01:16:25.373937Z I-6083-EP S< : icexsTrace-icexs5-20210811-1116-037.trc64:0000298 | X 8 NRRS202108111116250196534269 N ack_nak_response=ack
2021-08-11T01:16:25.381943Z I-6016-EP R> : icexsTrace-icexs5-20210811-1116-037.trc64:0000314 | 8 MH18000000000000000731127354 P AMQ LUXP112 , ` * MHS18P1 020420210811111624901010P1-001SW10.15.35.81 516fc0b3f6cd49abac2247601381e9c8 EPAG CTBA00 CANONICAL CODE 736062787787
2021-08-11T01:16:25.381991Z E-6016-EP S> : icexsTrace-icexs5-20210811-1116-037.trc64:0000323 | _ *SAMPL1* SW051001 MHS18P1 SWLP1 ZP11SIV HXU4P73A MHS18P1 020420210811111624901010P1-001SW10.15.35.81 516fc0b3f6cd49abac2247601381e9c8 EPAG CTBA00 CANONICAL CODE 736062787787
2021-08-11T01:16:25.422824Z E-6016-EP R< : icexsTrace-icexs5-20210811-1116-037.trc64:0000392 | ' MHS18P1 020420210811111624901010P1-001SW10.15.35.81 516fc0b3f6cd49abac2247601381e9c8 EPAG CTBA00 00CANONICAL CODE 736062787787 001000000000879575CR000000000879575CRAUD00000000000000000000000000000013d46777ec304eadb673f30ed0487f99 *CSMOKY*
2021-08-11T01:16:25.423000Z I-6016-EP S< : icexsTrace-icexs5-20210811-1116-037.trc64:0000399 | 8 MH18000000000000000731127354 MHS18P1 020420210811111624901010P1-001SW10.15.35.81 516fc0b3f6cd49abac2247601381e9c8 EPAG CTBA00 00CANONICAL CODE 736062787787 001000000000879575CR000000000879575CRAUD00000000000000000000000000000013d46777ec304eadb673f30ed0487f99
2021-08-11T01:16:25.428780Z E-6053-EP R< : icexsTrace-icexs5-20210811-1116-037.trc64:0000419 | <BusMsg> <AppHdr xmlns="urn:iso:std:iso:20022:tech:xsd:head.001.001.01"> <Fr> <FIId> <FinInstnId> <BICFI>RSBKAUFSXXX</BICFI> </FinInstnId> </FIId> </Fr> <To> <FIId> <FinInstnId> <BICFI>WPACAU2SXXX</BICFI> </FinInstnId> </FIId> </To> <BizMsgIdr>RSBKAUFSXXX20210811000116253109041</BizMsgIdr> <MsgDefIdr>pacs.002.001.06</MsgDefIdr> <BizSvc>npp.stlmnt.01-sct.04</BizSvc> <CreDt>2021-08-11T01:16:25.310Z</CreDt> <Prty>NORM</Prty> </AppHdr> <Document xmlns="urn:iso:std:iso:20022:tech:xsd:pacs.002.001.06"> <FIToFIPmtStsRpt> <GrpHdr> <MsgId>RSBKAUFSXXX20210811000116253109041</MsgId> <CreDtTm>2021-08-11T01:16:25.310Z</CreDtTm> <InstgAgt> <FinInstnId> <BICFI>RSBKAUFSXXX</

So I have defined the sourcetype settings below:

[ <SOURCETYPE NAME> ]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=AUTO
disabled=false

But I still see event timestamps in UTC in Splunk. How would I change them to the AEST timezone for events? Could you please help with the settings?
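One thing worth noting: these timestamps carry an explicit Z (UTC) designator, so Splunk already parses _time into the correct epoch value; the TZ props setting only applies to events whose timestamps carry no offset. What controls the displayed time is the viewing user's timezone preference (Settings > Your account > Time zone), so setting that to an Australian zone shows AEST without re-indexing anything. If you also want explicit timestamp parsing rather than auto-detection, a props.conf sketch (sourcetype name is a placeholder):

```
# props.conf on the instance that parses this data -- sketch
[<SOURCETYPE NAME>]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
MAX_TIMESTAMP_LOOKAHEAD = 30
# TZ = ... would only take effect for events with no offset in the timestamp
```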
Hi, I am doing Okta SAML integration with Phantom and getting the error below:

SAML2 Authentication Error: 'NoneType' object has no attribute 'require_signature'

Is there an option to change AuthnRequestsSigned="true" to false? What is the location? Or any other suggestions? I have tried disabling signing on both Okta and Phantom.
Hi, I am trying to return the values that DO NOT MATCH between an index and a .csv file.

For example, this returns the values that are good, but I don't want to see these:

index=myindex TAGGING="*Agent*" | dedup DNS | join type=inner DNS [ | inputlookup linuxhostnames.csv | rename hostname as DNS]

I also tried the following; it takes slightly longer to return results, but it still returns only the matching values instead of the NOT matching ones:

| inputlookup linuxhostnames.csv | rename hostname as DNS | search NOT [search index=myindex | fields DNS | format ]

I would appreciate some guidance here. Thank you.
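One join-free pattern, assuming linuxhostnames.csv is available as a lookup table file with a hostname column: use the file as a lookup and keep only events whose DNS has no match, so hosts in the index but absent from the file survive the isnull() filter:

```
index=myindex TAGGING="*Agent*"
| stats count BY DNS
| lookup linuxhostnames.csv hostname AS DNS OUTPUT hostname AS matched
| where isnull(matched)
```

For the reverse direction (csv rows with no events), the same idea works starting from `| inputlookup linuxhostnames.csv` and excluding the values returned by a `stats values(DNS)` subsearch over the index.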
I have this SPL:

index="_internal" fwdType=uf | dedup hostname | table hostname

I want to create a macro called uf. I have the macro created like this:

I want to be able to just execute this macro in Search, but the result doesn't look the same as when I execute the full command. What am I doing wrong?
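One common gotcha: macros are invoked wrapped in backticks, and a macro that expands to a full generating search must stand alone as the first segment of the pipeline, with nothing before it. A macros.conf sketch of what the definition would look like (equivalent to the Definition box under Settings > Advanced search > Search macros):

```
# macros.conf -- sketch
[uf]
definition = index="_internal" fwdType=uf | dedup hostname | table hostname
```

In the search bar you would then run the macro name surrounded by backticks, by itself. If the macro is typed without backticks, Splunk treats "uf" as a literal search term, which would explain the different results.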
I am trying to make a timeline showing different response code ranges. This is the eval I am using, and I want to add the 4 different categories to a timeline dashboard panel.

index="stuff" sourcetype="things" src_ip="1.1.1.1" dest_ip="2.2.2.2" | search TERM(attack_vector) | eval Status = case(response_code>="400" OR response_code="0", "Blocked", response_code>="202" AND response_code<="226", "Partial", response_code>="300" AND response_code<="399", "Redirect", response_code="200" OR response_code="201", "Success")

I cannot for the life of me figure out what I need to put in the "stats" and "table" portion to make it show a line for each of the created categories!
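A timechart split by the eval'd field produces one line per category; no separate stats/table step is needed for a line chart panel. Note also that the quoted comparisons (`response_code>="400"`) are string comparisons, which can misbucket codes; unquoted numeric comparisons are safer if response_code is numeric:

```
index="stuff" sourcetype="things" src_ip="1.1.1.1" dest_ip="2.2.2.2"
| search TERM(attack_vector)
| eval Status = case(response_code>=400 OR response_code=0, "Blocked",
                     response_code>=202 AND response_code<=226, "Partial",
                     response_code>=300 AND response_code<=399, "Redirect",
                     response_code=200 OR response_code=201, "Success")
| timechart span=15m count BY Status
```

In the dashboard panel, set the visualization to Line Chart; each Status value becomes its own series.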
We have a Splunk instance that keeps copies of Jira tickets which have changed over time. Any time there is a change to a ticket, we journal most of the JSON object into Splunk as an event. Our index is getting large, and I think it is affecting performance (nearly 1,000,000 events and 1 GB). For each ticket id, I want to delete all but one event older than 6 months (keep the youngest event that is > 6 months old).

index=jira latest=-6mon | dedup key
(gets the list of keys with events that can be deleted)

For each key, delete all but one of the events > 6 months old (e.g. KEY-75):

index=jira latest=-6mon key="KEY-75" | streamstats count as result | where result > 1 | delete

Error in 'delete' command: This command cannot be invoked after the command 'simpleresultcombiner', which is not distributable streaming. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

index=jira latest=-6mon key="KEY-75" | sort - _time | streamstats count as result | where result > 1 | delete

Error in 'delete' command: This command cannot be invoked after the command 'sort', which is not distributable streaming. The search job has failed due to an error. You may be able to view the job in the Job Inspector.
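delete only accepts distributable streaming predecessors, so anything that must see all results centrally (sort, streamstats across indexers) breaks it. One untested workaround sketch: identify the single event to keep in a subsearch and exclude it by timestamp, leaving only streaming operations ahead of delete:

```
index=jira latest=-6mon key="KEY-75"
    NOT [ search index=jira latest=-6mon key="KEY-75"
          | head 1
          | fields _time ]
| delete
```

head 1 keeps the most recent event in the window (search results arrive newest first), and the subsearch renders as a `_time=<epoch>` term that the outer NOT excludes. Be aware that delete only masks events from search results; it does not reclaim disk space, so the index size and performance concern is ultimately addressed by retention settings (frozenTimePeriodInSecs / maxTotalDataSizeMB) rather than delete.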
I have an index which contains data from many log files. I want to search for specific data in log1 and display it with a field count from log2. Log1 has the URL data (sites, page loads, etc.) and log2 has the username, with a common field X_Forwarded_For. My search is below; I want to show a count, based on the username in log2, of all URL clicks in log1. So far I have this:

index=iis-prod host=hostname source="logfile1" cs_uri_stem="*.aspx" NOT(/_layouts/*.aspx) NOT(/_forms/*.aspx) NOT(/_login/*.aspx) NOT X_Forwarded_For=10.* | stats count by X_Forwarded_For | sort - count

Instead of displaying X_Forwarded_For from logfile1, I want to display the count with the username from logfile2. I'm sure I'm making this more complicated than it needs to be; I just can't get it cleared up.
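A join-free pattern for this: search both source files at once, aggregate by the shared X_Forwarded_For, and count only the log1 events. The username field name for logfile2 is an assumption here:

```
index=iis-prod host=hostname
    ((source="logfile1" cs_uri_stem="*.aspx") OR source="logfile2")
    NOT X_Forwarded_For=10.*
| stats count(eval(source="logfile1")) AS url_clicks
        values(username) AS username
        BY X_Forwarded_For
| where url_clicks > 0
| sort - url_clicks
```

This avoids join's subsearch limits and keeps all the work in one pass; the cs_uri_stem exclusions from the original search can be added back to the logfile1 clause.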
Hello, how can I write TIME_PREFIX in a props.conf file for the following sample events? Any help will be highly appreciated. Thank you.

INFORMATION:Metadata Deployment process started at Tue Jun 16 11:51:47 EDT 2020.
INFORMATION:Metadata Deployment process ended at Tue Jun 16 11:51:48 EDT 2020.
INFORMATION:Metadata Deployment process ended at Tue Jun 16 11:51:49 EDT 2020
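TIME_PREFIX is a regex matched up to the point where the timestamp begins, so anchoring on the "started/ended at" text covers both variants. A props.conf sketch (the stanza name is a placeholder):

```
# props.conf -- sketch
[your_sourcetype]
TIME_PREFIX = process\s+(?:started|ended)\s+at\s+
TIME_FORMAT = %a %b %d %H:%M:%S %Z %Y
MAX_TIMESTAMP_LOOKAHEAD = 30
```

MAX_TIMESTAMP_LOOKAHEAD limits how far past the prefix Splunk reads; 30 characters comfortably covers "Tue Jun 16 11:51:47 EDT 2020".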
Hello, I have a source file whose events use 2 different timestamp formats. How would I write TIME_FORMAT for my props configuration file? Any help will be highly appreciated. Thank you.

2021/05/30 16:28:12    JAVA_OPTION_USESERVERCIPHERSUITESORDER:-DuseServerCipherSuitesOrder=true
2021/05/30 16:28:12    JAVA_OPTION_XMX:-Xmx196m
2021-05-30 16:28:27.872  INFO 28709 --- [           main] .c.r.b.i.e.ConsulBootstrapPropertyLoader
2021-05-30 16:28:43.677  INFO 28709 --- [           main] com.sas.studio.ApplicationPetrichor
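A single TIME_FORMAT string cannot match both layouts. Two common options: leave TIME_FORMAT unset so Splunk's automatic timestamp recognition (datetime.xml) handles both patterns, or route the two line styles into separate sourcetypes. A minimal props.conf sketch for the first option (stanza name is a placeholder):

```
# props.conf -- sketch
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30
# no TIME_FORMAT: automatic recognition matches both
# "2021/05/30 16:28:12" and "2021-05-30 16:28:27.872"
```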
Is there any difference between placing an `etc/passwd` file directly and using an `etc/system/local/user-seed.conf` during a scripted install? As I understand it, `user-seed.conf` produces an `etc/passwd` if the latter doesn't exist, and then `user-seed.conf` itself is removed?
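That matches the documented behavior: on first startup, splunkd reads user-seed.conf, writes the corresponding entry into etc/passwd, and deletes the seed file. Pre-staging etc/passwd directly also works, but then you must produce a correctly hashed entry yourself, whereas the seed file accepts a hash generated by the CLI. A sketch of the seed approach (the hash placeholder must be filled in before first start):

```
# $SPLUNK_HOME/etc/system/local/user-seed.conf -- consumed on first startup
[user_info]
USERNAME = admin
HASHED_PASSWORD = <output of: splunk hash-passwd 'yourpassword'>
```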
Example: a series of events all have the same incident number (1170820), outlining the lifecycle of the ticket from open to close. I want to be able to create a field for when the incident was closed, so that the time can be easily identified on a dashboard (Time | Closed Time | Computer Name | etc.). How can I isolate a sub-search to track the latest ticket in a series of incidents so that I can set a field for "Closed Time"?
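Rather than a subsearch, stats per incident number can pull the close event's time directly. Field and value names here (status, "Closed", computer_name) are assumptions about your data:

```
index=tickets
| stats earliest(_time) AS opened
        max(eval(if(status="Closed", _time, null()))) AS closed
        latest(computer_name) AS "Computer Name"
        BY incident_number
| eval "Closed Time" = strftime(closed, "%Y-%m-%d %H:%M:%S")
| table incident_number "Closed Time" "Computer Name"
```

The `max(eval(...))` idiom takes the timestamp only from events whose status marks closure, so open-only incidents show an empty Closed Time.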
I am getting an error after loading the Splunk forwarder on a Linux server (this same load is on other Linux servers with no issues):

Aug 12 12:04:18 ladcivrnvpt03 kernel: traps: splunkd[66184] trap invalid opcode ip:7fd6a257470b sp:7ffeb4a673a0 error:0 in liboneagentproc.so[7fd6a2560000+84000]

I run ./splunk start --accept-license and it appears to start, but ps -eaf | grep splunk shows nothing running. No log files are generated in .../splunkforwarder/var/log/splunk for the splunkd process either.
Hello, I was trying to write a props configuration file for the following sample events:

2021-06-08T13:26:53.665000-04:00|PGM|mtb1120ppcdwap6|vggtb|26462|
2021-06-08T13:26:54.478000-04:00|PGM|mtb1120ppcdwap6|vggtb|26462|

Since it is pipe ("|") delimited, here is what I wrote, but it is not working. Any help will be highly appreciated. Thank you so much.

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = psv
TIME_FORMAT = %Y%m%d %H:%M:%S:%Q
TIMESTAMP_FIELDS = TIMESTAMP
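Two things stand out in that stanza: the TIME_FORMAT string doesn't match the actual ISO-8601 timestamps, and TIMESTAMP_FIELDS = TIMESTAMP only works if a field with that name exists, which for a headerless file means declaring FIELD_NAMES. A sketch (the field names after TIMESTAMP are placeholders, since the sample doesn't say what the columns mean):

```
# props.conf -- sketch
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = psv
FIELD_NAMES = TIMESTAMP,APP,HOST,USER,PID
TIMESTAMP_FIELDS = TIMESTAMP
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
```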
Hello, what would be my TIME_FORMAT in the props configuration file for these events?

2021-06-08T13:26:53.665000-04:00|PGM|mtb1120ppcdwap6|vggtb|26462|
2021-06-08T13:26:54.478000-04:00|PGM|mtb1120ppcdwap6|vggtb|26462|

I wrote this, but it does not cover the entire range:

TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%f%z

Any help will be highly appreciated. Thank you so much.
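The format is almost right; the snag is %f, which is a Python strptime code rather than a Splunk one. Splunk expresses subseconds with %3N/%6N/%9N (or %Q for milliseconds), and %z handles the -04:00 offset. A sketch:

```
# props.conf -- sketch
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 35
```

%6N matches the six subsecond digits in 13:26:53.665000; MAX_TIMESTAMP_LOOKAHEAD stops Splunk scanning past the timestamp into the pipe-delimited fields.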
Hi, I have seen that when I do a POST request to "https://splunk_host:8088/services/collector/event" with validate_cert=False, my application successfully sends the data to Splunk. Whereas when I try with validate_cert=True, I get errors like "self signed certificate error" or:

Cannot connect to host localhost:8088 ssl:default [[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)]

What should I do to not get this error?
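By default, HEC presents Splunk's self-signed certificate, which no client trusts, so strict validation fails. Two remedies: configure HEC with a certificate signed by a CA your client already trusts, or have the client validate against the CA that signed the Splunk certificate instead of disabling validation. A sketch of the server side (paths and password are placeholders):

```
# inputs.conf on the instance hosting HEC -- sketch
[http]
enableSSL = 1
serverCert = /opt/splunk/etc/auth/mycerts/hec_server_cert.pem
sslPassword = <private key password>
```

On the client side, instead of validate_cert=False, point the HTTP library's CA option (e.g. a verify/ca_certs parameter) at the CA certificate in PEM format; the hostname in the URL must also match the certificate's subject (the "localhost" in the error suggests it currently does not).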
We use Cribl for field extraction. `Action` is a field that is parsed by Cribl, and it should be an indexed field in Splunk.

I did an initial search with the query "index=client* sourcetype=unix_auth". This returns 6 failures in the last 4 hours.
When I use the search "index=client* sourcetype=unix_auth action=fail*", it returns all 6 failed events.
When I then change the search to "index=client* sourcetype=unix_auth action=failure", it does not return any events.
But when I use " :: " in the search "index=client* sourcetype=unix_auth action::failure", it returns all the events.

Sample event:
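This is the classic symptom of an index-time field the search head doesn't know about: `field=value` is optimized into a raw-term search plus a search-time extraction check, so the exact match misses, while `field::value` queries the indexed field directly (and the wildcard form defeats the optimization, which is why `action=fail*` works). The documented fix is to declare the field indexed in fields.conf on the search heads:

```
# fields.conf on the search head(s)
[action]
INDEXED = true
```

After that, `action=failure` behaves the same as `action::failure`.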
I have data in Splunk (index="main") like this:

Fname   Country
fname1  USA
fname1  USA
fname3  USA

I want to change some data where Fname="fname1": set Country = UK and add a field Phone = 123.

The final data will be:

Fname   Phone   Country
fname1  123     UK
fname1  123     UK
fname3          USA

How can I do that?
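Indexed events cannot be edited in place; changes like this are applied at search time. A minimal sketch with eval (for many such overrides, a lookup table mapping Fname to the new values would scale better than hard-coded conditions):

```
index="main"
| eval Country = if(Fname="fname1", "UK", Country),
       Phone   = if(Fname="fname1", "123", null())
| table Fname Phone Country
```

Saving this as a report or wrapping the eval in a macro makes the "corrected" view reusable without touching the stored events.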