All Posts

Hi Team, in my requirement, if any Splunk servers fail, ServiceNow incidents need to be created automatically. How do we write the query, and how do we configure the ServiceNow incidents? Please help me.
Hi Team,

In my project I need to implement High Availability using the servers below.

Z1 --> L4 --> Search Head + Indexer, single instance only (Dev)
              Search Head + Indexer, single instance only (QA)
              Deployment Server (QA)
              Heavy Forwarder (Prod)
              Deployment Server (Prod)

App-related servers:

Z2 --> Heavy Forwarder (Prod, Dev, QA), individual servers
Z3 --> Heavy Forwarder (Prod, Dev, QA), individual servers

The app-related servers above are connected to the Deployment Server and the Search Head + Indexer.
Note: there is no Cluster Master in my project.

Please help guide how to implement High Availability for the above servers.
How can I integrate Splunk with the Vectra NDR solution? What is the full path to a complete integration?
<Summary of Inquiry>
I am working on a Netskope API integration with Splunk Enterprise on Windows Server 2022 Datacenter, and I have an inquiry about trouble related to the integration.

<Content of Inquiry>
To create a Netskope account in Splunk Enterprise, I tried to add one in the attached Configuration screen. However, I received the error message "Error Request failed with status code 500" and was unable to create the Netskope account.

<Troubleshooting So Far>
I have troubleshot the issue and confirmed the settings below while consulting the following sites. Please check.
- I ran "telnet 127.0.0.1 8089" from cmd, but nothing came back.
- "disableDefaultPort=true" has been deleted from "C:\Program Files\Splunk\etc\system\local\server.conf".
- "tools.sessions.timeout=60" is already set in "C:\Program Files\Splunk\etc\system\default\web.conf".
- "mgmtHostPort = 127.0.0.1:8089" is set in "C:\Program Files\Splunk\etc\system\default\web.conf".
- The result of running "netstat -a" is attached. (Only the current communication status of the IP address and port number for the Syslog server is shown.)

"About 500 Internal Server Error"
https://community.splunk.com/t5/Splunk-Enterprise/500-Internal-Server-Error-%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6/m-p/434180
"500 Internal Server Error"
https://community.splunk.com/t5/Security/500-Internal-Server-Error/m-p/477677

<What I Need Help With>
(1) Even though the management port is set and "disableDefaultPort=true" is absent, we think the cause is that 127.0.0.1:8089 is not "Established". What are some possible ways to deal with this?
(2) Are there any other possible causes?
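As a first check on point (1), a small sketch for the Windows cmd prompt (the install path is the default one from the post above; adjust if yours differs):

:: Is anything actually listening on the management port?
netstat -ano | findstr :8089

:: Which config file supplies the effective management setting?
cd "C:\Program Files\Splunk\bin"
splunk btool web list settings --debug | findstr mgmtHostPort

If netstat shows nothing bound to 8089, confirm the splunkd service is running before digging further into server.conf.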
Hi @madhav_dholakia, unfortunately, Splunk Dashboard Studio does not support the full set of token features that Simple XML dashboards do, so I doubt a complex requirement like this can be implemented.

You can try creating the last few months as static options in the dropdown, which may work, and then manually update the dashboard every month.

I hope this helps!!! Kindly upvote if it does!!!
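For what it's worth, a rough, untested sketch of what such a static month dropdown could look like in the Dashboard Studio source JSON (the input ID, token name, labels, and values are hypothetical placeholders):

{
  "inputs": {
    "input_month": {
      "type": "input.dropdown",
      "title": "Month",
      "options": {
        "items": [
          { "label": "November 2023", "value": "2023-11" },
          { "label": "December 2023", "value": "2023-12" },
          { "label": "January 2024", "value": "2024-01" }
        ],
        "token": "month_tok",
        "defaultValue": "2024-01"
      }
    }
  },
  "layout": {
    "globalInputs": ["input_month"]
  }
}

Searches can then reference $month_tok$; the items list is what you would update by hand each month.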
@a_kearney - How many search heads do you have in the cluster? Are any cluster members down, or have any been down in recent incidents? Are you also seeing "Consider a lower value of conf_replication_max_push_count" warning messages in your logs?

Usually "consecutiveErrors=1" isn't bad, but in your situation it happens a lot, which is concerning.

I hope this helps!!! Kindly upvote if it does!!!
This may not be elegant... Try to include an assertion in the exception handling to see if it works:

except Exception as e:
    print("The script threw an exception.")
    assert False, "other exception"

After the job runs, you will see the session fails because of the assertion failure.

regards, Terence
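Putting Terence's assertion together with the try/except from the question, a minimal runnable sketch could look like this (the locator comes from the question; the driver setup and timeout are assumptions):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)  # 10-second timeout is a placeholder

try:
    wait.until(EC.element_to_be_clickable((By.ID, "username"))).click()
except Exception as e:
    print("The script threw an exception.")
    # Fail loudly so the runner does not report the job as "success".
    assert False, f"other exception: {e}"

The assert re-raises as an unhandled AssertionError, so the run is marked failed instead of silently succeeding.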
Hi @krutika_ag ... what @richgalloway said was an excellent answer. For Splunk newbies, let me rephrase it (the URL for your reference: https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/Monitorfilesanddirectories) as follows:

How the forwarder monitors archive files: in order to monitor archived files, forwarders decompress archive files, such as a TAR or ZIP file, prior to processing. Splunk then processes these files in a single-threaded fashion (there are pros and cons, but that is a different topic). The following types of archive files are supported:
TAR
GZ
BZ2
TAR.GZ and TGZ
TBZ and TBZ2
ZIP
Z

If you add new data to an existing archive file, the forwarder reprocesses the entire file rather than just the new data. This can result in event duplication, so, to avoid duplication, you should monitor the whole archive file. If these files are small, then you can monitor the whole archive and the license usage may not be impacted much (the search-time vs. index-time trade-off should be considered clearly and well planned for this task).

One more thing to consider: are you using a UF or an HF, or both, or neither (you may directly upload through the SH GUI, but Splunk Support does not support that deployment model)?

Hope this helped some new Splunkers, thanks.
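To make "monitor the whole archive" concrete, a minimal inputs.conf sketch (the path and index come from the question below and are placeholders for your environment):

[monitor:///home/splunk/*.zip]
disabled = false
index = ziptest

The forwarder decompresses each matching archive and indexes its contents; selecting individual files inside the archive is not supported in the monitor stanza itself.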
Hi, I have a synthetic script that sometimes ends a run as a "broken job". I see in the documentation that this happens because of an unhandled exception. So I added:

try:
    ....
    wait.until(EC.element_to_be_clickable((By.ID, "username"))).click()
except Exception as e:
    print("The script threw an exception.")

But now the script runs, and if the job hits a timeout exception the job status shows as "success", even though I can see in the script output that it printed "The script threw an exception." How do I make it so that if an exception is thrown the script status shows as failed?

Thanks, Roberto
Hi @krutika_ag, let us know if we can help you more, or, please, accept one answer for the other people in the Community.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated by all the contributors
@jbanAtSplunk - What you are trying to achieve is not documented anywhere. I would suggest opening a support ticket with Splunk.

I hope this helps!!! Kindly upvote if it does!!
Splunk cannot monitor a single file within a zip file.  You must monitor the entire zip file or have a script extract the desired file into a monitored location.
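If you go the script route, a minimal Python sketch might look like this (the paths and file name come from the question below; the extracted/ directory is a hypothetical monitored location):

import os
import zipfile
from pathlib import Path

SRC_DIR = Path("/home/splunk")             # where the zip files land
DEST_DIR = Path("/home/splunk/extracted")  # a directory Splunk monitors
DEST_DIR.mkdir(parents=True, exist_ok=True)

for zip_path in SRC_DIR.glob("*.zip"):
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            if os.path.basename(member) == "WalkbackDetails.log":
                # Prefix with the zip name so logs from different zips don't collide.
                target = DEST_DIR / f"{zip_path.stem}_WalkbackDetails.log"
                target.write_bytes(zf.read(member))

Run it on a schedule (cron, for example), then point a plain [monitor:///home/splunk/extracted] stanza at the output directory.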
Hi All,

There are 50 zip files in a folder, and inside those zips there are many other files (log/txt/png), out of which I want to monitor one specific log file.

Below is the stanza I have written, but it is failing to monitor that log file; please suggest.

[monitor:///home/splunk/*.zip:./WalkbackDetails.log]
disabled = false
index = ziptest
This is the query that helped me get the required output.

index=_internal sourcetype=splunkd
| stats count by source, host
| regex source="(?:\/|\x5c)splunkd\.log$"
| rex field=source "(?<installation_path>.*)(?:\/|\x5c)var(?:\/|\x5c)"
Everything ingested by Splunk should have props.conf settings. Start with the "Great 8": LINE_BREAKER, SHOULD_LINEMERGE, TIME_PREFIX, TIME_FORMAT, MAX_TIMESTAMP_LOOKAHEAD, TRUNCATE, EVENT_BREAKER_ENABLE, and EVENT_BREAKER.

Field extraction from events like this is tricky because the field delimiter is also an allowed character within a field. That means using lookahead to determine whether the current character is part of a field name or a field value, and, as it turns out, Splunk is not great with lookahead. Try these settings to see if they work for you.

props.conf:

[mysourcetype]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
TIME_PREFIX=\s\d\s
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3N%Z
TRANSFORMS-extract = tripwire_fields
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

transforms.conf:

[tripwire_fields]
REGEX = (\w+)=(.*?)(?=\s\w+=)
FORMAT = $1::$2
Self-resolved. Splunk 9.1.2 was not compatible with ITSI 4.11.6, so downgrading Splunk from 9.1.2 allowed us to upgrade ITSI.
Most Simplified Explanation

!= is a field expression that returns every event that has a value in the field, where that value does not match the value you specify. Events that do not have a value in the field are not included in the results. For example, if you search for Location!="Calaveras Farms", events that do not have Calaveras Farms as the Location are returned. Events that do not have a Location value are not included in the results.

On the other hand, NOT is an operator that returns every event except the events that contain the value you specify. This includes events that do not have a value in the field. For example, if you search using NOT Location="Calaveras Farms", every event is returned except the events that contain the value "Calaveras Farms". This includes events that do not have a Location value.

Here's an example to illustrate the difference between the two methods. Suppose you have the following events:

ID      Name        Color     Location
101M3   McIntosh    Chestnut  Marin Meadows
104F5   Lyra        Bay
104M6   Rutherford  Dun       Placer Pastures
101F2   Rarity                Marin Meadows
102M7   Dash        Black     Calaveras Farms
102M1   Roan
101F6               Chestnut  Marin Meadows
104F4   Pinkie      Sorrel    Placer Pastures

If you search with Location!="Calaveras Farms", every event that has a value in the Location field, where that value does not match Calaveras Farms, is returned. Events that do not have a value in the Location field are not included in the results. The following events are returned:

ID      Name        Color     Location
101M3   McIntosh    Chestnut  Marin Meadows
104M6   Rutherford  Dun       Placer Pastures
101F2   Rarity                Marin Meadows
101F6               Chestnut  Marin Meadows
104F4   Pinkie      Sorrel    Placer Pastures

If you search with NOT Location="Calaveras Farms", every event is returned except the events that contain the value Calaveras Farms. This includes events that do not have a Location value. The following events are returned:

ID      Name        Color     Location
101M3   McIntosh    Chestnut  Marin Meadows
104F5   Lyra        Bay
104M6   Rutherford  Dun       Placer Pastures
101F2   Rarity                Marin Meadows
102M1   Roan
101F6               Chestnut  Marin Meadows
104F4   Pinkie      Sorrel    Placer Pastures
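A quick way to verify this yourself, assuming a Splunk version where makeresults supports format=csv (8.2 and later): rebuild a few of the sample rows and run both filters. The nullif turns empty CSV cells into true null values so they behave like missing fields:

| makeresults format=csv data="ID,Name,Color,Location
101M3,McIntosh,Chestnut,Marin Meadows
102M7,Dash,Black,Calaveras Farms
104F5,Lyra,Bay,
102M1,Roan,,"
| eval Location=nullif(Location, "")
| search Location!="Calaveras Farms"

This returns only McIntosh; swapping the last line for | search NOT Location="Calaveras Farms" returns McIntosh, Lyra, and Roan.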
Hi @Gomathy.Govindarajan, I recently started using this community a little more regularly, and I see you posted this quite some time back. Were you able to find a solution for your issue? If yes, would you mind posting the solution you applied? It will help me and may help others as well.

Thank you, Mahendra Shetty
I appreciate all the help and apologize for my late response. I am still the low man on the totem pole and have been trying to research this further using the recommendations. The file gets automatically updated periodically with all the new intel we ingest, this one specifically regarding malicious URLs. My higher-up recently suggested a recommendation from a 13-year-old Splunk community post to try to fix this issue (https://community.splunk.com/t5/Splunk-Search/Lookup-table-Limits/m-p/75336). I am not familiar with this recommendation, so I need to look into it. If anyone believes a recommendation from a 13-year-old post is no longer a good idea, please let me know. Thank you very much.
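For context, community posts of that vintage about lookup table limits usually point at the [lookup] stanza in limits.conf; I can't confirm the linked thread's exact advice from here, but a sketch of that kind of change would look like this (the value is purely illustrative):

[lookup]
# Lookup files larger than this many bytes are indexed on disk instead of
# kept in memory. Raising it trades memory for lookup speed; size it to
# your environment. 104857600 = 100 MB, an illustrative figure only.
max_memtable_bytes = 104857600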
I think the addition of a few evals can account for the error line as well. Maybe something like this?

<base_search>
| rex field=_raw "Processing\s+(?<process>[^\-]+)\-"
| rex field=_raw "Person\s+Name\:\s+(?<person_name>[^\,]+)\,"
| sort 0 +_time
| streamstats reset_before="("isnotnull(process)")" values(process) as current_process
| streamstats window=2 first(_raw) as previous_log
| rex field=previous_log "Person\s+Name\:\s+(?<previous_log_person_name>[^\,]+)\,"
| eval checked_person_name=if(match(previous_log, "\-Check\s+for\s+Person\-"), 'person_name', null()),
       status_error_person=if(match(previous_log, "Person\s+Name:\s+") AND match(_raw, "\-error\s+in\s+checking\s+status"), 'previous_log_person_name', null())
| stats min(_time) as _time by current_process, status_error_person
| fields + _time, current_process, status_error_person