Extract the search key (the hex string you have redacted) to a field called keyID, using either split or rex, substituting search_name for whatever the field is called in your data:

| eval keyID=mvindex(split(search_name, " - "), 1)

or

| rex field=search_name "Indicator - (?<keyID>[^\s]+) - "

Then you can use the ITSI API endpoints to tie them to the base searches:

| join type=left keyID
[| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/kpi_base_search report_as=text
| eval value=spath(value,"{}")
| mvexpand value
| eval title = spath(value, "title"), keyID = spath(value, "_key"), frequency = spath(value, "alert_period")
| fields title, keyID, frequency ]
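Assembled end to end, a sketch of the whole search might look like the following (the base search is a placeholder, and search_name is assumed to be the field holding the redacted key):

```spl
<your base search>
| rex field=search_name "Indicator - (?<keyID>[^\s]+) - "
| join type=left keyID
    [| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/kpi_base_search report_as=text
    | eval value=spath(value,"{}")
    | mvexpand value
    | eval title = spath(value, "title"), keyID = spath(value, "_key"), frequency = spath(value, "alert_period")
    | fields title, keyID, frequency ]
```

The left join keeps every row from the base search and enriches matching rows with the KPI base search's title and alert_period.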
Hi Mario, thanks for this, it helped me too. May I know if we can use the Properties to apply this policy only for specific events, or make this policy applicable only for certain DBs in the cluster? If yes, how do we reference the custom DB event fields? Or is it possible to provide conditions on the DB Event Details, which will have the query output?
Hello, currently we have an NFS drive mounted on the /opt/archive directory. The Splunk indexer installation is on Red Hat. We plan to change the remote storage IP address. Current entry in /etc/fstab:

192.168.24.1:/opt /opt/archive nfs vers=4,rw,intr,nosuid 0 0

1. Before unmounting, is it required to stop the rolling of cold buckets to frozen? How do we stop this roll?
2. After mounting the new remote drive for frozen buckets, is there a way to verify that the frozen directory is receiving buckets from cold?
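The fstab change itself is a one-line edit. A hedged sketch of the cutover, under the assumption that Splunk is stopped for the duration so no cold-to-frozen roll hits a dead mount (the new server address 192.168.25.1 is purely a placeholder):

```shell
$SPLUNK_HOME/bin/splunk stop     # assumption: stopping Splunk prevents rolls during the cutover
umount /opt/archive
# Edit /etc/fstab, replacing the old server address with the new one, e.g.:
#   192.168.25.1:/opt  /opt/archive  nfs  vers=4,rw,intr,nosuid  0 0
mount /opt/archive
$SPLUNK_HOME/bin/splunk start
ls -lt /opt/archive | head       # newest entries should appear as cold buckets roll to frozen
```

Watching the directory listing (or its disk usage) grow over time is one simple way to confirm the frozen directory is receiving data from cold.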
Hello, I've been working with AppDynamics for some time now, and I'm looking to enhance our monitoring and analytics capabilities by integrating it with Splunk. I believe this integration can offer a wealth of insights. Has anyone here successfully integrated AppDynamics with Splunk? I'm particularly interested in hearing about any best practices, challenges you've encountered, and the impact it has had on your application monitoring and troubleshooting efforts. Additionally, if anyone has pursued the Splunk Certification or is familiar with the certification process, could you share your experiences and any specific aspects of Splunk that you found especially relevant in the context of AppDynamics integration? I also checked this: https://splunkbase.splunk.com/app/4315#:~:text=StreamWeaver%20makes%20integrating%20your%20AppDynamics,end%20observability%20and%20AIOps%20goals. Thanks in advance!
@lucky regex is short for regular expression. regex101.com and regexbuddy.com (as provided by @bowesmana) are both sites which provide ways of testing regular expressions (regex). In Splunk, the rex and regex commands both use regular expressions (as do other functions in Splunk). Whether you want rex or regex, both of the sites mentioned are useful tools for working out what your particular regex should be. rex - Splunk Documentation regex - Splunk Documentation
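As a small illustration of the difference (field names and patterns here are made up for the example): rex extracts named capture groups into new fields, while regex filters events by whether they match.

```spl
| makeresults
| eval message="user=alice status=404"
| rex field=message "user=(?<user>\w+)\s+status=(?<status>\d+)"
| regex status="^4\d\d$"
| table user status
```

Here rex creates the user and status fields from the message text, and regex then keeps only rows whose status is a 4xx code.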
Hi @nivets, you should save alert results in a summary index using the collect command (https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Collect). Then you can run searches on this summary index. Ciao. Giuseppe
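A minimal sketch of the pattern (the index, sourcetype, and search here are placeholders, not your actual configuration):

```spl
index=main sourcetype=my_app_logs error
| stats count BY host
| collect index=my_summary_index source="nightly_error_rollup"
```

After the scheduled search has run, the stored results can be queried directly with something like index=my_summary_index source="nightly_error_rollup". Note that the target summary index must already exist and the running role must be allowed to write to it.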
Hi @gcusello, we use Cisco Catalyst 9300 and 2960-X switches. I need to scan these devices for vulnerabilities: to know the OS version, the open ports, and other vulnerabilities. Best regards
I have an alert which runs to find a few values, and I need to write the results of the alert to a newly created index.
I have used the Log Event alert action and specified the newly created index as the destination for the alert output. The output is not getting ingested into the new index, but when I tried with main (the default index), the output of the alert was ingested.
The newly created index itself is working: I tried ingesting other data into it manually from files and that succeeded. So what could be the issue that the alert results are not getting ingested into the newly created index?
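One way to narrow this down is to bypass the Log Event action and write the results directly with collect; if this works but the alert action does not, the problem is likely the action's configuration or the owning role's index permissions (the index name below is a placeholder for your new index):

```spl
... your alert search ...
| collect index=my_new_index sourcetype=stash
```

Running this ad hoc from the search bar, then searching index=my_new_index over the same time range, confirms whether search-time writes to the index succeed at all.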
Hi @cedSplunk2023, could you better describe your requirement? The technology used, the interesting fields, the values that identify a detected vulnerability, and the values that identify a resolved vulnerability, supposing that you have already acquired the logs and extracted the fields using the Add-On. Ciao. Giuseppe
Hi gcusello, when I tried to search with the long timestamp, the value is showing, but its format is not correct; also, the value is in encrypted format. Should I check the configuration at the application end as well? index="Index Name" sourcetype=* your_field=*
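If the "long timestamp" is epoch time in seconds, one hedged sketch of rendering it human-readable at search time (your_field is a placeholder for the actual field name) is:

```spl
index="Index Name" sourcetype=* your_field=*
| eval readable_time=strftime(your_field, "%Y-%m-%d %H:%M:%S")
| table your_field readable_time
```

This only addresses the display format; if the value itself is encrypted, that has to be resolved on the application side before or during ingestion, since Splunk indexes whatever bytes it receives.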
Hi, I am trying to log in to the Search Head server. It gives me the error: 500 Internal Server Error - Oops. The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage. If I put in a wrong password, it gives a wrong-password error, so it looks like this is not related to authentication.
Hi @AA_01, if you have no results after adding your_field=* to the main search, this means that the field isn't correctly configured; check the permissions and eventually try to extract it again. Did you also check Verbose Mode? Ciao. Giuseppe
The below steps fixed the issue:
1. Stop the search head that has the stale KV store member.
2. Run the command splunk clean kvstore --local.
3. Restart the search head. This triggers the initial synchronization from the other KV store members.
4. Run the command splunk show kvstore-status to verify synchronization.
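The steps above can be sketched as a command sequence (the $SPLUNK_HOME path is an assumption; run these on the affected search head only):

```shell
# On the search head with the stale KV store member
$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean kvstore --local   # wipes the local KV store copy
$SPLUNK_HOME/bin/splunk start                   # triggers initial sync from other members
$SPLUNK_HOME/bin/splunk show kvstore-status     # confirm the member is back in sync
```

Note that clean kvstore --local deletes the local KV store data, so it should only be run when healthy members exist to resynchronize from.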
Hi @gcusello, thanks for the response. I have already checked with the mentioned command, but there is no output. However, I am getting results for the other configured fields.
Hi @AA_01, two checks to perform: did you check that you're using Verbose Mode and not Fast or Smart Mode? And if you add the filter your_field=* to your search, do you see the field in Interesting Fields? index="Index Name" sourcetype=* your_field=* If a field is present in less than 2% of events, it isn't shown in Interesting Fields. Ciao. Giuseppe