All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have been looking into Smart Agents to simplify agent management, and our applications run on Microsoft Windows. Looking at the documentation, a Smart Agent installable does not appear to be available for Windows? I am surprised, given its widespread installed base. Please correct me if I am wrong; I would like to know more about this.

Update: Please ignore the above — there is an installation package available for Windows here: Smart Agent (appdynamics.com). Thanks.
I have a search which yields a time and correlated serial number for event A. I want to use this time and serial number to search for event B, where event B must meet criteria X.

index="june_analytics_logs_prod" message="* new_state: Diagnostic, old_state: Home*" NOT message=*counts*
| spath serial output=serial_number
| table _time, serial_number
``` table command is just for readability ```

Criteria X:
- Event B must occur within 30s immediately after event A
- Event B must have the same serial number as event A
- Event B's message field must contain the phrase "placeholder 123"

I want to extract data from any event that matches criteria X. How can I use this data from event A to search for event B? Capture attached to show what the current table looks like.
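In SPL this kind of pairing is often done with `streamstats`, `transaction`, or a subsearch; the matching logic itself can be sketched in Python for clarity. The dict-based event shape and field names below are illustrative assumptions, not Splunk's internal format:

```python
def find_matching_b_events(a_events, b_events, window=30):
    """For each event A, collect event-B records satisfying criteria X:
    same serial number, occurring within `window` seconds after A, and
    containing the phrase "placeholder 123" in the message field.
    Events are plain dicts with "_time" (epoch seconds),
    "serial_number", and "message" keys (an assumed layout).
    """
    matches = []
    for a in a_events:
        for b in b_events:
            if (
                b["serial_number"] == a["serial_number"]
                and 0 < b["_time"] - a["_time"] <= window
                and "placeholder 123" in b["message"]
            ):
                matches.append((a, b))
    return matches
```

This is only a sketch of the matching rule; in Splunk itself the same constraint would be expressed over the search pipeline rather than in Python.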
I develop an app on a private Splunk Enterprise server and have a piece of code that accesses the REST API:

# Use Splunk REST API to get all input parameters
splunkd_uri = os.environ.get("SPLUNKD_URI", "https://127.0.0.1:8089")
endpoint = f"{splunkd_uri}/servicesNS/nobody/{app_name}/data/inputs/{app_name}"
headers = {
    'Authorization': f'Splunk {session_key}'
}
response = requests.get(endpoint, headers=headers, verify=False, timeout=30)

Everything works locally, but when I run AppInspect before submitting to Splunk Cloud I get:

FAILURE: If you are using requests.get to talk to your own infra with non-public PKI, make sure you bundle your own CA certs as part of your app and pass the path into requests.get as an arg.
File: bin\utils\splunk_rest.py
Line Number: 19

I am trying to understand how to solve this issue, because if I bundle a CA that matches the server I am working on, it will not satisfy the Splunk Cloud server that my clients will use. I think I am misunderstanding a core piece of how to use the REST API programmatically. What is the correct way to go about this? Can it work both on Splunk Enterprise and on Splunk Cloud? Any clue or tip may help. Thanks
@Alankrit- Below is the search you can use, but just to clarify a few points: this search is not very efficient and is not meant for reporting, only for troubleshooting, so you can find the source of duplicate events.

index=* sourcetype=* host=*
| stats count by index, sourcetype, host, _raw
| where count>1

I hope this helps!!! Kindly upvote if it does!!!
Try putting the two expressions together, separated by [\s\S]+ to represent any intervening text.

EventCode=4634[\s\S]+Security_ID=".+?\$"

If the order of the fields might vary, use this variation to match both orders.

(?:EventCode=4634[\s\S]+Security_ID=".+?\$")|(?:Security_ID=".+?\$"[\s\S]+EventCode=4634)
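As a quick sanity check, the combined alternation can be exercised in Python, whose `re` module treats `[\s\S]` the same way PCRE does. The pattern is written here with a single escaped `\$` before the closing quote so the literal dollar sign (as in machine-account names) matches; the sample event text is made up:

```python
import re

# EventCode=4634 plus a Security_ID ending in "$" (machine account),
# accepted in either order within the same event.
PATTERN = re.compile(
    r'(?:EventCode=4634[\s\S]+Security_ID=".+?\$")'
    r'|(?:Security_ID=".+?\$"[\s\S]+EventCode=4634)'
)

sample = 'EventCode=4634\nLogon_Type=3\nSecurity_ID="HOST01$"'
reversed_sample = 'Security_ID="HOST01$"\nEventCode=4634'
non_machine = 'EventCode=4634\nSecurity_ID="S-1-5-18"'
```

Both orderings match, while an event whose Security_ID does not end in `$` does not.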
Some general recommendations:
- Keep the OS, $SPLUNK_HOME, and $SPLUNK_DB on separate mount points
- Don't use NFS
- Avoid RAID 0
- Use a supported file system

Partition size depends on the instance type and the amount of data to be stored. 300GB is recommended for non-indexers. Indexer storage needs depend on index retention, replication, and use of SmartStore.
Same issue here. Looking into this, a colleague of mine created a separate Python script to bypass it, but now the app only collects the first subscription; it looks like the app sees one and then stops.
Thank you for your helpful comment. I suspected that my configuration didn't work because of indexed extraction, but I hadn't had time to check and wasn't sure about it. As for the preamble, I tested the settings you mentioned a couple of times, but each time it worked worse than the nullQueue approach. Maybe I just wasn't attentive enough. :)
Yup. That is one of the ways to handle it.
That does indeed seem strange, because you should be getting events into an index regardless of an add-on. The only things that could make you not see the events would be bad timestamp parsing (but that would happen regardless of destination index), a bad time range in your search (ditto), or no permissions on the cisco index (but since you say you created the index, I assume you have admin rights here). Try to run

| tstats count where index=cisco by source sourcetype

over all time and see if you get any results.
Hi @fahimeh , good for you, see next time! let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @PickleRick

Thanks for your reply and for the information that source can be used in the first position. Sorry, I didn't know that, because in many cases it was solved when I added index=* before the source. With your query I only get a count of 0, so I think it is because my client doesn't ingest on the endpoint. Thank you for your reply and your information. Danke
Thank you, I will test it and tell you exactly how it worked.
Hi @fahimeh,

use the rule you need, e.g. if the hour cannot be 11 AM, you can insert time_hour!=11 in your search. It depends on your requirements.

Ciao.

Giuseppe
We have a customer who gets IT mainframe insights today via CA SAS/MICS, a costly platform. They are seeking a data-consumption solution integrated with the Splunk platform. We expect to ingest and host SMF in IZPCA. What are the customer's options for consuming reports in Splunk via IBM Z Performance and Capacity Analytics?
Correct, it is not a supported DB driver; that is what I'm wondering here: what attributes does the agent send to get Observability Cloud to recognize it as a database? For instance, I'm populating things like `db.name`, `db.statement`, and `db.system`, and wondering if any further values could be populated (either `sf_` values or OpenTelemetry semantic conventions) to get this to work the way I want.
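For reference, the database-related span attributes from the OpenTelemetry semantic conventions can be gathered like this. Whether Observability Cloud infers a database backend from an unsupported driver based on these keys alone is an open question, and the helper below is purely illustrative:

```python
def db_span_attributes(system, name, statement, host=None, port=None):
    """Build a dict of OpenTelemetry database semantic-convention
    attributes to set on a client span. "db.system", "db.name", and
    "db.statement" are standard keys; the net.peer.* keys come from
    the older general network conventions.
    """
    attrs = {
        "db.system": system,      # e.g. "postgresql", or a custom string
        "db.name": name,
        "db.statement": statement,
    }
    if host is not None:
        attrs["net.peer.name"] = host
    if port is not None:
        attrs["net.peer.port"] = port
    return attrs
```

In instrumented code these keys would be passed to the span via the tracing API (e.g. `span.set_attribute(key, value)` in the OpenTelemetry Python API).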
Thank you for your reply. Which time rules can I use in a search? Most time-related commands include | (like eval).
Hi @PickleRick

Thanks for the support. The reason for the | stats | chart is to deduplicate my data by user; if I don't do this, I get multiple entries per user for each URL. This allows a user to only hit one URL per week and then counts them. I will try the suggestion. I recently moved from KQL to SPL and will try to figure out the format for timechart and fieldformat. Thank you!
Splunk REST commands return information about the current state of the service.  They are not historical. Disk space used by search jobs is ephemeral.  Once the job expires (usually in 10 minutes), the disk space is released so a monthly total of disk usage is pretty meaningless. What problem are you trying to solve?
Yes, first I created the index and then the input, but it didn't work; only the main index works.