All Topics


I’ve installed the SNMP app on my Splunk instance. How do I activate the app? I’ve already got a trial license from BaboonBones.
Is it possible to store dynamic field values in a variable and later reuse them in the same search string in the subsearch? I want to build this search string for an alert. Can someone help me out on this?
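If "variable" here means values computed by one search being reused by another, one common SPL pattern is to have a subsearch emit them with format so the outer search can reuse them. A hedged sketch only; the indexes, sourcetypes, and field name below are placeholders, not from the question:

```
index=main sourcetype=web_logs
    [ search index=main sourcetype=audit_logs action=failed
      | dedup user
      | fields user
      | format ]
| stats count by user
```

The subsearch runs first and format expands its results into a boolean expression like ( ( user="a" ) OR ( user="b" ) ), which the outer search then uses as a filter.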
No data is getting displayed on the dashboard. The query is: index=main sourcetype=wms_oracle_sessions | bucket span=5m _time | stats count AS sessions by _time,warehouse,machine,program | sum(sessions) AS wsessions by _time,warehouse | timechart avg(wsessions) by warehouse. We know the reason no data is displayed on the dashboard: the sourcetype wms_oracle_sessions does not exist. Would it help if we created the sourcetype wms_oracle_sessions?
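As an aside, sum(...) is not a standalone SPL command, so even with the sourcetype in place the query would error. A hedged guess at the intended pipeline, using only the index, sourcetype, and fields from the question:

```
index=main sourcetype=wms_oracle_sessions
| bucket span=5m _time
| stats count AS sessions by _time, warehouse, machine, program
| stats sum(sessions) AS wsessions by _time, warehouse
| timechart span=5m avg(wsessions) by warehouse
```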
Hi, I have a search but it doesn't seem to work. I need to extract transactions-per-second data, and for that I was using timechart, but it restricts the rows with the error: "The specified span would result in too many (>50000) rows." I then tried the bucket command with stats, but it doesn't work correctly either: it doesn't give me data for every second, which is my actual requirement. See the table below for the output.

index=test sourcetype=ssl_access_combined requested_content="/myapp" | bucket span=1s _time | stats count by _time

_time                  count
2020-07-09 00:00:06    1
2020-07-09 00:00:27    1
2020-07-09 00:00:38    1
2020-07-09 00:00:40    1

Can someone advise on this? I am not sure why it is happening.
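stats by _time only emits rows for seconds that actually contain events, which is why the output skips seconds. A hedged sketch that keeps the same search but fills the gaps with makecontinuous and fillnull (both standard SPL commands):

```
index=test sourcetype=ssl_access_combined requested_content="/myapp"
| bucket span=1s _time
| stats count by _time
| makecontinuous _time span=1s
| fillnull value=0 count
```

makecontinuous inserts the missing one-second buckets and fillnull sets their count to 0, producing one row per second over the search range.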
status
success
success
failure
failure
error
error

I want output like:

status    status1    status2
success   failure    error
success   failure    error
Hi All, We are using a custom version of the Service Now Add On, to suit our SN structure, but that should not matter. I need to pull the entire Splunk event into a Service Now Field, the 'Detailed Description' field. Is there a term like $result.event$ that will insert the whole Splunk Event? Instead of having to list every field, like so: $result.description$ $result.classid$ $result.priority$ $result.statelastmodified$ etc, etc, etc... Thanks for any help you can offer.
Hello experts, I am using the makeresults command to create a macro, which is called like this: | `get_indexes_by_args(1)` The macro returns a string like this: index IN ("apps", "_apps") Now I want to pass this macro to another macro. How can I solve this? It would look like this: | `get_indexes_by_args("app")` "/api/" | ....
Greetings! How do I add all of my alerts into my dashboard? Thanks in advance!
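One hedged starting point: saved alerts can be listed with the rest command, and the result set placed in a dashboard panel. The endpoint and the alert.track field below are standard, but alert.track=1 only matches alerts with "Add to Triggered Alerts" enabled, so the filter may need adjusting:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search alert.track=1
| table title description cron_schedule
```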
I’m sending a large payload of JSON data to Splunk (1000 events) over HEC, but when it reaches Splunk it does not split the events and treats the payload as one large event. The JSON is valid; the issue is the first part of the JSON, which looks like this:

{ "expand": "schema,names", "startAt": 0, "maxResults": 50, "total": 1253, "issues": [

If I remove this manually, along with the corresponding closing brackets, and send it again, all the events are parsed individually.

The second problem I have is that we are on managed Splunk Cloud, so I don’t have access to props.conf to amend the line breaker. Can anyone suggest any other way around this?

I’m using Spyder (Python) as the middleman to send the payload, and I’m also testing with Postman.
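Since Python is already the middleman, one common workaround is to unwrap the envelope client-side before sending: the HEC event endpoint accepts a batch body of concatenated {"event": ...} objects. A minimal sketch; the "issues" key comes from the payload in the question, everything else is illustrative:

```python
import json

def to_hec_batch(payload: str) -> str:
    """Unwrap the {"...", "issues": [...]} envelope and emit one HEC
    {"event": ...} object per issue, newline-separated, ready to POST
    to /services/collector/event in a single request."""
    wrapper = json.loads(payload)
    return "\n".join(json.dumps({"event": issue}) for issue in wrapper["issues"])
```

Posting the returned string as the request body (with the usual Authorization: Splunk <token> header) should make Splunk index each issue as its own event, with no props.conf change needed.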
I have my output table created from the query below.

index=foo host=bla03u source=*.log sourcetype=bol_logs | chart values(Status) by Application_Name,Country,Transaction_Name

Source table:

Application_Name  Country    Transaction_Name  Status
App1              Australia  Homepage          0
App1              Australia  Login             0
App2              Singapore  Homepage          1
App3              China      Homepage          0
App3              China      Login             1

Output table:

Application_Name  Country    Homepage  Login
App1              Australia  0         0
App2              Singapore  1
App3              China      0         1

Status: 0 = 'Success', 1 = 'Failure'. From this output table, I want to create a geomap with bubbles. An app should show in red if the value '1' is present anywhere, and in green if both Homepage and Login contain '0'. Please suggest how to achieve this.
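A hedged sketch of the classification step, using the same index and fields as the question; country_coords is a hypothetical lookup mapping Country to lat/lon coordinates, which a map visualization needs:

```
index=foo host=bla03u source=*.log sourcetype=bol_logs
| stats max(Status) AS worst_status by Application_Name, Country
| eval health=if(worst_status=1, "Failure", "Success")
| lookup country_coords Country OUTPUT lat lon
| geostats latfield=lat longfield=lon count by health
```

Because 1 means failure, max(Status) per app is 1 if any transaction failed and 0 only when all succeeded, which maps directly to the red/green rule.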
I'm using Splunk Enterprise 8.0.4.1. I'm looking to upload Snort logs (version 2.9.16) manually (via Settings -- Add Data). I see there is a source type under Network & Security for Snort; however, I can't tell which Snort output type it expects. I tried the following output formats, but none were interpreted correctly: alert_fast, alert_full, alert_csv, and unified2. Note: I am not using the Snort for Splunk app or the Splunk for Snort app. I am attempting to use the built-in source type.
We need to pull events into Splunk from an Azure Event Hub, and the "Microsoft Azure Add-on" looks to be the best option. Our organisational policy restricts us to RHEL (i.e. Ubuntu and other distros are not an option), so I intend to install the add-on on a Heavy Forwarder running on RHEL 7.8. As we are still running Splunk v7.2.5.1, I will be installing v2.1.1 of the add-on. However, I note that the README for that version indicates that only Ubuntu or Darwin are supported for the Event Hub input: "Platforms: Unbuntu or Darwin for Event Hubs. All other inputs are platform independent". However, in other related issues it looks like the add-on has run successfully for the Event Hub input on RHEL as late as 7.7, as noted by @jconger in "Microsoft Azure Add-on for Splunk (TA-MS-AAD) Version 2.0.0 - No Event Hub Data Ingesting". So, two questions: 1) Will this work, i.e. will I be able to pull events from an Azure Event Hub using this blend of versions and distros? 2) What issues/errors should I expect (if any)? Thanks.
Hi, I am using the curl command below to export data in JSON format. How do I add an exact date and time to scope the search results? For instance, if I need to search the results from Wednesday the 8th, 2020, at 10am, how do I give this time in the command?

curl -k -u admin:password -d search="savedsearch %22testsavedsearch%22" -d earliest_time=-24h@h -d output_mode="json" https://splunk-api-url:8089/servicesNS/nobody/appname/search/jobs/export > json.txt
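The earliest_time and latest_time parameters of the export endpoint accept epoch seconds or Splunk's default timestamp format %m/%d/%Y:%H:%M:%S, as an alternative to relative modifiers like -24h@h. A hedged sketch reusing the exact command from the question; the month/day and the 12-hour window below are illustrative:

```shell
curl -k -u admin:password \
  -d search="savedsearch %22testsavedsearch%22" \
  -d earliest_time="07/08/2020:10:00:00" \
  -d latest_time="07/08/2020:22:00:00" \
  -d output_mode="json" \
  https://splunk-api-url:8089/servicesNS/nobody/appname/search/jobs/export > json.txt
```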
Hello Splunkers, please advise how to use regex to extract the specific fields below from the _raw data, and also how to rename the fields. The index is a summary index.

Sample raw data:
"cutom_id":"nuyc0989","group_na":"vc_iod","kit_num":"tach-98"
"cutom_id":"nuyc0989","group_na":"no_eng","kit_num":"vch-76"
"cutom_id":"nuyc0989","group_na":"vc_hk","kit_num":"tach-k89"

I only want to extract the {field:value} pairs of "group_na" (rename the field to assigned_to) and "kit_num" (rename the field to Tax_ID) in the search results, for all the _raw data of the summary index. The search query below is not extracting the required field from the raw data; please advise.

Search query: index=<summary_index> | rex field=_raw "\"group_na\": (?<assgined_to>*)"
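A hedged fix: the original pattern has a space that is not in the data, a bare * quantifier (invalid with nothing to repeat), and a typo in the capture-group name. A sketch matched against the sample rows above, keeping the <summary_index> placeholder from the question:

```
index=<summary_index>
| rex field=_raw "\"group_na\":\"(?<assigned_to>[^\"]+)\""
| rex field=_raw "\"kit_num\":\"(?<Tax_ID>[^\"]+)\""
| table assigned_to Tax_ID
```

Naming the capture groups assigned_to and Tax_ID creates the renamed fields directly, so no separate rename step is needed.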
I want to know which email alerts include a specific email address, so I can remove it from the recipients.
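A hedged sketch for finding them, assuming access to the rest command; user@example.com stands in for the address in question:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.email.to="*user@example.com*" OR action.email.cc="*user@example.com*"
| table title action.email.to action.email.cc
```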
We are suddenly receiving the following error every time we run a search against one of our index servers. The other index server is not giving us the error. It occurs during scheduled searches as well as ad hoc searches: "Search process did not exit cleanly, exit_code=-1, description=exited with code -1. Please look in search.log for this peer in the job inspector for more info." The search peer logs the following error in C:\Program Files\Splunk\var\run\splunk\dispatch\remote_....\search.log: "ERROR dispatchRunner - RunDispatch::runDispatchThread threw error: Application does not exist: search" What does this error mean, and how can it be corrected? Thank you.
Hi everyone, thanks in advance for any help. I am trying to extract some fields (Status, RecordsPurged) from the JSON in the following _raw text:

{"": "INFO : 2020-07-09T01:11:08Z : [database@test.com]: {\"Purging_Results_Test\": {\"NewPurging\": 1, \"Status\":\"Successful\", \"VacuumEnabled\": true, \"RecordsPurged\": 6646, \"StartTime\":\"8-Jul-2020 18:03:07\", \"EndTime\":\"8-Jul-2020 18:11:08\", \"Duration(min)\":8.02}}"}

Any ideas that might help me out? Thank you so much.
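Because the inner JSON is escaped inside a string, spath cannot parse it directly. One hedged sketch is a loose rex that treats the backslash-quote-colon padding as non-word characters (\W+), which sidesteps the escaping headaches; matched against the sample event above:

```
| rex "Status\W+(?<Status>\w+)"
| rex "RecordsPurged\W+(?<RecordsPurged>\d+)"
| table Status RecordsPurged
```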
Hi there! New user here, I am looking to simplify our troubleshooting work here at work by doing the following: 1) When an Alert is triggered (Regardless of the reason/search parameters) 2) A subsequent report will be sent after the Alert is triggered. (AKA the Search parameters one would be looking to use to better investigate the alert)  Is this possible?
At this point, I'm just interested in knowing if Splunk will be able to run. Data onboarding will occur later, and that's when the computing power shall be scaled up to 16 CPU at minimum across all the Splunk components in use. Only 8 users shall be accessing Splunk, with only a few running concurrent searches after data onboarding is completed. We are planning to ingest only 100 GB/day with a retention period of 30 days, while older data shall be archived in object storage.

Components:
2 ES search heads, 2 ad-hoc search heads (split across 2 sites): 4 CPU each
4 Indexers (clustered across 2 sites): 4 CPU each
2 Cluster Masters (split across 2 sites): 4 CPU each
2 Deployment servers (split across 2 sites): 4 CPU each
2 Heavy Forwarders (split across 2 sites): 4 CPU each
I have the query below, but I don’t want the services to look like this. How can I get the names of the services to be visualized as the column headers, with the "running" status showing under each service, and the server in the far-left column? I want the services to be where the blue colour is, and the server on the left, so I can see servers, services, and the status of all my servers.
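The layout described — servers as rows, services as columns — is what chart's over/by syntax produces. A hedged sketch, assuming the events carry server, service, and status fields (the original query did not survive the copy, so the index, sourcetype, and field names here are all placeholders):

```
index=main sourcetype=windows_service_status
| chart latest(status) over server by service
```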