All Topics

how to filter up/down logs on a Nexus switch
Hi, this is a raw log: Job=[IN-SNMMIS-DLY]]. I am trying to build a regex that extracts just the words "IN-SNMMIS-DLY" and ignores the brackets.

I am trying to run a Splunk search through the REST API, but I get a 404 error from this call:

response = self.session.get(self.base_url + '/servicesNS/'+self.username+'/search/auth/login', data=payload)

Can someone please tell me why this is happening?

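For reference, the documented way to obtain a session key from Splunk's REST API is a POST to /services/auth/login rather than a GET against /servicesNS/<user>/search/auth/login. A minimal sketch with the requests library, where base_url and the credentials are placeholders:

import requests

# Placeholders: base_url, username and password are not from the original post.
base_url = "https://localhost:8089"
payload = {"username": "admin", "password": "changeme"}

# Splunk's documented login endpoint expects a POST with the credentials as form data.
response = requests.post(base_url + "/services/auth/login", data=payload, verify=False)
print(response.status_code)
print(response.text)  # contains <sessionKey> on success
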
Hello, here's my search:

index="blah" sourcetype="blah" severity="*" dis_name IN ("*") "*" AND NOT 1=0 | rest of the query

Why do they use AND NOT 1=0 here? Even without it the results are the same. I just want to know why they use this. Any help would be appreciated! Thank you

We can see adrum calls being sent with EUM keys to the CDN, but we still cannot see any data in the AppDynamics (AppD) controller.

Hello, I am trying to integrate Splunk and Resilient and am facing the following problem: in the adaptive response action I have mapped all required and relevant fields to be sent to Resilient. After the event is triggered, only raw data arrives in the SOAR. I have checked that there are no errors on the Splunk side. On the Resilient side there was an error, but I have fixed that as well - no luck:

com.co3.domain.exceptions.FieldsRequiredException: The following fields are required: 'cs_cloud_url','cs_sensor_id'
com.ibm.resilient.common.domain.exceptions.Co3IllegalArgumentException: Incident name cannot be null/empty

Do you have any ideas why only raw data comes from Splunk? Thank you

I have a dashboard that has 5 single value charts in 4 rows, and all these rows display collective information about more than one process. I am currently using a drilldown to a new dashboard to display detailed information about them. Is there a way to add a button or a split-view in the same dashboard, so that when someone clicks the button the detail is displayed and can be hidden or shown again, rather than using a drilldown? Any help is highly appreciated. Thanks

Hi Community, I have an inputs.conf monitor that looks like this:

[monitor:///var/log/logfiles/.../app.log]
index=englogs
sourcetype=eng:custom

The above monitor covers these paths to the app.log files:

/var/log/logfiles/database/eng/comm/surface/app.log
/var/log/logfiles/trunk/sec/comm/water/app.log
/var/log/logfiles/other/fin/app.log
And many, many more...

I have one file that I want to sourcetype as access_combined (not eng:custom):

/var/log/logfiles/scapes/web01/app.log

This path falls within the scope of the monitored stanza above. What is the best way to accomplish this? Do I use a blacklist in the .../app.log eng:custom monitor and then create another monitor stanza for the web01/app.log access_combined that immediately follows it? Thank you

I am using Python 3.9.5, Splunk Enterprise version 8+, and the latest splunk-python-sdk. My Splunk Enterprise instance supports TLS 1.2 only. Is it possible to use a specific TLS version with the splunk-python-sdk? Can someone help with this?

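In case it helps frame the question, here is a minimal sketch of pinning TLS 1.2 at the requests/urllib3 level against the management port directly, outside the SDK (the host and credentials are placeholders; whether and how a custom SSL context can be handed to the splunk-python-sdk itself depends on the SDK version):

import ssl
import requests
from requests.adapters import HTTPAdapter

class TLS12Adapter(HTTPAdapter):
    # Transport adapter that forces TLS 1.2 for every HTTPS connection.
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
        ctx.maximum_version = ssl.TLSVersion.TLSv1_2
        ctx.check_hostname = False      # lab setting; keep verification on in production
        ctx.verify_mode = ssl.CERT_NONE
        kwargs["ssl_context"] = ctx
        super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", TLS12Adapter())

# Placeholder host and credentials; POST to the documented login endpoint.
r = session.post("https://localhost:8089/services/auth/login",
                 data={"username": "admin", "password": "changeme"}, verify=False)
print(r.status_code)
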
The universal forwarder setup wizard ended prematurely on Windows 10. I've tried all the suggestions from a thread that had similar issues, and none of them worked. Thanks in advance!

Hello, we are currently trying the add-on "Splunk Add-on for HAProxy". We want to analyse traffic and performance. According to the documentation, the add-on comes with "prebuilt panels" to analyse the data. After not finding them in Splunk's various menus, I went into the source code, and there isn't even a single XML file in the package (hence no chance of finding the prebuilt panels). Are they missing from some versions? Are they available separately? Thank you

I am looking for best practices for determining when I have reached the limit on the number of data inputs that should be set up on a heavy forwarder. I have an existing heavy forwarder where I am running DB Connect to query ~20 different databases at various frequencies. On this same heavy forwarder I have ~250 data inputs that query REST APIs for various storage appliance data. I am experiencing Splunk daemon stability issues when my Linux server is rebooted or the Splunk daemon is restarted: the CPU load maxes out with this configuration and causes the Splunk daemon to shut down. My heavy forwarder is a virtual server with 14 vCPUs and 16 GB of memory. It is running RHEL 7 with the ulimits set as Splunk specifies. Are there any documented configurations for heavy forwarders? Is there anything that might help besides increasing resources on this server, such as configuration file settings?

Using regex, what is the syntax to trim a timestamp formatted like 2022-01-06 01:51:23 UTC so that it only reflects the date and hour, like this: 2022-01-06 01?

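For illustration, one way to express such a pattern, sketched here with Python's re module (the same capture group could be used in an SPL rex; this is only a sketch of the pattern, not a full Splunk answer):

import re

ts = "2022-01-06 01:51:23 UTC"
# Capture the date plus the two-digit hour; drop minutes, seconds and the time zone.
m = re.match(r"(\d{4}-\d{2}-\d{2} \d{2})", ts)
if m:
    print(m.group(1))   # -> 2022-01-06 01
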
Hello Splunk experts: From a system, we receive the following events in Splunk. I would like to get the events which do not have a logEvent of Received but only a logEvent of Delivered. The traceId field has the same value on both the Received and Delivered events. In the example below, traceId=101 is such an event.

{"logEvent":"Received","traceId": "100","message":"Inbound received", "id" : "00991"}
{"logEvent":"Delivered","traceId": "100","message":"Inbound sent", "id" : "00991-0"}
{"logEvent":"Delivered","traceId": "101","message":"Inbound sent", "id" : "00992-0"}
{"logEvent":"Received","traceId": "102","message":"Inbound received","id" : "00993"}
{"logEvent":"Delivered","traceId": "102","message":"Inbound sent","id" : "00993-0"}

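To make the requirement concrete, a small plain-Python sketch of the matching logic over the sample events above (only an illustration of the logic, not a Splunk search; in SPL one would typically group the logEvent values by traceId, for example with stats):

import json
from collections import defaultdict

events = [
    '{"logEvent":"Received","traceId": "100","message":"Inbound received", "id" : "00991"}',
    '{"logEvent":"Delivered","traceId": "100","message":"Inbound sent", "id" : "00991-0"}',
    '{"logEvent":"Delivered","traceId": "101","message":"Inbound sent", "id" : "00992-0"}',
    '{"logEvent":"Received","traceId": "102","message":"Inbound received","id" : "00993"}',
    '{"logEvent":"Delivered","traceId": "102","message":"Inbound sent","id" : "00993-0"}',
]

# Collect the logEvent values seen per traceId.
seen = defaultdict(set)
for raw in events:
    e = json.loads(raw)
    seen[e["traceId"]].add(e["logEvent"])

# Keep traceIds that have Delivered but never Received.
orphans = [t for t, kinds in seen.items() if "Delivered" in kinds and "Received" not in kinds]
print(orphans)   # -> ['101']
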
If I have logs for the `_internal` index and logs for a `linux_os` index on a heavy forwarder, does the HF prioritize the `linux_os` index data over the `_internal` data on the host? Is there any precedence for the data Splunk is monitoring? Do indexers have a precedence for which kind of data to index first?

Hello, I am looking for a solution to send Splunk alerts to the Splunk mobile application. So far I was using the "Splunk Cloud Gateway" app from Splunkbase on my Splunk lab (a standalone Splunk VM), which was based on Splunk 8.0.x. Since I recently wanted to upgrade to Splunk 8.2.4, I also needed to move to the "embedded" Splunk Secure Gateway app. As I did not need the former indexed data, I decided to remove Splunk 8.0 and do a fresh install of 8.2.4 (no upgrade on the Splunk side, nor a migration from Cloud Gateway to Secure Gateway).

After opting in to Secure Gateway, the gateway managed to stay "connected" for about 10 minutes (I can see "ping-pong" messages in the Secure Gateway logs / _internal index). Then it suddenly stopped working, and the status in the dashboard is now desperately showing "not connected". The last "ping-pong" exchange was this morning at 0:20 AM (twenty past midnight, 10 minutes after the gateway opt-in/config). On the errors side, the first error I can see appeared 7 minutes before 0:20 AM, another appeared when the "ping-pong" traffic stopped (at 0:20 AM), and more of the same kind followed.

I have checked all the gateway logs, enabled DEBUG traces, analyzed the Python code, checked these errors, changed the "timeouts" to bigger values in the app conf file, and looked at the "Troubleshooting" sections of the documentation, but I could not yet find why it suddenly stopped working. To be complete, I am running on a lab VM (2 vCPU, 8 GB of RAM), which I know is under the prerequisite "specs", and with an SSL self-signed certificate generated by Splunk when I changed the server settings to use HTTPS. I am behind a Sophos UTM 9.7 which protects my home network, and I have made a rule to disable filtering (like SSL scanning etc.) for URLs that end with *.spl.mobi.

Would you have any directions or clues for fixing this connectivity issue? Thanks in advance

Hi, we are trying to pull information from some of the database tables in ServiceNow into our Splunk Enterprise environment using the add-on, but since the tables are fairly heavy, we aren't able to get it all working successfully; some of the tables end up with the following error message:

2022-02-10 09:08:31,159 ERROR pid=12171 tid=Thread-20 file=snow_data_loader.py:collect_data:181 | Failure occurred while getting records for the table: syslog_transaction from https://---.net/. The reason for failure= {'message': 'Transaction cancelled: maximum execution time exceeded', 'detail': 'maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}. Contact Splunk administrator for further information.

Now, I was told by ServiceNow support that we might be able to prevent that from happening (and hence get it working) by introducing query parameters. Does anyone have experience with how to configure the add-on to comply with that? For reference, ServiceNow support sent me this:

"I'm not familiar with the configuration options for the Splunk add-on. However, if you would like your API requests to take less time, I would suggest that you limit the number of records you are fetching per request, use pagination, and also limit the number of columns you are selecting.

a) You can implement pagination by using the URL parameter sysparm_offset. As an example, in the initial request you can configure sysparm_offset=0&sysparm_limit=100, then on the next call you increment the offset by 100 to sysparm_offset=100&sysparm_limit=100. You keep incrementing the offset after each response until you reach the limit of 25000.

b) In order to limit the number of columns, you need to use the URL parameter sysparm_fields. For example, if you only require the task number and short description, you would configure the URL parameter as sysparm_fields=number,short_description&sysparm_limit=100.

Below is an example of a complete URL with both sysparm_fields and sysparm_offset configured:
api/now/table/task?sysparm_limit=100&sysparm_query=ORDERBYDESCsys_created_on&sysparm_fields=number,short_description&sysparm_offset=0"

Does anyone have an idea on how to proceed to get this working better? Any ideas/suggestions would be really helpful. Thanks, Artelia

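For what it's worth, here is a minimal sketch of the pagination scheme described above, calling the Table API directly with the requests library (the instance URL, table name and credentials are placeholders; this only illustrates the sysparm parameters, not the add-on's own configuration):

import requests

# Placeholders: instance, credentials and table are not from the original post.
base_url = "https://example.service-now.com/api/now/table/task"
auth = ("api_user", "api_password")
fields = "number,short_description"
limit, offset, max_records = 100, 0, 25000

rows = []
while offset < max_records:
    params = {
        "sysparm_limit": limit,
        "sysparm_offset": offset,
        "sysparm_fields": fields,
        "sysparm_query": "ORDERBYDESCsys_created_on",
    }
    resp = requests.get(base_url, auth=auth, params=params)
    resp.raise_for_status()
    batch = resp.json().get("result", [])
    if not batch:
        break               # no more records
    rows.extend(batch)
    offset += limit         # next page

print(len(rows), "records fetched")
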
Hello guys! I have a question about the lookup command when the lookup file contains strings and regular expressions. The following is an example:

field var_1 : string
field var_2 : string
field var_3 : regex or string
field var_4 : string

------ lookup file -----------------------------
var_1, var_2, var_3, var_4
data10, data11, .+(:?aaa|bbb), data13
data20, data21, .+(:?ccc|ddd|eee), data23
data30, data31, .+(:?eee)fff+(:?ggg|hhh), data33
--------------------------------------------------

I would like to return var_4 when var_1, var_2, and var_3 are matched by the lookup command, but var_3 may contain a regular expression, and the lookup needs to match against that regular expression. As you know, regular expressions are not allowed in the lookup field of the lookup command.

↓↓↓ Regular expressions cannot be used ↓↓↓
| makeresults
| eval var_1 = "data10", var_2 = "data11" , var_3 = "ABC123aaa"
| lookup var_1 var_2 var_3 OUTPUT var_4

It is necessary to use the lookup file (csv). If the lookup command is not the best way to solve this problem, then another command such as join is fine to use. Obviously, I don't intend to use only the lookup command; I'm looking for other ways to do it as well. Can someone please help me with this? Thanks in advance!

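To illustrate the kind of matching being asked about, here is a plain-Python sketch over the sample rows, treating var_3 as a regular expression (only an illustration of the logic, not a Splunk answer; the function name is hypothetical):

import csv, io, re

# Sample lookup rows from the question; var_3 is applied as a regular expression.
lookup_csv = """var_1,var_2,var_3,var_4
data10,data11,.+(:?aaa|bbb),data13
data20,data21,.+(:?ccc|ddd|eee),data23
data30,data31,.+(:?eee)fff+(:?ggg|hhh),data33
"""

def match_var4(var_1, var_2, var_3_value):
    # Return var_4 from the first row whose var_1/var_2 match exactly
    # and whose var_3 pattern matches the supplied value.
    for row in csv.DictReader(io.StringIO(lookup_csv)):
        if row["var_1"] == var_1 and row["var_2"] == var_2 \
                and re.search(row["var_3"], var_3_value):
            return row["var_4"]
    return None

print(match_var4("data10", "data11", "ABC123aaa"))   # -> data13
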
I need to filter out different error values for a range of different instruments. To do this, I have created a macro and a lookup that use the host field and the name of the measurement field to determine whether the value should be removed or not. This part works well, but in some cases we also need to correct the measurements in different time periods due to calibrations etc. For those cases, I have created columns in the lookup named "case_<number>" which contain time_start, time_stop, value_to_remove, adjustment_value.

An example would be host=extensometer_001 with a distance_mm field where we need to correct the following measurements:
- Remove -222 between unix time 1644546240 and 1644586240
- Adjust +30 for measurements between unix time 1641546240 and 1644566200

The case columns in the lookup would then look like this:
case_001=1644546240,1644586240,-222
case_002=1641546240,1644566200,,30

To be able to handle zero or more cases per host and field, I use foreach in the following way:

| foreach "case_*"
    [| makemv delim="," <<FIELD>>
     | eval distance_mm= if(_time > mvindex(<<FIELD>>,0) AND _time < mvindex(<<FIELD>>,1) AND like(distance_mm,mvindex(<<FIELD>>,2)), "NULL", distance_mm)
     | eval distance_mm= if(_time > mvindex(<<FIELD>>,0) AND _time < mvindex(<<FIELD>>,1) AND mvindex(<<FIELD>>,3)!="",distance_mm+tonumber(mvindex(<<FIELD>>,3)),distance_mm) ]

The problem is that when I look at "All time" and graph distance_mm with timechart at the end of the search, I end up seeing empty buckets all the way back to the first event indexed in the index (even though the data in my search is not that old). If I remove the foreach section, the problem goes away. I cannot see what is happening that makes timechart show this period without data. The interesting thing is that if I look at the data in the "Statistics" view right before the timechart, it only shows the time period with data. It is only when the timechart command is run that the empty buckets appear.

(Image of results with foreach)
(Image of results without foreach)

Does anyone know what is going wrong here, or in the worst case how to get around it? (I could use cont=false to make Splunk zoom into the area where there is data, but then I would not be able to choose "show gaps" where data is missing, which is a requirement from the client.)

Full search:

| tstats summariesonly=false allow_old_summaries=false avg("Extensometer.distance_mm") as distance_mm FROM datamodel=Extensometer WHERE sourcetype="EXT" BY host, sourcetype, _time span=60min
| eval field="distance_mm"
| lookup error-filtering.csv instrument as host field as field
| foreach "case_*"
    [| makemv delim="," <<FIELD>>
     | eval distance_mm= if(_time > mvindex(<<FIELD>>,0) AND _time < mvindex(<<FIELD>>,1) AND like(distance_mm,mvindex(<<FIELD>>,2)), "NULL", distance_mm)
     | eval distance_mm= if(_time > mvindex(<<FIELD>>,0) AND _time < mvindex(<<FIELD>>,1) AND mvindex(<<FIELD>>,3)!="",distance_mm+tonumber(mvindex(<<FIELD>>,3)),distance_mm) ]
| streamstats window=2 earliest(distance_mm) as earliest_distance_mm latest(distance_mm) as latest_distance_mm by host
| eval change_distance_mm=(latest_distance_mm - earliest_distance_mm)
| streamstats sum(change_distance_mm) as acc_change_distance_mm by host
| timechart span=1w limit=0 eval(round(avg(acc_change_distance_mm),2)) as distance_mm by host

Hi, I am trying to display percentages in my bar chart like this, but it doesn't work:

| chart count as total over sig_application by sig_transaction
| eval total=0
| foreach count* [ eval total=total + <<FIELD>>]
| foreach count* [ eval <<FIELD>>=round((<<FIELD>>/total)*100,1)]
| fields - total

Can anybody help, please?