All Topics

Hi Team, I have a query that executes in my dashboard. I want to provide the input as a CSV file (with a list of IDs) and execute the query. Could you please help me with how to do that? Currently my input: "5741242". My query (below):

index="amp" (application="create-order") "5741242"
| rex field=message "(?msi)(?<json_message>\{.+\})"
| spath input=json_message output=externalReferenceId path=correlationId
| spath message
| rex field=message "\"name\":\"(?<customername>(.[^\"]+))"
| spath message
| rex field=message "\"externalId\":\"(?<OrderID>(.[^\"]+))"
| spath input=json_message output=OrderStatus path=data.version
| table externalReferenceId, _time, customername, OrderID, OrderStatus, BookingId, AppointmentId

Thanks in advance! Daniel Joseph
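One common pattern for feeding a CSV of IDs into a search like this is an `inputlookup` subsearch. A sketch, assuming the CSV has been uploaded as a lookup named `order_ids.csv` with a single column called `id` (both names are hypothetical):

```
index="amp" application="create-order"
    [| inputlookup order_ids.csv
     | rename id AS search
     | fields search ]
| ...rest of the query unchanged...
```

Renaming the column to the special field `search` makes the subsearch expand into a bare OR of the ID values, so each ID is matched the same way the literal "5741242" is today.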
Hi there, came across an issue on our HFs when trying to install a Splunkbase application called WebTools. Suddenly all my apps installed on the heavy forwarders no longer want to display the input pages of many Splunk and non-Splunk add-ons. The common pattern in the web logs seems to be:

2020-12-08 18:40:07,506 ERROR [5fd00ed77f7f53e84e3810] error:335 - Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 684, in _do_respond
    self.hooks.run('before_handler')
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 114, in run
    raise exc
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 104, in run
    hook()
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 63, in __call__
    return self.callback(**self.kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cptools.py", line 182, in _wrapper
    if self.callable(**kwargs):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/customstaticdir.py", line 54, in custom_staticdir
    filename = resolver(section, branch, dir)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/root.py", line 283, in static_resolver
    return static_app_resolver(section, branch, static_app_dir)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/root.py", line 269, in static_app_resolver
    i18n_cache = i18n.translate_js(fn)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/i18n.py", line 908, in translate_js
    f.write(filename_contents)

UnicodeEncodeError: 'latin-1' codec can't encode character '\u2019' in position 37632: ordinal not in range(256)

I did some tests to play around with the locales of the Splunk user, but this did not resolve the issue. Any ideas what to try next? I also opened a ticket with Splunk.
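The traceback ends in `i18n.translate_js` writing a cached translation file, tripped by a right single quote (`\u2019`) in some app's JavaScript when the process default encoding is latin-1. A workaround often suggested for this symptom (an assumption to verify in your environment, not an official fix) is to force a UTF-8 locale for the OS user that runs Splunk before restarting it:

```shell
# Force a UTF-8 locale for the user that launches splunkd (workaround sketch).
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
echo "$LC_ALL"   # confirm the setting before restarting Splunk
```

After setting the locale, restarting Splunk and clearing any stale cached translation files (they live under `$SPLUNK_HOME/var/run/splunk`; path worth double-checking on your version) may be needed before the input pages render again.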
Is it possible to edit my monitor:// stanzas to work with specific hostnames (computer names) and monitor a specific file location for those hostnames only? I will be editing the UF on a Domain Controller, so there will be multiple hostnames checking in, and I just want to monitor a specific file path for specific hostnames and not for others on that Domain Controller.
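If the UF configs are delivered from a deployment server, one way to scope a monitor to specific hosts is to put that monitor in its own app and whitelist the hostnames in serverclass.conf, rather than conditionalizing inputs.conf itself. A sketch with hypothetical class, app, and host names:

```
# serverclass.conf on the deployment server
[serverClass:dc_special_logs]
whitelist.0 = DC-HOST-01
whitelist.1 = DC-HOST-02

[serverClass:dc_special_logs:app:special_file_inputs]
restartSplunkd = true
```

The app `special_file_inputs` would then carry only the extra stanza, e.g. `[monitor://C:\Logs\special.log]`, so hosts outside the whitelist never receive that input at all.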
12-08-2020 21:54:50.912 +0000 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/bitglass/bin/logeventdaemon.py" /opt/splunk/bin/python3.7: can't open file '/opt/splunk/etc/apps/bitglass/bin/logeventdaemon.py': [Errno 2] No such file or directory host=idx-i-03...c2.customer.splunkcloud.com source=/opt/splunk/var/log/splunk/splunkd.log sourcetype=splunkd

The customer has installed from the cloud-certified .spl app. The file path is valid according to the customer. Any hints appreciated. Thanks
I'm trying to create a query that will provide me with events that use two indexes. The results are to show events where 2 consecutive emails were blocked (by a specific endpoint tool = index1) followed by a successfully sent email (logged by another endpoint tool = index2): event/log = ((block#1 and block#2) and successful sent email). I've been running into issues; this is what I currently have:

index=index1 field1=SMTP action=blocked
| rex field=suid "(?<UserName>.+?)@"
| eval UserName=upper(UserName)
| rex "fileName"=(?<attachments>.+)\s*fileHash=*+"
| rex field=_raw "(?<Subject>(?<=cs\=)(.*)(?=suid\=))"
| rename suid AS Sender act as ACT
| stats count by UserName
| transaction endswith=datasource="index2" maxspan=30min

Any help is appreciated, thanks.
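One hedged sketch of the correlation (the field names for index2, such as `action="delivered"`, are assumptions to adapt to the real data): search both indexes in one query, sort per sender, then look back over a sliding window for two blocks immediately before a success.

```
(index=index1 field1=SMTP action=blocked) OR (index=index2 action=delivered)
| rex field=suid "(?<UserName>.+?)@"
| eval UserName=upper(UserName)
| sort 0 UserName _time
| streamstats global=f window=3 count(eval(action="blocked")) AS recent_blocks by UserName
| where action="delivered" AND recent_blocks>=2
```

`transaction` with `startswith`/`endswith` is an alternative, but a `stats`/`streamstats` approach usually scales better on large indexes.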
Hi gurus, I am new to Splunk but have this task that I'm stumped on. I have a query that looks like this:

index=pp_security_app_tenablenessus sourcetype="tenable:io:vuln" plugin.id="42981"
| table asset_fqdn, ipv4, port.port, port.protocol, plugin.synopsis, output

One of the resulting fields is a field named "output". Inside the output field I will have data that looks like this:

The SSL certificate will expire within 90 days, at Jan 29 12:00:00 2021 GMT : Subject : C=US, ST=California, L=San Jose, O=Jimmy's Bar & Grill, OU=Kitchen, CN=jimmysbarandgrill.com Issuer : C=US, O=DigiCert Inc, CN=DigiCert Global CA G2 Not valid before : Jan 29 00:00:00 2019 GMT Not valid after : Jan 29 12:00:00 2021 GMT

Sometimes the Subject field only has the CN, like this:

The SSL certificate will expire within 90 days, at Jan 29 12:00:00 2021 GMT : Subject : CN=jimmysbarandgrill.com Issuer : C=US, O=DigiCert Inc, CN=DigiCert Global CA G2 Not valid before : Jan 29 00:00:00 2019 GMT Not valid after : Jan 29 12:00:00 2021 GMT

I need to extract the Common Name (anything after CN= until the end of line or the next comma) from both the Subject and Issuer sections of the output field, then extract the 'Not valid before' and 'Not valid after' sections of the output field after the colon, and put the extracted data into fields named CommonName, Issuer, Before and After. Thanks in advance for your help.
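A sketch of one way to do this with `rex`, assuming each attribute (Subject, Issuer, Not valid before/after) sits on its own line inside the raw `output` field, as this plugin typically formats it; if everything is on one line, the `[^\r\n]` boundaries would need tightening:

```
... | rex field=output "Subject\s*:[^\r\n]*?CN=(?<CommonName>[^,\r\n]+)"
| rex field=output "Issuer\s*:[^\r\n]*?CN=(?<Issuer>[^,\r\n]+)"
| rex field=output "Not valid before\s*:\s*(?<Before>[^\r\n]+)"
| rex field=output "Not valid after\s*:\s*(?<After>[^\r\n]+)"
| table asset_fqdn, CommonName, Issuer, Before, After
```

The lazy `[^\r\n]*?` skips any `C=`, `O=`, `OU=` attributes before the first `CN=` on that line, so the same pattern handles both the full Subject and the CN-only Subject.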
Is there a way to delete an analytic story via the Splunk ES web interface?
I have about a dozen data sources that I want to monitor for an outage, e.g. "No events in the last 60 minutes". Currently I have been using a separate alert for each data source/index, each running every hour and alerting if there are fewer than 1 events (i.e., zero). I am just wondering if there is a better way to do this. I also have to contend with some sources having a longer-than-60-minute delay at times. Thank you
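Rather than a dozen alerts, a single scheduled search can report the last time each index saw data and apply a per-index threshold for the slower feeds. A sketch (the index names and thresholds are illustrative):

```
| tstats latest(_time) AS last_seen WHERE index=idx_a OR index=idx_b OR index=idx_c BY index
| eval mins_since = round((now() - last_seen) / 60)
| eval threshold = case(index=="idx_c", 240, true(), 60)
| where mins_since > threshold
```

Alerting when this returns any rows replaces all the per-source alerts, and the delayed sources just get a larger threshold instead of their own schedule.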
I thought I would post this; hopefully I won't reinvent the wheel if something cool already exists. I have a number of scheduled searches/reports that I created for users, like 100 of them. I am constantly asked "why did my report not show up in my email today like normal?". I handled this by adding myself to the recipient list so I can monitor results and whether the report ran. However, as you can imagine, I am getting overwhelmed by the quantity of reports to keep track of. Ideally I would like a dashboard or a single report that would list all the reports and whether they ran or failed.
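The scheduler writes a log of every scheduled-search run, which can back exactly this kind of dashboard. A sketch (treat the exact `status` values as an assumption and check what your own scheduler events contain):

```
index=_internal sourcetype=scheduler savedsearch_name=*
| stats latest(_time) AS last_run latest(status) AS last_status count(eval(status=="skipped")) AS skips BY savedsearch_name, app
| convert ctime(last_run)
```

One row per saved search, showing when it last ran and how it finished, is usually enough to answer "did my report go out today?" without being on every recipient list.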
Hi all, I have an interesting problem I discovered. Recently, we migrated our Splunk cluster to a different cluster hosted somewhere else. Since we use LDAP authentication, we needed to migrate over user information as well as the LDAP strategies so that the user experience is not affected by the move. We copied over authorize.conf and authentication.conf as well as the user folders for their knowledge objects. We did this for over 100 different users. We deployed the user folders using the new cluster's deployer, and we copied authorize.conf/authentication.conf manually to the system/local folder. We verified user access, and various users confirmed that they can log in. However, we (the Splunk admins) realized that we cannot see these users from the authentication endpoint. When we click the Users tab under "Users and Authentication" in Settings, the GUI only shows that there are 10 users (including the admins). The REST endpoint (| rest /services/authentication/users) also says the same thing. So my question is: where does Splunk store the user information that it references when hitting the authentication endpoint? Is there any reason why copying over the user folders and authentication/authorize.conf was not enough? Thank you!
The event contains a 'before' and 'after' list of permissions and user SIDs. I can get Splunk to extract the entire 'before' list and the entire 'after' list, but only as single values; I need to break them down into individual Permission and SID values.

This is the entire event:

2020-12-07 22:45:51.123 91046 SUCCESS Domain\User Archive Permissions Archive 133481FD9531D0347BBCE92FFF45B4FE11110000evaultcol <Archive ArchiveID="133481FD9531D0347vaultcol" ArchiveName="Last, First"><OldManualSD> (A;;CCDCLCSWRPWPDT;;;S-1-5-21-299502267-1960408961-839522115-10875)(A;;CCSW;;;S-1-5-21-299502267-1960408961-839522115-2406856)(A;;CCSW;;;S-1-5-21-299502267-1960408961-839522115-2406857)</OldManualSD><NewManualSD> (A;;CCDCLCSWRPWPDT;;;S-1-5-21-299502267-1960408961-839522115-10875)(A;;CCSW;;;S-1-5-21-299502267-1960408961-839522115-2406856)(A;;CCSW;;;S-1-5-21-299502267-1960408961-839522115-2406857)(A;;CCDCSWRPDT;;;S-1-5-21-299502267-1960408961-839522115-3949157)</NewManualSD></Archive> ServerName

The 'before' list is between the <OldManualSD> and </OldManualSD> tags; the 'after' list is between the <NewManualSD> and </NewManualSD> tags. The Permissions field is between the ;; and ;;; delimiters and is followed by the SID.
There is a varying number of permissions/SIDs in each event. I can get part way there: the ex_OldManual_GP and ex_NewManual_GP fields extract from the "Info" field and contain the before and after lists, but a second extraction based off ex_OldManual_GP and ex_NewManual_GP always fails. From the event above, I would like:

OldManual = A;;CCDCLCSWRPWPDT;;;S-1-5-21-299502367-1960408961-839522117-10475
OldManual = A;;CCSW;;;S-1-5-21-299502367-1960408961-839522117-2406456
OldManual = A;;CCSW;;;S-1-5-21-299502367-1960408961-839522117-2406457
NewManual = A;;CCDCLCSWRPWPDT;;;S-1-5-21-299502367-1960408961-839522117-10875
NewManual = A;;CCSW;;;S-1-5-21-299502367-1960408961-839522117-2406456
NewManual = A;;CCSW;;;S-1-5-21-299502367-1960408961-839522117-2406457
NewManual = A;;CCDCSWRPDT;;;S-1-5-21-299502367-1960408961-839522117-3949147

Any ideas?

My transforms.conf file:

[ex_fields_extract]
FIELDS = "AuditDate","AuditID","Status","UserName","CategoryName","SubCategoryName","ObjectID","Vault","info","MachineName"
DELIMS = "\t"

[ex_OldManual_GP]
SOURCE_KEY = info
REGEX=\>(<OldManualSD>D:)((?P<OldManual_GP>.*))(<\/OldManualSD>)

[ex_NewManual_GP]
SOURCE_KEY = info
REGEX=\>(<NewManualSD>D:)((?P<NewManual_GP>.*))(<\/NewManualSD>)

[ex_OldManual_MV]
SOURCE_KEY = OldManual_GP
REGEX=;;(?P<perm>\w+);;;*
MV_ADD=true

[ex_NewManual_MV]
SOURCE_KEY = NewManual_GP
REGEX=(?<NewManual>[^,]+),*
MV_ADD=true

My props.conf file:

[exlogs]
REPORT-ex_fields = ex_fields_extract
REPORT-mvalue = ex_OldManual_MV, ex_NewManual_MV, ex_NewManual_GP, ex_OldManual_GP
SHOULD_LINEMERGE = false
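As a search-time alternative while the chained transforms are being debugged, `rex` with `max_match=0` can do the two-stage split in one pipeline. A sketch based on the event above:

```
... | rex field=_raw "(?s)<OldManualSD>(?<OldBlock>.*?)</OldManualSD>"
| rex field=_raw "(?s)<NewManualSD>(?<NewBlock>.*?)</NewManualSD>"
| rex field=OldBlock max_match=0 "\((?<OldManual>[^)]+)\)"
| rex field=NewBlock max_match=0 "\((?<NewManual>[^)]+)\)"
| table OldManual NewManual
```

Each parenthesized ACE becomes one multivalue entry of the form `A;;CCSW;;;S-1-5-21-...`; a further `rex` over those values can split the permission token from the SID if the two need to be separate fields.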
Forgive my ignorance, as I'm relatively new to Splunk. I'm currently hitting what I *think* is a data type issue, but I'm not quite sure how to proceed. We are using the Splunk Add-on for Unix and Linux to return the set of 'df-metric' values. I would like to set up a simple alert on the metric_name:df_metric.UsePct value, alerting when the value exceeds 85%. I'm able to run this query and return data using an equality operator on that value:

index="linuxlogs" sourcetype="df_metric" host="ip-xxx-xx-xx-x" Filesystem = "/dev/xvda1" "metric_name:df_metric.UsePct"=8

...however I'm NOT able to return data when performing a 'greater than' comparison on the metric_name:df_metric.UsePct value like this:

index="linuxlogs" sourcetype="df_metric" host="ip-xxx-xx-xx-x" Filesystem = "/dev/xvda1" "metric_name:df_metric.UsePct">8

Initially I tried manipulating metric_name:df_metric.UsePct with the tonumber() function, thinking I was possibly receiving a string back, however that does not result in the data I would expect to see. If anyone has guidance on traversing the data set returned by df_metric, or any other pointers, I would appreciate it! Thank you! NOTE: I'm using 8 as the value for metric_name:df_metric.UsePct only for testing purposes. This will, of course, need to be adjusted to 85 for the live alert.
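One thing that often bites here: in the base search, `"field">8` on a field whose name contains `:` and `.` may not behave as a numeric comparison, while in `where`/`eval` the field name must be wrapped in single quotes or it is parsed as an expression. A sketch of the alert search under that assumption:

```
index="linuxlogs" sourcetype="df_metric" host="ip-xxx-xx-xx-x" Filesystem="/dev/xvda1"
| where tonumber('metric_name:df_metric.UsePct') > 85
```

If the data actually lands in a metrics index rather than as events, `| mstats` would be the tool instead; the `where`-clause form above assumes event data, which is consistent with the equality search returning results.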
Hello Splunkers, can you please guide me: my assignment_group column is not populating. Is there an issue in how I created the case statement? Please help me correct that part.
I'm trying to configure my indexes.conf to roll all db files based on time: Hot -> Warm (1 day) -> Cold (2 weeks) -> Frozen (6 months). Now I know how to do the cold-to-frozen and frozen-to-thawed parts, but I'm trying to figure out if I can do hot to warm to cold based on time and not size. I found references to a workaround with the following setup:

[main]
maxHotBuckets = 3
maxHotSpanSecs = 86400 (1 day)
maxHotIdleSecs = 86400
maxWarmDBCount = 14
frozenTimePeriodinSecs = 15724800 (6 months)
coldToFrozenDir = <path>
thawedPath = <path>

Will this work to roll buckets from hot to warm in 24 hours, then from warm to cold in 2 weeks? Does anyone see an issue with this?
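One caveat worth flagging as an observation rather than a definitive answer: `maxWarmDBCount` counts warm buckets, not days, so with `maxHotBuckets = 3` more than one bucket can roll to warm per day, and 14 warm buckets may cover much less than two weeks. Also, the conf parser does not accept trailing annotations such as "(1 day)" on a setting line, and the setting name is spelled `frozenTimePeriodInSecs`. A cleaned-up sketch of the same idea:

```
[main]
# roll hot buckets at most daily
maxHotBuckets = 3
maxHotSpanSecs = 86400
maxHotIdleSecs = 86400
# warm retention is bucket-count based, not time based
maxWarmDBCount = 14
# ~6 months
frozenTimePeriodInSecs = 15724800
coldToFrozenDir = <path>
thawedPath = <path>
```

If warm-to-cold must be strictly time-based, sizing `maxWarmDBCount` to roughly (buckets rolled per day x 14) is one approximation to test.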
I had this error when I upgraded from 8 to 8.1 and thought that my upgrade went wrong. I uninstalled my upgraded version and did a clean install of 8.1, but the error still persists whenever I click on "Find More Apps". I did the self-signed certs as instructed in the documentation and updated all the configs (web.conf, server.conf, inputs.conf), but it pulls in a different cert which looks like a default one, and I have no idea where this would be set, or if it is something else, as this is a clean install and has this error out of the box.

Error connecting: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.. Your Splunk instance is specifying custom CAs to trust using sslRootCAPath configuration in server.conf's [sslConfig] stanza. Make sure the CAs in the appsCA.pem (located under $SPLUNK_HOME/etc/auth/appsCA.pem) are included in the CAs specified by sslRootCAPath. To do this, append appsCA.pem to the file specified by the sslRootCAPath parameter.

I tried and followed the suggestion of "appending appsCA.pem" etc., but it is not a solution. I am a little lost here, since all of the recommendations and suggestions didn't work.
Hello, I'm trying to get "0" in my results when there are no events; I only get "No results found".

index=*mysearch
| timechart count as count
| accum count as count

Any ideas?
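`timechart count` fills empty spans with 0 as long as the search returns at least one event in the time range; when it returns none at all there are no rows to display, which is where "No results found" comes from. A sketch covering the common case (the `span` is illustrative):

```
index=*mysearch
| timechart span=1h count AS count
| fillnull value=0 count
| accum count AS running_total
```

If a zero row is needed even for a completely empty search, one common trick is `| stats count`, which returns a single row with count=0 on no events, or appending a `makeresults`-based zero row ahead of the timechart logic.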
Hi guys, I am trying to make a panel with multiple vertical graphs, where each of these graphs can have one or multiple lines of data. Let's say the first graph could be desired/present voltage over time, the second desired/present current over time, etc. Basically I am looking for a way to use multi-series mode so that there is not a separate graph for each line, but merged ones. My desired output should look something like this: Any ideas how to do that? Thanks for your answers.
I have a line chart in which I'm trying to monitor response time for a certain network call. I want to see the average response time, over time, by platform in a line chart. Input data looks something like this:

network call # | response time (ms) | platform
1 | 200 | web
2 | 250 | android
3 | 300 | web
  | 140 | ios

and my current query looks like this:

index=myindex
| search mysearch
| spath response_time
| spath input=request_payload output=platform path=client_properties.platform
| streamstats avg(response_time) as platform_response_time by platform time_window=10m
| chart first(platform_response_time) over _time by platform

This is getting me pretty close, but there's something about it that isn't "right": What can I do to make the lines... better? I don't even know how to phrase this, but there shouldn't be 0 values. The lines shouldn't be jumping up and back down to 0 at every tick. They should be more "straight". The problem, I think, is that I'm creating a point for each interval of time, and there isn't a request for every platform at every interval. Is there a way to group time intervals together in a longer period of time? i.e. there will only be a plot point for the average response time each 5-minute interval? If there are truly 0 requests in 5m from a platform, that should be reflected, but it isn't likely and wouldn't happen so often.
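The jump-to-zero behavior usually goes away by letting `timechart` do the bucketing directly at the desired interval, instead of `streamstats` plus `chart over _time` (which plots one point per raw event timestamp). A sketch of the same query reworked, assuming the field paths from the original:

```
index=myindex mysearch
| spath response_time
| spath input=request_payload output=platform path=client_properties.platform
| timechart span=5m limit=0 avg(response_time) AS avg_response BY platform
```

Buckets with no requests for a platform come out null rather than 0, and the chart's "null value" option can then be set to connect the line or leave a gap instead of dropping it to zero.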
Hi all, I have an application that is divided into different tiers; most of these tiers perform the same functionality but have different loads. My question here is: if these tiers have the same business transactions, will AppDynamics count them as separate business transactions for each tier, or as one transaction spanning multiple tiers?
Hi, I am facing an issue with data ingestion for the Windows security events from the domain controller servers: index=wineventlog source=WinEventLog:Security. Any suggestion or solution here?