All Topics

Before I go and re-invent the wheel, has anyone looked at indexing the results from running an inspection with the CLI version of splunk-appinspect? The --output-file is JSON by default and has a start_time field in it which could be used for the event's _time. And if you run it with --generate-feedback, you get a YAML file which can be converted to JSON using the yq command; that JSON file also has a start_time field that could serve as _time. As for a use-case... I don't know (yet). At this stage, it's really just a "wouldn't it be cool to ..."
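A minimal sketch of how that ingest could look, assuming the Go build of yq and a sourcetype name of my own invention (appinspect:report) — the TIME_FORMAT would need to match whatever start_time actually looks like in your report:

splunk-appinspect inspect my_app.tgz --output-file report.json
yq eval -o=json feedback.yaml > feedback.json    # Go yq; python-yq uses jq syntax instead

props.conf on the ingesting instance:

[appinspect:report]
INDEXED_EXTRACTIONS = json
TIME_PREFIX = "start_time"\s*:\s*"

With INDEXED_EXTRACTIONS = json the report fields come out structured, and TIME_PREFIX anchors timestamp recognition to the start_time value.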
Hi all, can anyone help me with where to retrieve my x-api-key? My application is issuing:

otel-collector_1 | 2021-09-02T05:09:31.554Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "otlphttp", "error": "error exporting items, request to https://pdx-sls-agent-api.saas.appdynamics.com/v1/traces responded with HTTP Status Code 403", "interval": "15.823926909s"}

I got mine from API Clients. Is this the right place?
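For what it's worth, a 403 from the otlphttp exporter suggests the key either isn't valid or isn't being sent at all. A sketch of the collector config with the key passed as a header, assuming it's supplied via an API_KEY environment variable (the header name and env var are assumptions, not something from the AppDynamics docs):

exporters:
  otlphttp:
    endpoint: https://pdx-sls-agent-api.saas.appdynamics.com
    headers:
      x-api-key: ${API_KEY}    # the collector expands ${...} from the environment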
Hi All, one of our indexers is going down very frequently, and I have observed the errors below in the dmesg logs:

Out of memory: Kill process 20910 (splunkd) score 801 or sacrifice child
Killed process 20914 (splunkd) total-vm:86320kB, anon-rss:9872kB, file-rss:0kB, shmem-res:0kB
splunkd: page allocation failure: order:2, mode:0x35600d0
CPU: 2 PID: 20914 Comm: splunkd Not tainted 3.10.0-693.11.6.el7.x86_64 #1

Can you please help me with this issue? Thank you
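Those messages mean the kernel OOM killer is terminating splunkd because the host has run out of physical memory. A few generic Linux checks to see what is consuming it before tuning anything in Splunk:

free -m                              # total vs. available memory and swap
ps aux --sort=-rss | head -10        # processes ranked by resident memory
dmesg -T | grep -i "out of memory"   # timestamps of every OOM-killer event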
Hi, I am moving a client from on-prem to cloud. One of the apps they use is TA-windows_eventsize_reducer (app number 3500). On Splunkbase that app is marked as Splunk Cloud compatible, but it is also marked as Archived. Any idea why it has been archived? Also, how can I install it on Splunk Cloud? The usual "Manage Apps > Browse More Apps > Install" doesn't work because it can't find the app, probably because it is inactive. I tried downloading and then installing from file, but got rejected and told to use the vetting approach. Should I vet it, or should it be loadable from Manage Apps? Thanks
Hi, I'm having an odd issue. I made some field extractions and validated them through Regex101, but only some of the fields are being extracted, not all. Initially they all work, and then some disappear. It's a single regex string, so if there were an issue I don't know why some fields would extract but not others. And the sourcetype has not changed. Does anyone have a solution for this or any inkling of what might be going on? For reference, here's my regex:

"log":\s"(?<log_source>[^\s]+)\s(?<ISO8601>[^\s+]+)\s+(?<log_level>[^\]]+)\s\[(?<exchangeId>[^\]]+)\]\s(?<RuleType>[^\.]+)\.\[(?<RuleName>[^\]]+)\]\s-\s(?<http_method>[^\|]+)\|(?<site>[^\|]+)\|(?<uri_path>[^\s\?"|]++)\|(?<status>[^\|]+)\|{\\"error_description\\":\\"(?<error_description>[^"]+)\\\",\\"error\\":\\"(?<error>[^\\]+)\\"}\\n

Log:

"log": "/opt/instance/log/access.log 2021-09-01T14:40:17,493 WARN [wUJHboi800nOHINLKnugbF1rBkcQ] Rule.[ErrorCapture] - POST|site.com|/oauth|400|{\"error_description\":\"Authorization code is invalid or expired.\",\"error\":\"invalid_grant\"}\n"

And it seems to be only the site field that isn't extracting, for whatever reason.
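One way to make this less fragile, sketched with a made-up sourcetype name and deliberately simplified patterns (not a drop-in replacement): split the monolithic regex into independent EXTRACT stanzas so a single non-matching group can't take every field down with it:

[my:sourcetype]
EXTRACT-request = -\s(?<http_method>[A-Z]+)\|(?<site>[^|]+)\|(?<uri_path>[^|]+)\|(?<status>\d+)\|
EXTRACT-error = \\"error\\":\\"(?<error>[^\\"]+)

Each stanza succeeds or fails on its own, which also makes it much easier to spot which piece of the pattern intermittently stops matching.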
This is my Splunk query:

index=xxxxx "searchTerm" | rex "someterm(?<errortype>)" | timechart count by errortype span=1w | addcoltotals labelfield=total | fillnull value=TOTAL | fields - abc,def,total

I am adding the total count of the errors over a week in another row named TOTAL, as depicted in the table below. Here A..., B..., etc. are error names in alphabetical order, and the values are the total number of errors that occurred on that day for that errortype:

_time             A....   B....   C....   D....   E....
2021-08-25        11      22      05      23      89
2021-08-26        15      45      45      13      39
2021-08-27        34      05      55      33      85
2021-08-28        56      08      65      53      09
2021-08-29        01      06      95      36      01
TOTAL             117     86      265     158     223

I want these columns sorted by the value in the TOTAL row in descending order, like 265, 223, 158, 117, 86, but I always get them in the alphabetical order of the errortype, like A..., B..., C... How can I improve this query to get the sorted result I want?
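A sketch of one common approach (column names taken from the table above; I've also moved the TOTAL label into the _time column so it lines up with that table): transpose so each errortype becomes a row, sort the rows numerically by their TOTAL value, then transpose back:

index=xxxxx "searchTerm" | rex "someterm(?<errortype>)"
| timechart count by errortype span=1w
| addcoltotals labelfield=_time label=TOTAL
| transpose 0 header_field=_time
| sort 0 - num(TOTAL)
| transpose 0 header_field=column
| rename column AS _time

After the first transpose each errortype is a row with a TOTAL column, so the sort reorders errortypes rather than dates; the second transpose restores the original layout with the columns now in descending TOTAL order.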
Hi, I'm trying to use ITSI with KPIs from IIS servers. The IIS web servers each host several different sites, and in ITSI I want to break this out into different services. Splunk is ingesting the IIS logs successfully; the data includes the hostname of the server and the site name.

In ITSI I've set up a new service. For the entities of this service I've made a rule to match both the alias fields 'host' and 'site' (and made sure both these fields are set on the entity record in ITSI). Then I set up a new KPI using a base search to count the number of 5xx errors. This is set to split by the field 'host' (the website is hosted on two separate machines) and filtered by service entities in the field 'site'.

This seemed to work until I started creating other services for other websites. I wanted to also monitor the non-production version of this website, so I created a service as above but using the non-prod hostnames; the site name, however, is the same. The result was really weird: the KPI then listed both the production and non-production servers in the entity list for this service (though they are not in the Entities list for that service), and ITSI started giving warnings about duplicate aliases assigned to entities.

At this point I thought maybe I was defining 'site' on the entity in the wrong way, so I moved site from being an alias to Info. But unfortunately ITSI doesn't seem to be able to filter by info fields. I guess the issue I'm facing is that I need a way to filter an entity by two fields: the hostname of the server(s) it's on and the 'site' name in IIS. How is this achieved? Thanks, Eddie
I'm attempting to determine how to identify the distribution of the Java Agent deployed on a server. I know that for the agents that support Java 1.7 or lower you can unpack javaagent.jar and look at /META-INF/MANIFEST.MF: if it's the IBM-specific agent, Implementation-Version will read "Server IBM Agent #"; if it's the Sun+JRockit agent, it will read "Server Agent #". However, how do you tell whether it's the Sun+JRockit agent that supports Java 1.7 and lower, or the one that supports Java 1.8 or higher? I have unpacked javaagent.jar for both agents and run a diff on the entire file structure, and it came back with no results.
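For reference, the manifest check described above can be done without fully unpacking the jar, and the class-file bytecode level offers another way to distinguish the Java 7 and Java 8 builds. Both commands are standard tools; the class path inside the jar is hypothetical, so substitute one that actually exists in your agent:

unzip -p javaagent.jar META-INF/MANIFEST.MF | grep Implementation-Version

# Bytes 7-8 of a class file give the bytecode major version: 0x33 = Java 7, 0x34 = Java 8
unzip -p javaagent.jar com/example/SomeAgentClass.class | head -c 8 | xxd

If the diff of the two trees really is empty, the bytecode level is the more reliable signal, since a Java 8-only build must contain at least some version-52 class files.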
Our Splunk retention time is only 35 days; after that we get a "No results found" message on the dashboard. We want to set an alert for when the dashboard shows "No results found", i.e. when the data is past the retention time. How can we set up that alert on the front end?
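A sketch of a scheduled alert for this, with the index and time range as placeholders: run the same search the dashboard uses, reduce it to a count, and keep a row only when that count is zero:

index=my_index earliest=-24h
| stats count
| where count=0

Schedule it and set the trigger condition to "number of results is greater than 0"; the where clause leaves exactly one row when the underlying search returned nothing, so the alert fires precisely in the "No results found" case.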
I'm looking to update an artifact in a custom function. The closest thing that's supported is being able to update a container, or to delete and re-add artifacts, which is not what we want to do (the initial artifact must stay intact). Is there any workaround for updating artifacts in a CF, or are there any plans to add update_artifact to the supported Custom Function API commands?
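A possible workaround, sketched on the assumption that the custom function environment exposes phantom.requests and build_phantom_rest_url the same way playbooks do (worth verifying on your SOAR version): update the artifact in place through the REST API.

import phantom.rules as phantom

def update_artifact_cef(artifact_id, cef_updates):
    # POST /rest/artifact/<id> modifies an existing artifact rather than replacing it
    url = phantom.build_phantom_rest_url('artifact', artifact_id)
    response = phantom.requests.post(url, json={'cef': cef_updates}, verify=False)
    return response.status_code == 200

This keeps the original artifact record (and its ID) intact, which seems to be the requirement here.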
Our dashboard contains heavy CSS, heavy HTML, and heavy Splunk queries, and I want to improve its performance. 1. Is there any way to reduce loading time other than base searches and span? 2. Does using JavaScript for CSS really help performance?
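Since base searches came up, a minimal SimpleXML sketch of the pattern, with placeholder names: the expensive search runs once, each panel post-processes its results, and the fields command keeps the base result set small:

<search id="base">
  <query>index=my_index sourcetype=my_sourcetype | fields status, host</query>
</search>
<panel>
  <chart>
    <search base="base">
      <query>stats count by status</query>
    </search>
  </chart>
</panel>

Beyond that, trimming unused fields in the base search and narrowing the time range usually matter far more than moving CSS into JavaScript, which mainly changes where the styling cost is paid rather than removing it.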
How can I add new fields and/or rename existing fields in Global Account Settings, which by default just has username/password inputs? Something like client ID, client secret, etc. I cannot add the password/client secret as a data input parameter, as they get stored in plain text when added via the system user interface (Settings > Data Inputs). I cannot make them global parameters either, as we need to support multiple environments, each with a different set of data. Any help would be appreciated. @splunk
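If the add-on is built on the Add-on Builder / UCC framework (an assumption on my part), the account page is driven by the entity list under the account tab in globalConfig.json; a sketch of adding encrypted OAuth-style fields there:

{
  "type": "text",
  "label": "Client ID",
  "field": "client_id",
  "required": true
},
{
  "type": "text",
  "label": "Client Secret",
  "field": "client_secret",
  "encrypted": true,
  "required": true
}

Fields marked "encrypted": true are stored via the credential store rather than in plain text, and because these live on the account rather than the input, you can define one account per environment.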
Please help me with an SPL query to locate correlation searches that are in trouble and not working right, for example because of a missing macro or similar. Thank you very much in advance.
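A starting-point sketch: the scheduler logs record every saved search that fails to run, which catches problems like missing macros (the exact status values can vary by version, so inspect them first):

index=_internal sourcetype=scheduler savedsearch_name=* status!=success
| stats count latest(_time) as last_seen by savedsearch_name, status
| convert ctime(last_seen)

Filtering the result to your correlation search names then shows which ones are erroring and how recently.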
Hello Splunk Community, would you have any advice or recommendations on how to use Trumpet with an organizational CloudTrail? Our organization currently has individual CloudTrails deployed in each account, but with the introduction of Control Tower this design would become redundant. Any advice on how best to deploy Trumpet with an organizational CloudTrail is appreciated.
Hi, I have data like the sample below:

Date       Time      val1 val2 val3 ...
21/08/31   01:00:00  2 1 2 2 2 2 2 1 1 2 69 1 0 2 0 0 3 3
21/08/31   02:00:00  1 1 0 1 1 1 0 0 0 0 0 0 0 1 0 1 1 0
21/08/31   03:00:00  2 1 1 2 2 2 0 1 0 2 1 0 0 2 0 1 2 2
21/08/31   04:00:00  1 1 1 1 1 1 67 0 1 150 205 0 169 312 0 0 2 2
21/08/31   05:00:00  1 0 1 1 1 1 0 0 0 70 1 2 0 1 1 1 2 58

I can calculate the max value for a specific date and time and show it as a single-value panel on a dashboard. What I'd like to do is find the max value for the latest time reported in the data for a date:

index=my_index sourcetype=my:sourcetype Date="21/08/31" Time="03:00:00"
| eval max_val = max(val1, val2, val3, val4 ....)
| stats max(max_val) as mymax

So in the sample, where the latest Time is "05:00:00", is there a way I can derive that value rather than hard-coding it? Thanks in advance for any thoughts.
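A sketch of one way to do it, assuming Time is a fixed HH:MM:SS string (so lexical and chronological order agree): compute the latest Time with eventstats and filter to it before taking the max:

index=my_index sourcetype=my:sourcetype Date="21/08/31"
| eventstats max(Time) as latest_time
| where Time=latest_time
| eval max_val = max(val1, val2, val3, val4)
| stats max(max_val) as mymax

The eventstats pass attaches the latest Time seen for that date to every event, and the where clause keeps only the events from that reporting interval; extend the eval to cover every val field you have.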
Hi team, I am creating a query to fetch a unique ID from different events which have different statuses. If two log events have the same unique ID, one with status="START" and one with status="END", then that application has completed one successful iteration; otherwise it should count as an error. I created a query but can't work out how to compare the correlationId across different events. Can anyone please help with a query to compare the correlationId from different events, building on the query below?

index="dev" | rex "\"Status\\\\\"\s:\s\\\\\"(?<Status>[^\\\]+)" | stats count by applicationName,Status | where Status in("START","END")

Below are the logs for the START and END events:

log: [2021-09-01 04:14:10.216] INFO api [[PythonRuntime].uber.12772: [tyt-autoencoding-dev].get-terms-from-oc/processors/1.ps.BLOCKING @f089563] [event: 80961137-6734-4f7f-8750-3d27cdf2a4eb]: { "correlationId" : "80961137-6734-4f7f-8750-3d27cdf2a4eb", "Status" : "START", "priority" : "INFO", "category" : "com.tayota.api", "elapsed" : 0, "timestamp" : "2021-09-01T04:14:10.215Z", "applicationName" : "Toyato Encoding API", "applicationVersion" : "v1", "environment" : "Development", }

log: [2021-09-01 04:14:10.216] INFO api [[PythonRuntime].uber.12772: [tyt-autoencoding-dev].get-terms-from-oc/processors/1.ps.BLOCKING @f089563] [event: 80961137-6734-4f7f-8750-3d27cdf2a4eb]: { "correlationId" : "80961137-6734-4f7f-8750-3d27cdf2a4eb", "Status" : "END", "priority" : "INFO", "category" : "com.tayota.api", "elapsed" : 0, "timestamp" : "2021-09-01T04:14:10.215Z", "applicationName" : "Toyato Encoding API", "applicationVersion" : "v1", "environment" : "Development", }

Thanks in advance.
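A sketch of the grouping step, assuming a correlationId rex that mirrors the Status one (adjust the escaping to your actual raw events): collect the statuses per correlationId and test whether both START and END are present:

index="dev"
| rex "\"Status\\\\\"\s:\s\\\\\"(?<Status>[^\\\]+)"
| rex "\"correlationId\\\\\"\s:\s\\\\\"(?<correlationId>[^\\\]+)"
| stats values(Status) as statuses by correlationId, applicationName
| eval outcome=if(isnotnull(mvfind(statuses, "START")) AND isnotnull(mvfind(statuses, "END")), "SUCCESS", "ERROR")
| stats count by applicationName, outcome

mvfind returns null when the value is absent from the multivalue field, so the eval marks an iteration SUCCESS only when both statuses were seen for that correlationId.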
Hi, I tried the logic below to replace the "No results found" message with a custom message, but after adding the appendpipe count at the end of the query the logic is not working as expected. Can anyone please help?

base search | fields version, time | appendpipe [stats count | where count=0]

<done>
  <condition match="'job.resultCount' == 0">
    <set token="show_html">true</set>
  </condition>
  <condition>
    <unset token="show_html"/>
  </condition>
</done>
<chart rejects="$show_html$">
...
</chart>
<html depends="$show_token$">
  <div style="font-weight:bold;font-size:150%;text-align:center;color:red">
    No data, Please check later
  </div>
</html>
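Two things in the snippet above would stop the message from ever showing: the appendpipe subsearch adds a row whenever the search is empty, so job.resultCount can never be 0, and the html block depends on $show_token$ while the condition sets $show_html$. A sketch of a corrected version (structure only; your own search goes in the query):

<search>
  <query>base search | fields version, time</query>
  <done>
    <condition match="'job.resultCount' == 0">
      <set token="show_html">true</set>
    </condition>
    <condition>
      <unset token="show_html"/>
    </condition>
  </done>
</search>
<chart rejects="$show_html$">
...
</chart>
<html depends="$show_html$">
  <div style="font-weight:bold;font-size:150%;text-align:center;color:red">No data, please check later</div>
</html>

Dropping the appendpipe lets the result count genuinely hit zero, and the matching token name lets the html panel appear when it does.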
Hey everyone! I'm in the process of investigating a Splunk instance that I have inherited. I've got a decent handle on things, but I am seeing that the majority of our index is being eaten up by logs from our multiple Active Directory domain controllers.

Digging around, I see that the local inputs.conf file for the universal forwarder on the DCs is empty, and btool confirms they are not pulling in config from other places. There is, however, a deploymentclient.conf file with a single targetUri in it. What's interesting, though, is that the listed targetUri is not a server name that is present in our environment. It's close, but not exact. Further, I see no signs that this particular domain controller has ever checked in with our deployment server.

I know for a fact that we manually installed the universal forwarder on the domain controller, and that the correct deployment server and indexer were provided at install time. So what might have caused the targetUri to change? I'm thinking it may be something on the deployment server itself, but I don't know where to look for that setting or how the deployment server might have updated it. I'm still getting my head wrapped around just what the deployment server itself is doing, in fact. But I am worried that with a full-throttle, out-of-the-box universal forwarder we are likely collecting way more information than we actually want.
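A couple of checks on the forwarder itself that may help pin down where the value comes from (standard Splunk CLI; the server name below is a placeholder):

splunk btool deploymentclient list --debug    # shows which .conf file sets targetUri
splunk show deploy-poll                       # what the UF is actually polling right now

And the expected shape of a correct deploymentclient.conf, for comparison:

[target-broker:deploymentServer]
targetUri = deploy.example.com:8089

If btool points at an app-level deploymentclient.conf rather than system/local, some package or installer option wrote it, which would explain a near-miss hostname that nobody typed by hand.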
This posting did not let me share the search string due to it containing HTML code etc. Any advice is appreciated. Thank u 
Hello Splunkers! I wanted to ask if anyone out there has some SPL that I can use as an alert to detect failed and successful logins from countries other than the United States (!=United States)? Thank you for your help!
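A starting-point sketch, with the index, sourcetype, and field names as placeholders for whatever your authentication data actually uses: geolocate the source IP with iplocation and keep only non-US results:

index=auth (action=success OR action=failure)
| iplocation src_ip
| where isnotnull(Country) AND Country!="United States"
| stats count by user, src_ip, Country, action

The isnotnull guard drops private/RFC 1918 addresses that iplocation can't geolocate, so the alert doesn't fire on internal traffic.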