All Topics

Hi, I am an occasional Splunk developer. It seems like maybe once a year management asks me to do something with Splunk, so I don't get to use it enough to be proficient. Today I was assigned a task where management wants a particular report posted to a Slack channel. After about two hours of web searches and reading online documentation, I still have no clue how I would go about getting a link to this Splunk report posted to a Slack channel. Any help will be appreciated.
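One common approach (not from the post, so treat it as an assumption to verify): install the free "Slack Notification Alert" add-on from Splunkbase, create an incoming webhook in Slack, configure the webhook URL on the add-on's setup page, and attach the Slack action to a scheduled report. A minimal sketch of the resulting savedsearches.conf stanza, with the stanza name, search, schedule, and channel as placeholders:

```ini
[My Management Report]
search = index=main ... your report search here ...
cron_schedule = 0 8 * * 1
enableSched = 1
# Slack alert action from the Slack Notification Alert add-on (assumed installed);
# $results_link$ expands to a link back to the report results in Splunk
action.slack = 1
action.slack.param.channel = #management-reports
action.slack.param.message = Weekly report is ready: $results_link$
```

Exact parameter names can vary between add-on versions, so check the add-on's own README after installing it.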
Hello, I have observed that the "top" command seems to calculate wrong percentage values when used on a multivalue field, i.e. a field which may contain multiple values. Example: if I run the following search:

```
| makeresults
| eval test="multivalue1,multivalue2|singlevalue"
| eval test = split(test, "|")
| mvexpand test
| eval test = split(test, ",")
| top test
```

I get the following result:

```
test         count  percent
singlevalue  1      50.000000
multivalue2  1      50.000000
multivalue1  1      50.000000
```

This seems wrong, because the "percent" values sum to 150%. It looks like Splunk's "top" command expands the input, which consists of 2 events, into the 3 rows it outputs, but the percentages are calculated against the original 2 events, i.e. as <count> / <number of input events>, the latter being 2 here. Shouldn't the percentages rather be calculated as <count> / <number of expanded values>, the latter being the correct 3 here?

If I modify the test query so it expands the multivalue field before the "top" command, the result is as expected:

```
| makeresults
| eval test="multivalue1,multivalue2|singlevalue"
| eval test = split(test, "|")
| mvexpand test
| eval test = split(test, ",")
| mvexpand test
| top test
```

```
test         count  percent
singlevalue  1      33.333333
multivalue2  1      33.333333
multivalue1  1      33.333333
```

My question: is this a bug or a feature? If the former, should I report it?
We are setting up the SSL certificate monitor extension and find that it executes far too frequently. There is no reason to poll something whose time frame is measured in days: certificate expiration occurs on a given day, so checking more than once a day does not make sense. Is there any way to increase the execution interval to something like a day (86400 seconds)?
We have applications running on the OpenShift platform (v3.11) whose logs are written to STDOUT. We have set up Splunk Connect for Kubernetes to forward logs from OpenShift to our Splunk instance. The setup is working and we can see and search logs in Splunk, but every line of an application log is displayed as an individual event in Splunk search. For example, if there is a JSON string in the logs, each value in the JSON output is displayed as its own event, as shown in the screenshot below. Here is the raw log:

```
"sessionInfo" : {
  "userId" : "",
  "sessionId" : "bK7xzM16bpLXvaUGaWIODThJm9A",
  "ipAddress" : "",
  "endpoint" : ""
  }
}
```

I suspect some intermediary component between OpenShift and Splunk is doing this wrapping, which might be throwing off the parser; that's my take, but I'm not entirely sure how Splunk Connect for Kubernetes handles this. Any help or suggestions are greatly appreciated. Thanks in advance!

Note: similar application logs forwarded to the same Splunk instance by a universal forwarder on a Linux machine are displayed in the correct format in Splunk search.
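One possible direction (an assumption, not a confirmed fix): with Splunk Connect for Kubernetes, each container stdout line arrives as a separate event, so line merging has to happen on the collection side rather than in indexer-side props.conf. The chart's Helm values support per-application log stanzas with a multiline first-line pattern. A sketch for values.yaml, with the application name, pod selector, regex, and sourcetype all as illustrative placeholders; check your chart version's own reference for the exact keys:

```yaml
splunk-kubernetes-logging:
  logs:
    my-app:
      from:
        pod: my-app          # pods whose stdout should be merged
      multiline:
        firstline: /^\{/     # a new event starts at a line beginning with "{"
      sourcetype: my_app_json
```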
Hi, I am looking to convert the following time to UTC format: 8/26/20203:47PM-06:00. Ultimately I want to convert it to the work week of the year, with Monday as the start day. Any help is appreciated.
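One possible SPL sketch (not from the post): split the string with rex first, because strptime's %Y could otherwise swallow the digit that starts the clock time, then reassemble and parse. The field names are illustrative, and %V is the ISO week number with Monday as the first day of the week:

```spl
| makeresults
| eval raw="8/26/20203:47PM-06:00"
| rex field=raw "(?<d>\d{1,2}/\d{1,2}/\d{4})(?<clock>\d{1,2}:\d{2}[AP]M)(?<oh>[+-]\d{2}):(?<om>\d{2})"
| eval t=strptime(d." ".clock." ".oh.om, "%m/%d/%Y %I:%M%p %z")
| eval utc=strftime(t, "%Y-%m-%dT%H:%M:%S")
| eval workweek=strftime(t, "%V")
```

Note that strftime renders in the search-time time zone configured for your user, so getting a true UTC string may additionally require setting that time zone to UTC or offsetting the epoch value accordingly.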
Hi, I have a string in the following format:

```
msg: Logging interaction event { eventId: '12dea8c0-dfb2-4988-9e97-314dd6243918', eventAction: 'Failed', eventType: '123event', eventSubtype: '1234eventsub', domainName: 'common', appName: 'authentication', containerName: 'root', containerVersion: '0.0.973' }
```

I am unable to extract eventType and eventSubtype because of the text "Logging interaction event". How can I get rid of this text and extract these fields?
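A hedged sketch of one way to do this: there should be no need to remove the prefix text, since rex can anchor on the field labels themselves. This assumes the text lives in a field called msg (use field=_raw otherwise) and that the single-quoted values never contain quotes:

```spl
... | rex field=msg "eventType: '(?<eventType>[^']*)'"
    | rex field=msg "eventSubtype: '(?<eventSubtype>[^']*)'"
```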
In Alert Manager, is it possible to configure emails so that the "Resolved" email has a different subject from the "Firing" email? If so, could anybody provide an example of such a configuration?
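If this refers to Prometheus Alertmanager's email receiver (an assumption based on the "Firing"/"Resolved" wording), the Subject header is templatable per receiver, and the template variable .Status distinguishes the two states. A sketch, with the receiver name and address as placeholders:

```yaml
receivers:
  - name: team-email
    email_configs:
      - to: team@example.com
        headers:
          # .Status is "firing" or "resolved"; the subject switches on it
          Subject: '{{ if eq .Status "firing" }}[FIRING]{{ else }}[RESOLVED]{{ end }} {{ .CommonLabels.alertname }}'
```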
Hi all, I've succeeded in enabling the "Create New Dashboard" button only for the admin user. But what if I want to enable it for all users in the admin role? I'm trying to work with roles:

```
this.collection.roles
this.roles.id = /services/authentication/Roles/admin
```

but nothing works. Any suggestions? Thanks, Fabrizio
Hello, our environment has a Linux server that continually gets hit with brute-force attacks, and I am trying to figure out where they are coming from. Since our servers are behind a NATed firewall, I need to see the failed logon attempts and match the server IP and port number to the DestinationIP and natSourcePort.

I am trying to use a subsearch: one search scans all of our indexes for failed passwords from the server IP and should return the port number; a second search then matches the IP and port numbers from the first search and returns the top source addresses.

Here is what I have so far; any help would be appreciated:

```
index="palo" ( PA_natDestinationIP=<Server IP> AND PortNumber=PA_natSource_Port )
    [ search index="*" "Failed password" IP=<ServerIP> | return PortNumber ]
| stats count by PA_SourceAddress
| sort - count
```
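A hedged sketch of the subsearch shape, using the field names from the post as-is (they are assumptions about the actual data): the subsearch has to rename its field to the outer field's name so that its implicit format output becomes a usable filter clause like (PA_natSource_Port="22") OR (PA_natSource_Port="3389"):

```spl
index="palo" PA_natDestinationIP="<server IP>"
    [ search index=* "Failed password" IP="<server IP>"
      | stats count by PortNumber
      | rename PortNumber AS PA_natSource_Port
      | fields PA_natSource_Port ]
| stats count by PA_SourceAddress
| sort - count
```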
We have correctly reporting universal forwarder agents (Windows in this case), but whenever a local disk on the server running the agent reaches 100% occupancy (even for a little while), data stops coming in from the UF agent. If you look on the local server with Performance Monitor (LogicalDisk\% Free Space), the full disk in question shows 0.000. But in Splunk no data at all comes in anymore (not even that 0.000; see the picture on the right side), and our dashboard graphs of disk occupancy go blank as the data stops flowing (see the picture on the left side). When you create space on the disk, even if it is still 99% full, data starts flowing in again.

How can one work around this in Splunk, so that when no data comes in where the value was previously 99%, Splunk shows 100% instead of nothing at all? This is the SPL in question (see the bottom of the picture for the table output):

```
index="uf_basickpi" source="Perfmon:LogicalDisk" counter="% Free Space" instance!=HarddiskVolume* instance!=_Total host=SERVERNAME
| lookup resource_thresholds.csv resource_name as host, resource_metric as counter, resource_disk_instance as instance output resource_threshold_warning, resource_threshold_critical
| eval spaceFree=round(Value,0)
| eval spaceUsed=100-spaceFree
| timechart span=5m avg(spaceUsed) as "% Space Used", latest(resource_threshold_warning) as "Warning", latest(resource_threshold_critical) as "Critical", avg(spaceFree) as "% Space Free" by instance
```
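One workaround sketch (an assumption, not a confirmed fix): let timechart produce its usual columns and then fill the empty buckets with 100, on the theory that a reporting gap for a previously full disk means 100% used. Note that fillnull cannot distinguish "disk full" from "forwarder down", and it will also fill buckets from before an instance existed, so scope the time range accordingly:

```spl
index="uf_basickpi" source="Perfmon:LogicalDisk" counter="% Free Space"
    instance!=HarddiskVolume* instance!=_Total host=SERVERNAME
| eval spaceUsed=100-round(Value,0)
| timechart span=5m avg(spaceUsed) AS pctUsed by instance
| fillnull value=100
```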
Hello folks, I have a scripted input using -passAuth in my inputs.conf. I've been testing the script on a *nix VM (Splunk 8.1), where it worked fine. To double-check the passAuth variable, I printed it to a file, copied the token, and ran:

```
splunk backup kvstore -token $token_from_scripted_input
```

This (or any other command that requires auth) worked like a charm. Doing the same on Windows with Splunk 7.3.3 does not work at all: Splunk keeps asking for a username/password even when I use the -token option. Any ideas?
I have not been able to find much information on configuring DB Connect in Splunk Cloud. I've seen that people have installed the app on a heavy forwarder to send data to Splunk Cloud, but I suspect that was before it was compatible with Splunk Cloud, because I now see it as an available app to install there.

Has anyone set it up this way? How would I go about configuring the connection between our on-premises servers and Splunk Cloud? I'm thinking that if we had to install it on a heavy forwarder, we would lose some of the features DB Connect offers, like dbxquery for dashboards.
In an input dropdown, I am using <selectFirstChoice>true</selectFirstChoice> to make the first dynamic value the default, but I also have static values in the dropdown, and they take precedence: a static value is always the preloaded default. Is there any way to make the default come only from the dynamic values?
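Static choice elements are always listed ahead of search-driven ones, so one workaround sketch (an assumption about your dashboard, with illustrative index and field names) is to drop the static choice elements and append the static entries to the populating search instead; the first dynamic row then becomes the default:

```xml
<input type="dropdown" token="host_tok">
  <selectFirstChoice>true</selectFirstChoice>
  <search>
    <!-- dynamic values first, then the former static choice appended as a search row -->
    <query>index=_internal | stats count by host | fields host
           | append [| makeresults | eval host="ALL" | fields host]</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
</input>
```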
Hi, I'm brand new to Splunk, coming from a background using Prometheus metrics. I've been reading through the Splunk docs but I'm struggling to get some of the basics down. I was looking at https://docs.splunk.com/Documentation/Splunk/8.1.1/Metrics/GetMetricsInOther, which talks about HTTP/JSON, but is this saying my application would offer an HTTP endpoint for polling/pulling, or that my application needs to call an endpoint on the Splunk server? I've always worked with the poll paradigm before, and one thing I can't seem to find is examples of how to instrument metrics in my own code. In particular, 90% of our system (which comprises dozens of modules) is written in C++, and I would really love to see a code example of instrumenting and exposing a metric (e.g. the number of calls to MyMethod()) in C++, for consumption by Splunk Enterprise. Many thanks for any help.
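On the pull-versus-push question: with the HTTP Event Collector (HEC) ingestion path that page describes, your application pushes metrics to Splunk rather than exposing an endpoint for Splunk to scrape. In C++ that typically amounts to an HTTPS POST (with any HTTP client library) of a JSON body to https://<splunk-host>:8088/services/collector with an "Authorization: Splunk <token>" header, targeting a metrics index. A sketch of the payload, with source, metric name, and dimension as illustrative placeholders:

```json
{
  "time": 1610000000,
  "event": "metric",
  "source": "my_cpp_app",
  "fields": {
    "metric_name:mymethod.calls": 42,
    "module": "auth"
  }
}
```

Counting the calls is then your application's job (e.g. an atomic counter incremented in MyMethod() and flushed periodically); Splunk only receives the resulting measurements.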
I have single-value and line-chart dashboard panels (there are more panels as well) that are linked to a shared time range picker. In my case, data is loaded into Splunk every 1 minute. How can I configure the 'latest' time of these panels to snap to the minute when I select "Last 15 minutes" (latest=now) from the presets in the time range picker, so that the panels don't display 0? On selecting "Last 15 minutes" from the shared picker, 0 is displayed in the single-value and line-chart visualizations.

```
index=index1 sourcetype=sourcetype_name | timechart span=1min sum(call_rate)
```
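The "Last 15 minutes" preset ends at now, so the trailing, still-filling minute has no data yet. One sketch of a workaround (assuming you can detach these panels from the shared picker) is to hard-code a minute-snapped range on the affected panels with time modifiers:

```xml
<search>
  <query>index=index1 sourcetype=sourcetype_name | timechart span=1min sum(call_rate)</query>
  <!-- snap both ends to whole minutes so the trailing, still-empty minute is excluded -->
  <earliest>-15m@m</earliest>
  <latest>@m</latest>
</search>
```

If the panels must stay on the shared picker, an alternative is to keep them linked and simply drop the last bucket in SPL, e.g. by appending something like `| head 15` after sorting, though that is a rougher fix.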
Hi there, I added the test data as the admin user and then logged out to sign in as a power user, but I can't see the data there. Please advise. Thanks.
Hi all, I have a use case to transform the gzipped binary portion of an HTTP ResponseCode into readable content. Is this something that's possible to do, and if not, what's the best workaround? Example attached. Many thanks.
Hi all, I am trying to display panel One when "One" is selected from the dropdown, panel Two when "Two" is selected, and so on. It was working perfectly fine until last week, but now the issue is: when I select dropdown option One, no panel is displayed; when I select Two, panel One is displayed; and when I then select Three, the panel for the previously selected option (Two) is shown. Can someone urgently help, please? My dashboard is completely messed up now, and I can't figure out where the actual problem is. Many thanks in advance.
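That kind of one-selection-behind behavior often comes from tokens being set outside the input's own change handler. A sketch of the usual Simple XML pattern (panel contents and token names are illustrative, not taken from the post), where each condition sets its own token and unsets the others, and each panel depends on exactly one token:

```xml
<input type="dropdown" token="view">
  <label>Panel</label>
  <choice value="one">One</choice>
  <choice value="two">Two</choice>
  <change>
    <!-- set only the chosen panel's token; unset the rest -->
    <condition value="one">
      <set token="showOne">true</set>
      <unset token="showTwo"></unset>
    </condition>
    <condition value="two">
      <set token="showTwo">true</set>
      <unset token="showOne"></unset>
    </condition>
  </change>
</input>

<row>
  <panel depends="$showOne$"><title>Panel One</title></panel>
  <panel depends="$showTwo$"><title>Panel Two</title></panel>
</row>
```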
Hello, can we start and stop our Controller in the SaaS environment? If possible, please guide me through the process. ^ Edited by @Ryan.Paredez to improve the title.
Hi, I've got a problem with DB Connect and the rising column checkpoint value. The only rising column available to me is a datetime column with millisecond precision. I first tried using it as-is, but it did not work over time: DB Connect stopped indexing with an error saying that it failed to convert nvarchar to datetime2. After a bit of research on the forums here, I found out that DB Connect uses SimpleDateFormat, which has issues with milliseconds.

The possible SimpleDateFormat issue made me decide to create a new column based on the LastUpdate datetime column, converting all datetimes to epoch time (bigint). This works the first time Splunk ingests the data, but after the first ingestion the next rising column checkpoint value is set to 20210105075923 (= Friday 8 June 2610 06:04:35.923), while the highest value in the database table is 1609832657526 (= Tuesday 5 January 2021 07:44:17.526). Anyone have any idea why this happens, and how to fix it?

Edit: I can see that the checkpoint value Splunk stores corresponds to the date, or at least it seems to, but it is not an epoch time. Could it be that it still thinks I am using the datetime from the previous setting?

Edit 2: Creating a new input using the same checkpoint field seems to have solved my issue.