All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, while testing SmartStore I have a couple of questions.
1. What does cache size mean? As I understand it, it is the amount of storage that hot and warm buckets can occupy. Is that correct?
2. Let's say I set max_cache_size=1TB and there is only one index. When will a warm bucket be evicted? Is it evicted at the moment max_cache_size is exceeded, when buckets get older than hotlist_recency_secs, or both?
3. Again with max_cache_size=1TB, let's say I run an "All time" search, which leads to fetching warm buckets from remote storage. Because of the fetched buckets, max_cache_size will be exceeded. What happens when the search finishes? Are all the warm buckets going to be evicted?
I have read the docs many times and tested this myself, but the bucket behavior is still not clear to me. I would appreciate it if anybody could answer the questions above. Thanks.
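For context, a minimal sketch of the settings involved, as I understand the cache manager; the values below are hypothetical, and the sizing lives in server.conf on each indexer:

```ini
# server.conf -- SmartStore cache manager sizing (hypothetical values)
[cachemanager]
# Total local cache budget in MB. Eviction is triggered by cache pressure
# (crossing this limit, less eviction_padding), not by a timer.
max_cache_size = 1000000
# Recently indexed buckets inside this window are protected from eviction;
# hotlist_recency_secs does not itself evict anything.
hotlist_recency_secs = 86400
```

If that reading is right, eviction is space-driven, and hotlist_recency_secs only marks which buckets are poor candidates for eviction.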
Hi team, I would like to extract the following fields from vCenter logs that are being sent to Splunk on a dedicated port. Sample log below:
2021-01-18T06:21:11.752139+00:00 test101 sshd[21656] Accepted password for root from 76.87.981.72 port 49881 ssh2
I am already using Splunk_TA_vcenter from the Splunk Add-on for VMware, but no luck with the extraction. I need to extract the following fields:
Field name    Field value
app           sshd
user          root
src_ip        76.87.981.72
dest          test101
action        success
tag           authentication
Thanks in advance.
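As an interim workaround (this is not the TA's own extraction, just a search-time sketch tuned to the one sample line above), a rex along these lines could pull those fields:

```spl
... your base search for the vcenter syslog data ...
| rex "^\S+\s+(?<dest>\S+)\s+(?<app>\w+)\[\d+\]\s+(?<action_raw>Accepted|Failed)\s+password for (?<user>\S+) from (?<src_ip>\S+) port \d+"
| eval action=if(action_raw="Accepted", "success", "failure")
```

The authentication tag would normally come from an eventtype plus tags.conf rather than from an eval in the search.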
The configurations that were made in the Service Availability module were deleted after the machine agent went down (URL monitoring was done from this machine agent). How can they be recovered now? There were almost 200 URLs configured on the server through the controller (not from a yml file).
Hi All, I am running the query below:
index=xyz sourcetype=abc | dedup _raw | timechart span=1m count
What I see is that the label on the X-axis is always in the format shown in the screenshot below:
[screenshot of the timechart]
We want the day of the month before the month (AU format), which would be Tue 19 Jan 2021. Despite using strftime and fieldformat, I am not able to change this label format. Can anybody please help me out?
@woodcock: Hi woodcock! I remember you responded to a query along similar lines some time ago, but I wasn't able to find that response now. I need your input, please! Please let me know in case of any queries. Thanks, AG.
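One common workaround, sketched below, is to render the bucketed time yourself after timechart; the chart then treats the X-axis as string labels rather than a time axis, which is usually acceptable for a fixed span:

```spl
index=xyz sourcetype=abc
| dedup _raw
| timechart span=1m count
| eval Time=strftime(_time, "%a %d %b %Y %H:%M")
| fields - _time
| table Time count
```

The trade-off is that zooming and automatic tick thinning no longer apply once the axis is categorical.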
I've checked this, but it hasn't solved the problem for me: https://community.splunk.com/t5/Getting-Data-In/Is-it-possible-to-run-a-curl-command-on-a-dbxquery/m-p/500081#M85221
This is my curl request:
curl -u username:password -k https://192.168.xx.xxx:xxxx/services/search/jobs -d search=" | dbxquery query=\"select (select sum(bytes) from dba_data_files)+(select sum(bytes) from dba_temp_files)-(select sum(bytes) from dba_free_space) total_size from dual\" connection=\"XXX\""
And I get an SID back:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<sid>1611013146.153172</sid>
</response>
However, when I try fetching the results, I get nothing back:
[user.name@host ~]$ curl -u username:password -k https://192.168.xx.xxx:xxxx/services/search/jobs/1611013146.153172/results/ --get -d output_mode=csv
[user.name@host ~]$
I've tried waiting a few minutes between fetch attempts; still nothing. The same query works fine and returns a result immediately when run from the DBX UI:
[screenshot of the DBX UI result]
Is there something I'm missing here in order to get the result via the REST API? Thanks.
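One likely cause is requesting results before the job has finished: POSTing to search/jobs returns the SID immediately, while the dbxquery search may still be dispatching. A hedged sketch of the usual sequence (host, credentials, and SID are the placeholders from the question):

```shell
# 1. Check the job status; wait until dispatchState is DONE
curl -u username:password -k \
  "https://192.168.xx.xxx:xxxx/services/search/jobs/1611013146.153172" \
  --get -d output_mode=json

# 2. Then fetch the results; count=0 asks for all rows
curl -u username:password -k \
  "https://192.168.xx.xxx:xxxx/services/search/jobs/1611013146.153172/results" \
  --get -d output_mode=csv -d count=0
```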
Hello, I want to know if there is a way to add a classification banner to the PDF that is exported from a Splunk dashboard. The requirement is to display a security access level at the top and bottom of each page of the exported PDF. I was able to display the security access level by setting the access-level message as the dashboard description; that way it is printed in the header and footer of each page of the PDF. But now I also need to put other information in the dashboard description, which makes it too big, and it gets cut off. I was wondering whether there are other ways to modify the exported PDFs besides editing alert_actions.conf? Is it possible to add any other custom text to the PDF's header and footer besides the description, title, timestamp, and pagination?
Another question: is it possible to have a default description that gets assigned to a dashboard when it is created? Thanks.
Hello, I am trying to install eStreamer eNcore for Splunk, version 4.0.9. During the setup process I cannot see the setup link in the Apps area (see picture); it is referred to by the videos and documents.
What I see: [screenshot]
What I should see per the instructions: [screenshot]
Thank you in advance. James
Greetings all, I'm in a situation where I would like to do "offline" Windows event log analysis, and I need to be able to ingest raw evtx files. Here is my setup:
Deployed a Windows Splunk instance on a single VM.
Installed and configured the Splunk Add-on for Microsoft Windows TA.
I'm ingesting the files I need with a "monitor" stanza in the Windows app's inputs.conf:
[monitor://C:\imported_data\evtx]
disabled = 0
sourcetype = preprocess-winevt
crcSalt = <SOURCE>
index = imported-evtx
Now, the logs are ingested and parsed, and it's already a start (I get proper sourcetypes and everything). However, they do not go through the Windows app's normalizing process; e.g. events don't get populated with the "EventID" field, user names are not parsed into SubjectUserName and TargetUserName fields, things like that.
Is there a way to make those imported logs properly handled by the TA? Note: if I try to ingest my local VM's logs with a [WinEventLog://Security] stanza, they are successfully normalized by the app.
Cheers, Erad
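If the preprocessed events come out in classic WinEventLog text format, one thing to try (a sketch, not verified against the TA) is aligning the sourcetype and source with what the add-on's props expect, since many of its stanzas key on source::WinEventLog:Security rather than on a custom sourcetype:

```ini
[monitor://C:\imported_data\evtx]
disabled = 0
# Hypothetical overrides: match the names the Windows TA keys on
sourcetype = WinEventLog
source = WinEventLog:Security
crcSalt = <SOURCE>
index = imported-evtx
```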
I have a pretty simple statistics panel that just lists my sources by a count of error logs over the dashboard's current time range. It has a drilldown applied, so whichever source you pick loads a search on all the errors from that source. Is it possible to add something to the source code so the text doesn't use the blue hyperlink color, and apply my own with a hex color code?
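In Simple XML, a hidden HTML panel with inline CSS is one way to restyle drilldown links. The selector below is an assumption and should be checked against your table's actual markup, and the panel id error_table is made up; it must match an id you set on your table:

```xml
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* Override the default blue drilldown link colour */
        #error_table table tbody td a { color: #d9534f !important; }
      </style>
    </html>
  </panel>
</row>
```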
Hello everyone, I have a problem with Cisco eStreamer logs.
Apparently there is some intermittency in the sending of logs. A couple of weeks ago the Cisco certificate was configured and the logs began to arrive; after a while they stopped. When we went to investigate, we restarted the indexer and the logs began to arrive again. We have had this problem for 3 weeks now; sometimes no logs are received until the indexer is restarted. Has anything similar ever happened to anyone, or do you know what may be happening? Thanks.
Hi All, I kindly request your help extracting fields from a database column. I'm working with the Splunk DB Connect app. Can anyone please provide a sample SQL query to extract subfields from the Status field? For example, I would need something like msg=login failed, plus host and ip fields, extracted from the unique database records below.
Sample database output with unique records from the Splunk DB Connect app:
Date  User  Input  Status
xxx   abc   123    login failed... host=xyz | ip=0.0.0.0 |
yyy   xyz   456    login successful
zzz   pqr   789    host=xyz | ip=0.0.0.0 |
Appreciate your help!
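This is often easier at search time in SPL than in the SQL itself. A hedged sketch on top of dbxquery (the table and connection names are placeholders, and the msg pattern assumes the message text precedes any host= pair):

```spl
| dbxquery query="select Date, User, Input, Status from your_table" connection="your_connection"
| rex field=Status "host=(?<host>[^\s|]+)"
| rex field=Status "ip=(?<ip>[^\s|]+)"
| rex field=Status "^(?<msg>[^|]+?)\s*(?:host=|$)"
```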
Hi All, I have a requirement. Below is my query:
index="abc" sourcetype="xyz" id="*-develop--system" (OrgFolderName="gcp") bugs="*" | table bugs _time | sort _time
bugs   _time
1110   2021-01-11 13:11:04
2301   2021-01-12 13:12:52
4556   2021-01-13 13:09:32
1009   2021-01-14 13:10:31
3214   2021-01-15 13:11:12
5005   2021-01-16 13:09:23
3009   2021-01-17 13:09:58
My requirement is to display this data in single-value format with a trend indicator. Suppose I select "Yesterday": it should show 3009 as the value. Now suppose I select "Last 7 days": it should show the average of the bugs as the single value, with a trend indicator comparing the first and last values. Can someone guide me on what changes I need to make to my query?
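One common pattern (a sketch; whether a daily average or latest value is the right aggregate is your call) is to feed a timechart into the Single Value visualization, which derives the trend indicator from the earlier and later buckets of the selected range on its own:

```spl
index="abc" sourcetype="xyz" id="*-develop--system" OrgFolderName="gcp" bugs="*"
| timechart span=1d avg(bugs) as bugs
```

With "Yesterday" selected this yields one bucket (3009 in the data above). Note that over "Last 7 days" the Single Value shows the most recent bucket rather than the 7-day average, so displaying the average itself would need an extra step, e.g. a stats avg over the timechart output.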
My search returns a table with a count of IP addresses that have hit our system in a given search period. I am trying to determine the earliest and most recent time for each IP address.
index=myIndex host=mySrvr sourcetype=mysource | stats count by s_ipad, r_ip_country | fields s_ipad, r_ip_country, min(_time), max(_time), count | search count>=15 | sort -count
The table returns the top IP addresses and their country of origin; however, min(_time) and max(_time) are empty. Any help would be appreciated. Thanks.
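The min/max likely come back empty because they are listed in a fields clause, which only selects existing columns, rather than computed in stats. A sketch of the intended query, computing everything in one stats call (output field names are my own):

```spl
index=myIndex host=mySrvr sourcetype=mysource
| stats count min(_time) as first_seen max(_time) as last_seen by s_ipad, r_ip_country
| where count>=15
| sort - count
| convert ctime(first_seen) ctime(last_seen)
```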
Hello, I want to be able to create/open ServiceNow tickets from Splunk. What are the steps I need to take? I am a beginner with this integration.
Hi all, I have an architecture with a search head cluster (3 members) and 2 indexers that are not in a cluster. What is the best way to turn the 2 indexers into an indexer cluster and then connect it to the search head cluster?
Thanks in advance.
Can we create a WCF service in .NET and call it from Splunk to get data from the WCF service? If yes, can you please let me know the steps?
Platform: Splunk Cloud
Lookup table: foo
Field in lookup table: user
I want to run a search on lookup "foo" by the "user" field, matching any value of the form "brady_*". Would someone assist me with the search query? Thank you in advance.
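A minimal sketch; inputlookup's where clause supports wildcards on lookup fields, and filtering with a follow-on search works too:

```spl
| inputlookup foo where user="brady_*"
```

Equivalently: | inputlookup foo | search user="brady_*"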
Hello there, is there any way to configure an alert for when the heavy forwarders are not sending logs in Splunk Cloud? Some sort of regime that would routinely check, as an automated process, that logs are being sent by the heavy forwarders?
Many thanks
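One common approach (a sketch; the host names and the one-hour threshold are assumptions) is to alert on forwarders whose internal logs have gone quiet, since every heavy forwarder ships its own _internal data to the indexers. Saved as an alert that triggers on any results:

```spl
| tstats latest(_time) as last_seen where index=_internal by host
| search host IN (hf1, hf2)
| where last_seen < relative_time(now(), "-1h@h")
| convert ctime(last_seen)
```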
Hello, within Enterprise Security I have this as the beginning of my correlation search:
| from inputlookup:access_tracker
I can't seem to find where the contents of this lookup table are. I've gone into Settings > Lookups and gone through "Lookup table files" and "Automatic Lookups", but could not find anything for access_tracker. Ideas?
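Two sketches that may help. In ES, access_tracker is typically a lookup definition (often KV store-backed), so it would appear under "Lookup definitions" rather than "Lookup table files". To view its contents directly:

```spl
| inputlookup access_tracker
```

And to locate the definition and the app it lives in (REST output fields may vary by version):

```spl
| rest /services/data/transforms/lookups splunk_server=local
| search title=access_tracker
| table title eai:acl.app collection filename
```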
Hi @MuS, sorry for the direct contact; I hope it's OK to ask you a question about "Add-on Debug Refresh". I have used it for years and it's brilliant. However, I have just moved to a cluster (Christmas 2020) with 1 SH, 1 MN, and 3 indexers, and although I have your app installed on the search head, I keep getting this when running it: "External search command 'refresh' returned error code 1." I have the permissions set to read and write for admin, and I am logged in as admin. Any ideas what I might need to do? I really love this tool. Robbie